ICLR
Title
Global Convergence of Three-layer Neural Networks in the Mean Field Regime
Abstract
In the mean field regime, neural networks are appropriately scaled so that as the width tends to infinity, the learning dynamics tends to a nonlinear and nontrivial dynamical limit, known as the mean field limit. This lends a way to study large-width neural networks via analyzing the mean field limit. Recent works have successfully applied such analysis to two-layer networks and provided global convergence guarantees. The extension to multilayer ones however has been a highly challenging puzzle, and little is known about the optimization efficiency in the mean field regime when there are more than two layers. In this work, we prove a global convergence result for unregularized feedforward three-layer networks in the mean field regime. We first develop a rigorous framework to establish the mean field limit of three-layer networks under stochastic gradient descent training. To that end, we propose the idea of a neuronal embedding, which comprises a fixed probability space that encapsulates neural networks of arbitrary sizes. The identified mean field limit is then used to prove a global convergence guarantee under suitable regularity and convergence mode assumptions, which – unlike previous works on two-layer networks – does not rely critically on convexity. Underlying the result is a universal approximation property, natural to neural networks, which importantly is shown to hold at any finite training time (not necessarily at convergence) via an algebraic topology argument.
1 INTRODUCTION
Interest in the theoretical understanding of the training of neural networks has led to the recent discovery of a new operating regime: the neural network and its learning rates are scaled appropriately, such that as the width tends to infinity, the network admits a limiting learning dynamics in which all parameters evolve nonlinearly with time1. This is known as the mean field (MF) limit (Mei et al. (2018); Chizat & Bach (2018); Rotskoff & Vanden-Eijnden (2018); Sirignano & Spiliopoulos (2018); Nguyen (2019); Araújo et al. (2019); Sirignano & Spiliopoulos (2019)). The four works Mei et al. (2018); Chizat & Bach (2018); Rotskoff & Vanden-Eijnden (2018); Sirignano & Spiliopoulos (2018) led the first wave of efforts in 2018 and analyzed two-layer neural networks. They established a connection between the network under training and its MF limit. They then used the MF limit to prove that two-layer networks could be trained to find (near) global optima using variants of gradient descent, despite non-convexity (Mei et al. (2018); Chizat & Bach (2018)). The MF limit identified by these works assumes the form of gradient flows in the measure space, which factors out the invariance from the action of a symmetry group on the model. Interestingly, by lifting to the measure space, with a convex loss function (e.g. squared loss), one obtains a limiting optimization problem that is convex (Bengio et al. (2006); Bach (2017)). The analyses of Mei et al. (2018); Chizat & Bach (2018) utilize convexity, although the mechanisms to attain global convergence in these works are more sophisticated than the usual convex optimization setup in Euclidean spaces.
∗ This paper is a conference submission. We refer to the work Nguyen & Pham (2020) and its companion note Pham & Nguyen (2020) for generalizations as well as other conditions for global convergence in the case of multilayer neural networks.
† Department of Mathematics, Stanford University. This work was done in parts while H. T. Pham was at the University of Cambridge.
‡ The Voleon Group. This work was done while P.-M. Nguyen was at Stanford University.
§ The author ordering is randomized.
1 This is to be contrasted with another major operating regime (the NTK regime) where parameters essentially do not evolve and the model behaves like a kernel method (Jacot et al. (2018); Chizat et al. (2019); Du et al. (2019); Allen-Zhu et al. (2019); Zou et al. (2018); Lee et al. (2019)).
The extension to multilayer networks has enjoyed much less progress. The works Nguyen (2019); Araújo et al. (2019); Sirignano & Spiliopoulos (2019) argued, heuristically or rigorously, for the existence of a MF limiting behavior under gradient descent training with different assumptions. In fact, it has been argued that the difficulty is not simply technical, but rather conceptual (Nguyen (2019)): for instance, the presence of intermediate layers exhibits multiple symmetry groups with intertwined actions on the model. Convergence to the global optimum of the model under gradient-based optimization has not been established when there are more than two layers.
In this work, we prove a global convergence guarantee for feedforward three-layer networks trained with unregularized stochastic gradient descent (SGD) in the MF regime. After an introduction of the three-layer setup and its MF limit in Section 2, our development proceeds in two main steps:
Step 1 (Theorem 3 in Section 3): We first develop a rigorous framework that describes the MF limit and establishes its connection with a large-width SGD-trained three-layer network. Here we propose the new idea of a neuronal embedding, which comprises an appropriate non-evolving probability space that encapsulates neural networks of arbitrary sizes. This probability space is in general abstract and is constructed according to the (not necessarily i.i.d.) initialization scheme of the neural network. This idea addresses directly the intertwined action of multiple symmetry groups, which is the aforementioned conceptual obstacle (Nguyen (2019)), thereby covering setups that cannot be handled by formulations in Araújo et al. (2019); Sirignano & Spiliopoulos (2019) (see also Section 5 for a comparison). Our analysis follows the technique from Sznitman (1991); Mei et al. (2018) and gives a quantitative statement: in particular, the MF limit yields a good approximation of the neural network as long as $n_{\min}^{-1}\log n_{\max} \ll 1$, independent of the data dimension, where $n_{\min}$ and $n_{\max}$ are the minimum and maximum of the widths.
Step 2 (Theorem 8 in Section 4): We prove that the MF limit, given by our framework, converges to the global optimum under suitable regularity and convergence mode assumptions. Several elements of our proof are inspired by Chizat & Bach (2018); the technique in their work however does not generalize to our three-layer setup. Unlike previous two-layer analyses, we do not exploit convexity; instead we make use of a new element: a universal approximation property. The result turns out to be conceptually new: global convergence can be achieved even when the loss function is non-convex. An important crux of the proof is to show that the universal approximation property holds at any finite training time (but not necessarily at convergence, i.e. at infinite time, since the property may not realistically hold at convergence).
Together these two results imply a positive statement on the optimization efficiency of SGD-trained unregularized feedforward three-layer networks (Corollary 10). Our results can be extended to the general multilayer case – with new ideas on top and significantly more technical works – or used to obtain new global convergence guarantees in the two-layer case (Nguyen & Pham (2020); Pham & Nguyen (2020)). We choose to keep the current paper concise with the three-layer case being a prototypical setup that conveys several of the basic ideas. Complete proofs are presented in appendices.
Notations. K denotes a generic constant that may change from line to line. |·| denotes the absolute value for a scalar and the Euclidean norm for a vector. For an integer n, we let [n] = {1, ..., n}.
2 THREE-LAYER NEURAL NETWORKS AND THE MEAN FIELD LIMIT
2.1 THREE-LAYER NEURAL NETWORK
We consider the following three-layer network at time k ∈ N≥0 that takes as input x ∈ Rd:
$$\hat{y}(x; \mathbf{W}(k)) = \varphi_3\big(H_3(x; \mathbf{W}(k))\big), \tag{1}$$
$$H_3(x; \mathbf{W}(k)) = \frac{1}{n_2}\sum_{j_2=1}^{n_2} \mathbf{w}_3(k, j_2)\,\varphi_2\big(H_2(x, j_2; \mathbf{W}(k))\big),$$
$$H_2(x, j_2; \mathbf{W}(k)) = \frac{1}{n_1}\sum_{j_1=1}^{n_1} \mathbf{w}_2(k, j_1, j_2)\,\varphi_1\big(\langle \mathbf{w}_1(k, j_1), x\rangle\big),$$
in which W (k) = (w1 (k, ·) ,w2 (k, ·, ·) ,w3 (k, ·)) consists of the weights2 w1 (k, j1) ∈ Rd, w2 (k, j1, j2) ∈ R and w3 (k, j2) ∈ R. Here ϕ1 : R → R, ϕ2 : R → R and ϕ3 : R → R are the activation functions, and the network has widths {n1, n2}. We train the network with SGD w.r.t. the loss L : R × R → R≥0. We assume that at each time k, we draw independently a fresh sample z (k) = (x (k) , y (k)) ∈ Rd ×R from a training distribution P . Given an initialization W (0), we update W (k) according to
$$\mathbf{w}_3(k+1, j_2) = \mathbf{w}_3(k, j_2) - \epsilon\,\xi_3(k\epsilon)\,\mathrm{Grad}_3(z(k), j_2; \mathbf{W}(k)),$$
$$\mathbf{w}_2(k+1, j_1, j_2) = \mathbf{w}_2(k, j_1, j_2) - \epsilon\,\xi_2(k\epsilon)\,\mathrm{Grad}_2(z(k), j_1, j_2; \mathbf{W}(k)),$$
$$\mathbf{w}_1(k+1, j_1) = \mathbf{w}_1(k, j_1) - \epsilon\,\xi_1(k\epsilon)\,\mathrm{Grad}_1(z(k), j_1; \mathbf{W}(k)),$$
in which $j_1 = 1, ..., n_1$, $j_2 = 1, ..., n_2$, $\epsilon \in \mathbb{R}_{>0}$ is the learning rate, $\xi_i: \mathbb{R}_{\ge 0} \to \mathbb{R}_{\ge 0}$ is the learning rate schedule for $\mathbf{w}_i$, and for $z = (x, y)$, we define
$$\mathrm{Grad}_3(z, j_2; \mathbf{W}(k)) = \partial_2 L\big(y, \hat{y}(x;\mathbf{W}(k))\big)\,\varphi_3'\big(H_3(x;\mathbf{W}(k))\big)\,\varphi_2\big(H_2(x, j_2;\mathbf{W}(k))\big),$$
$$\mathrm{Grad}_2(z, j_1, j_2; \mathbf{W}(k)) = \Delta_2^H(z, j_2;\mathbf{W}(k))\,\varphi_1\big(\langle \mathbf{w}_1(k, j_1), x\rangle\big),$$
$$\mathrm{Grad}_1(z, j_1; \mathbf{W}(k)) = \Big(\frac{1}{n_2}\sum_{j_2=1}^{n_2}\Delta_2^H(z, j_2;\mathbf{W}(k))\,\mathbf{w}_2(k, j_1, j_2)\Big)\,\varphi_1'\big(\langle \mathbf{w}_1(k, j_1), x\rangle\big)\,x,$$
$$\Delta_2^H(z, j_2;\mathbf{W}(k)) = \partial_2 L\big(y, \hat{y}(x;\mathbf{W}(k))\big)\,\varphi_3'\big(H_3(x;\mathbf{W}(k))\big)\,\mathbf{w}_3(k, j_2)\,\varphi_2'\big(H_2(x, j_2;\mathbf{W}(k))\big).$$
We note that this setup follows the same scaling w.r.t. n1 and n2, which is applied to both the forward pass and the learning rates in the backward pass, as Nguyen (2019).
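To make the scaling concrete, the following is a minimal sketch (our own illustration, not the authors' code) of the forward pass (1) and one SGD step with the gradients above; the choice ϕ1 = ϕ2 = tanh, ϕ3 = identity and the squared loss are placeholder assumptions.

```python
import numpy as np

# Placeholder activations and loss derivative (assumptions for illustration only).
phi1, dphi1 = np.tanh, lambda h: 1 - np.tanh(h) ** 2
phi2, dphi2 = np.tanh, lambda h: 1 - np.tanh(h) ** 2
phi3, dphi3 = (lambda h: h), (lambda h: np.ones_like(h))
dL = lambda y, yhat: yhat - y  # derivative of 0.5*(y - yhat)^2 in the second argument

def forward(x, W):
    w1, w2, w3 = W                   # shapes: (n1, d), (n1, n2), (n2,)
    h1 = phi1(w1 @ x)                # (n1,)
    H2 = (w2.T @ h1) / len(h1)       # (n2,), mean-field 1/n1 scaling
    H3 = np.mean(w3 * phi2(H2))      # scalar, mean-field 1/n2 scaling
    return h1, H2, H3, phi3(H3)

def sgd_step(x, y, W, eps, xi=(1.0, 1.0, 1.0)):
    w1, w2, w3 = W
    h1, H2, H3, yhat = forward(x, W)
    dH2 = dL(y, yhat) * dphi3(H3) * w3 * dphi2(H2)               # Delta_2^H, shape (n2,)
    grad3 = dL(y, yhat) * dphi3(H3) * phi2(H2)                   # (n2,)
    grad2 = np.outer(h1, dH2)                                    # (n1, n2)
    grad1 = np.outer((w2 @ dH2) / len(dH2) * dphi1(w1 @ x), x)   # (n1, d), 1/n2 scaling
    return (w1 - eps * xi[0] * grad1,
            w2 - eps * xi[1] * grad2,
            w3 - eps * xi[2] * grad3)
```

Note that the 1/n1 and 1/n2 factors appear both in the forward pass and, through ∆H2 and Grad1, in the backward pass, which is exactly the scaling referred to above.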
2.2 MEAN FIELD LIMIT
The MF limit is a continuous-time infinite-width analog of the neural network under training. To describe it, we first introduce the concept of a neuronal ensemble. Given a product probability space (Ω,F , P ) = (Ω1 × Ω2,F1 ×F2, P1 × P2), we independently sample Ci ∼ Pi, i = 1, 2. In the following, we use ECi to denote the expectation w.r.t. the random variable Ci ∼ Pi and ci to denote an arbitrary point ci ∈ Ωi. The space (Ω,F , P ) is referred to as a neuronal ensemble. Given a neuronal ensemble (Ω,F , P ), the MF limit is described by a time-evolving system with state/parameter W (t), where the time t ∈ R≥0 and W (t) = (w1 (t, ·) , w2 (t, ·, ·) , w3 (t, ·)) with w1 : R≥0 × Ω1 → Rd, w2 : R≥0 × Ω1 × Ω2 → R and w3 : R≥0 × Ω2 → R. It entails the quantities:
$$\hat{y}(x; W(t)) = \varphi_3\big(H_3(x; W(t))\big),$$
$$H_3(x; W(t)) = \mathbb{E}_{C_2}\big[w_3(t, C_2)\,\varphi_2\big(H_2(x, C_2; W(t))\big)\big], \qquad H_2(x, c_2; W(t)) = \mathbb{E}_{C_1}\big[w_2(t, C_1, c_2)\,\varphi_1\big(\langle w_1(t, C_1), x\rangle\big)\big].$$
Here for each t ∈ R≥0, w1 (t, ·) is (Ω1,F1)-measurable, and similar for w2 (t, ·, ·), w3 (t, ·). The MF limit evolves according to a continuous-time dynamics, described by a system of ODEs, which we refer to as the MF ODEs. Specifically, given an initialization W (0) = (w1 (0, ·) , w2 (0, ·, ·) , w3 (0, ·)), the dynamics solves:
$$\partial_t w_3(t, c_2) = -\xi_3(t)\,\Delta_3(c_2; W(t)), \qquad \partial_t w_2(t, c_1, c_2) = -\xi_2(t)\,\Delta_2(c_1, c_2; W(t)), \qquad \partial_t w_1(t, c_1) = -\xi_1(t)\,\Delta_1(c_1; W(t)).$$
Here c1 ∈ Ω1, c2 ∈ Ω2, EZ denotes the expectation w.r.t. the data Z = (X,Y ) ∼ P , and for z = (x, y), we define
$$\Delta_3(c_2; W(t)) = \mathbb{E}_Z\big[\partial_2 L\big(Y, \hat{y}(X; W(t))\big)\,\varphi_3'\big(H_3(X; W(t))\big)\,\varphi_2\big(H_2(X, c_2; W(t))\big)\big],$$
$$\Delta_2(c_1, c_2; W(t)) = \mathbb{E}_Z\big[\Delta_2^H(Z, c_2; W(t))\,\varphi_1\big(\langle w_1(t, c_1), X\rangle\big)\big],$$
$$\Delta_1(c_1; W(t)) = \mathbb{E}_Z\big[\mathbb{E}_{C_2}\big[\Delta_2^H(Z, C_2; W(t))\,w_2(t, c_1, C_2)\big]\,\varphi_1'\big(\langle w_1(t, c_1), X\rangle\big)\,X\big],$$
$$\Delta_2^H(z, c_2; W(t)) = \partial_2 L\big(y, \hat{y}(x; W(t))\big)\,\varphi_3'\big(H_3(x; W(t))\big)\,w_3(t, c_2)\,\varphi_2'\big(H_2(x, c_2; W(t))\big).$$
2 To absorb the first layer's bias term into $\mathbf{w}_1$, we assume the input $x$ to have 1 appended as the last entry.
In Appendix B, we show well-posedness of MF ODEs under the following regularity conditions.
Assumption 1 (Regularity). We assume that ϕ1 and ϕ2 are K-bounded; ϕ′1, ϕ′2 and ϕ′3 are K-bounded and K-Lipschitz; ϕ′2 and ϕ′3 are non-zero everywhere; ∂2L (·, ·) is K-Lipschitz in the second variable and K-bounded; and |X| ≤ K with probability 1. Furthermore ξ1, ξ2 and ξ3 are K-bounded and K-Lipschitz.
Theorem 1. Under Assumption 1, given any neuronal ensemble and an initialization W (0) such that3 ess-sup |w2 (0, C1, C2)| , ess-sup |w3 (0, C2)| ≤ K, there exists a unique solution W to the MF ODEs on t ∈ [0,∞).
An example of a suitable setup is ϕ1 = ϕ2 = tanh, ϕ3 is the identity, L is the Huber loss, although a non-convex sufficiently smooth loss function suffices. In fact, all of our developments can be easily modified to treat the squared loss with an additional assumption |Y | ≤ K with probability 1. So far, given an arbitrary neuronal ensemble (Ω,F , P ), for each initialization W (0), we have defined a MF limit W (t). The connection with the neural network’s dynamics W (k) is established in the next section.
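As a rough illustration (not from the paper), the MF prediction ŷ(x; W(t)) can be approximated numerically by replacing the expectations E_{C1}, E_{C2} over the neuronal ensemble with Monte Carlo averages over finitely many sampled "neurons"; the tanh/identity activations below are placeholder assumptions.

```python
import numpy as np

def mf_predict(x, w1_samples, w2_samples, w3_samples):
    # w1_samples: (M1, d) draws of w1(t, C1); w2_samples: (M1, M2); w3_samples: (M2,)
    h1 = np.tanh(w1_samples @ x)             # phi1(<w1(t, C1), x>) at the M1 samples
    H2 = w2_samples.T @ h1 / len(h1)         # Monte Carlo estimate of E_{C1}[...]
    H3 = np.mean(w3_samples * np.tanh(H2))   # Monte Carlo estimate of E_{C2}[...]
    return H3                                # phi3 = identity, so yhat = H3
```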
3 CONNECTION BETWEEN NEURAL NETWORK AND ITS MEAN FIELD LIMIT
3.1 NEURONAL EMBEDDING AND THE COUPLING PROCEDURE
To formalize a connection between the neural network and its MF limit, we consider their initializations. In practical scenarios, to set the initial parameters W (0) of the neural network, one typically randomizes W (0) according to some distributional law ρ. We note that since the neural network is defined w.r.t. a set of finite integers {n1, n2}, so is ρ. We consider a family Init of initialization laws, each of which is indexed by the set {n1, n2}:
Init = {ρ : ρ is the initialization law of a neural network of size {n1, n2} , n1, n2 ∈ N>0}.
This is helpful when one is to take a limit that sends $n_1, n_2 \to \infty$, in which case the size of this family |Init| is infinite. More generally we allow |Init| < ∞ (for example, Init contains a single law ρ of a network of size {n1, n2} and hence |Init| = 1). We make the following crucial definition.
Definition 2. Given a family of initialization laws Init, we call $(\Omega, \mathcal{F}, P, \{w^0_i\}_{i=1,2,3})$ a neuronal embedding of Init if the following holds:
1. $(\Omega, \mathcal{F}, P) = (\Omega_1\times\Omega_2, \mathcal{F}_1\times\mathcal{F}_2, P_1\times P_2)$ is a product measurable space. As a reminder, we call it a neuronal ensemble.
2. The deterministic functions $w^0_1: \Omega_1 \to \mathbb{R}^d$, $w^0_2: \Omega_1\times\Omega_2\to\mathbb{R}$ and $w^0_3: \Omega_2\to\mathbb{R}$ are such that, for each index {n1, n2} of Init and the law ρ of this index, if — with an abuse of notations — we independently sample $\{C_i(j_i)\}_{j_i\in[n_i]} \sim P_i$ i.i.d. for each i = 1, 2, then
$$\mathrm{Law}\big(w^0_1(C_1(j_1)),\; w^0_2(C_1(j_1), C_2(j_2)),\; w^0_3(C_2(j_2)):\; j_i\in[n_i],\, i=1,2\big) = \rho.$$
To proceed, given Init and {n1, n2} in its index set, we perform the following coupling procedure:
1. Let $(\Omega, \mathcal{F}, P, \{w^0_i\}_{i=1,2,3})$ be a neuronal embedding of Init.
2. We form the MF limit $W(t)$ (for $t \in \mathbb{R}_{\ge 0}$) associated with the neuronal ensemble $(\Omega, \mathcal{F}, P)$ by setting the initialization $W(0)$ to $w_1(0,\cdot) = w^0_1(\cdot)$, $w_2(0,\cdot,\cdot) = w^0_2(\cdot,\cdot)$ and $w_3(0,\cdot) = w^0_3(\cdot)$ and running the MF ODEs described in Section 2.2.
3We recall the definition of ess-sup in Appendix A.
3. We independently sample $C_i(j_i) \sim P_i$ for $i = 1, 2$ and $j_i = 1, ..., n_i$. We then form the neural network initialization $\mathbf{W}(0)$ with $\mathbf{w}_1(0, j_1) = w^0_1(C_1(j_1))$, $\mathbf{w}_2(0, j_1, j_2) = w^0_2(C_1(j_1), C_2(j_2))$ and $\mathbf{w}_3(0, j_2) = w^0_3(C_2(j_2))$ for $j_1 \in [n_1]$, $j_2 \in [n_2]$. We obtain
the network’s trajectory W (k) for k ∈ N≥0 as in Section 2.1, with the data z (k) generated independently of {Ci (ji)}i=1,2 and hence W (0).
We can then define a measure of closeness between $\mathbf{W}(\lfloor t/\epsilon\rfloor)$ and $W(t)$ for $t\in[0,T]$:
$$D_T(\mathbf{W}, W) = \sup\big\{ |\mathbf{w}_1(\lfloor t/\epsilon\rfloor, j_1) - w_1(t, C_1(j_1))|,\; |\mathbf{w}_2(\lfloor t/\epsilon\rfloor, j_1, j_2) - w_2(t, C_1(j_1), C_2(j_2))|,\; |\mathbf{w}_3(\lfloor t/\epsilon\rfloor, j_2) - w_3(t, C_2(j_2))| :\; t\le T,\, j_1\le n_1,\, j_2\le n_2 \big\}. \tag{2}$$
Note that W (t) is a deterministic trajectory independent of {n1, n2}, whereas W (k) is random for all k ∈ N≥0 due to the randomness of {Ci (ji)}i=1,2 and the generation of the training data z (k). Similarly DT (W,W) is a random quantity.
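For concreteness, here is a small sketch (hypothetical helper names, not from the paper) of evaluating the closeness measure $D_T$ in Eq. (2) from stored trajectories, where `num_steps` would be $\lfloor T/\epsilon\rfloor$.

```python
import numpy as np

def D_T(net, mf, num_steps):
    # net['w1'][k]: (n1, d) SGD iterate at step k; mf['w1'][k][j1] = w1(k*eps, C1(j1)), etc.,
    # i.e. the MF solution evaluated at the coupled samples C_i(j_i).
    gaps = []
    for k in range(num_steps + 1):
        gaps.append(np.linalg.norm(net['w1'][k] - mf['w1'][k], axis=1).max())  # Euclidean norm per j1
        gaps.append(np.abs(net['w2'][k] - mf['w2'][k]).max())
        gaps.append(np.abs(net['w3'][k] - mf['w3'][k]).max())
    return max(gaps)
```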
The idea of the coupling procedure is closely related to the coupling argument in Sznitman (1991); Mei et al. (2018). Here, instead of playing the role of a proof technique, the coupling serves as a vehicle to establish the connection between the network's trajectory $\mathbf{W}$ and the MF limit $W$ on the basis of the neuronal embedding. This connection is shown in Theorem 3 below, which gives an upper bound on $D_T(\mathbf{W}, W)$.
We note that the coupling procedure can be carried out to provide a connection between W and W as long as there exists a neuronal embedding for Init. Later in Section 4.1, we show that for a common initialization scheme (in particular, i.i.d. initialization) for Init, there exists a neuronal embedding. Theorem 3 applies to, but is not restricted to, this initialization scheme.
3.2 MAIN RESULT: APPROXIMATION BY THE MF LIMIT
Assumption 2 (Initialization of second and third layers). We assume that $\mathrm{ess\text{-}sup}\,|w^0_2(C_1, C_2)| \le K$ and $\mathrm{ess\text{-}sup}\,|w^0_3(C_2)| \le K$, where $w^0_2$ and $w^0_3$ are as described in Definition 2.
Theorem 3. Given a family Init of initialization laws and a tuple {n1, n2} that is in the index set of Init, perform the coupling procedure as described in Section 3.1. Fix a terminal time $T \in \mathbb{N}_{\ge 0}$. Under Assumptions 1 and 2, for any $\delta > 0$ and $\epsilon \le 1$, we have with probability at least $1 - 2\delta$,
$$D_T(\mathbf{W}, W) \le e^{K_T}\Big(\frac{1}{\sqrt{n_{\min}}} + \sqrt{\epsilon}\Big)\,\log^{1/2}\Big(\frac{3(T+1)\,n_{\max}^2}{\delta} + e\Big) \equiv \mathrm{err}_{\delta,T}(\epsilon, n_1, n_2),$$
in which $n_{\min} = \min\{n_1, n_2\}$, $n_{\max} = \max\{n_1, n_2\}$, and $K_T = K(1 + T^K)$.
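Purely as an illustration of how the bound scales, one can plug placeholder values for the unspecified constants $K$, $K_T$ into $\mathrm{err}_{\delta,T}(\epsilon, n_1, n_2)$; the numbers produced below carry no quantitative meaning beyond the $1/\sqrt{n_{\min}} + \sqrt{\epsilon}$ dependence.

```python
import numpy as np

def err_bound(eps, n1, n2, T, delta, K=1.0):
    # K is a placeholder for the unspecified constant in Theorem 3 (assumption).
    n_min, n_max = min(n1, n2), max(n1, n2)
    K_T = K * (1.0 + T ** K)
    return np.exp(K_T) * (1.0 / np.sqrt(n_min) + np.sqrt(eps)) * \
        np.sqrt(np.log(3 * (T + 1) * n_max ** 2 / delta + np.e))

for n in (10**2, 10**4, 10**6):
    print(n, err_bound(eps=1e-3, n1=n, n2=n, T=1, delta=0.05))
```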
The theorem gives a connection between W (bt/ c), which is defined upon finite widths n1 and n2, and the MF limit W (t), whose description is independent of n1 and n2. It lends a way to extract properties of the neural network in the large-width regime. Corollary 4. Under the same setting as Theorem 3, consider any test function ψ : R × R → R which is K-Lipschitz in the second variable uniformly in the first variable (an example of ψ is the loss L). For any δ > 0, with probability at least 1− 3δ,
$$\sup_{t\le T}\,\Big|\mathbb{E}_Z\big[\psi\big(Y, \hat{y}(X; \mathbf{W}(\lfloor t/\epsilon\rfloor))\big)\big] - \mathbb{E}_Z\big[\psi\big(Y, \hat{y}(X; W(t))\big)\big]\Big| \le e^{K_T}\,\mathrm{err}_{\delta,T}(\epsilon, n_1, n_2).$$
These bounds hold for any n1 and n2, similar to Mei et al. (2018); Araújo et al. (2019), in contrast with non-quantitative results in Chizat & Bach (2018); Sirignano & Spiliopoulos (2019). These bounds suggest that n1 and n2 can be chosen independent of the data dimension d. This agrees with the experiments in Nguyen (2019), which found width ≈ 1000 to be typically sufficient to observe MF behaviors in networks trained with real-life high-dimensional data.
We observe that the MF trajectory $W(t)$ is defined as per the choice of the neuronal embedding $(\Omega, \mathcal{F}, P, \{w^0_i\}_{i=1,2,3})$, which may not be unique. On the other hand, the neural network's trajectory $\mathbf{W}(k)$ depends on the randomization of the initial parameters $\mathbf{W}(0)$ according to an initialization law from the family Init (as well as the data $z(k)$) and hence is independent of this choice. Another corollary of Theorem 3 is that given the same family Init, the law of the MF trajectory is insensitive to the choice of the neuronal embedding of Init.
Corollary 5. Consider a family Init of initialization laws, indexed by a set of tuples {m1, m2} that contains a sequence of indices $\{m_1(m), m_2(m): m\in\mathbb{N}\}$ such that, as $m\to\infty$, $\min\{m_1(m), m_2(m)\}^{-1}\log(\max\{m_1(m), m_2(m)\}) \to 0$. Let $W(t)$ and $\hat{W}(t)$ be two MF trajectories associated with two choices of neuronal embeddings of Init, $(\Omega, \mathcal{F}, P, \{w^0_i\}_{i=1,2,3})$ and $(\hat\Omega, \hat{\mathcal{F}}, \hat{P}, \{\hat{w}^0_i\}_{i=1,2,3})$. The following statement holds for any $T \ge 0$ and any two positive integers $n_1$ and $n_2$: if we independently sample $C_i(j_i)\sim P_i$ and $\hat{C}_i(j_i)\sim\hat{P}_i$ for $j_i\in[n_i]$, $i = 1, 2$, then $\mathrm{Law}(\mathcal{W}(n_1, n_2, T)) = \mathrm{Law}(\hat{\mathcal{W}}(n_1, n_2, T))$, where we define $\mathcal{W}(n_1, n_2, T)$ as the below collection w.r.t. $W(t)$, and similarly define $\hat{\mathcal{W}}(n_1, n_2, T)$ w.r.t. $\hat{W}(t)$:
$$\mathcal{W}(n_1, n_2, T) = \big\{ w_1(t, C_1(j_1)),\; w_2(t, C_1(j_1), C_2(j_2)),\; w_3(t, C_2(j_2)) :\; j_1\in[n_1],\, j_2\in[n_2],\, t\in[0,T] \big\}.$$
The proofs are deferred to Appendix C.
4 CONVERGENCE TO GLOBAL OPTIMA
In this section, we prove a global convergence guarantee for three-layer neural networks via the MF limit. We consider a common class of initialization: i.i.d. initialization.
4.1 I.I.D. INITIALIZATION
Definition 6. An initialization law ρ for a neural network of size {n1, n2} is called a $(\rho^1, \rho^2, \rho^3)$-i.i.d. initialization (or i.i.d. initialization, for brevity), where $\rho^1$, $\rho^2$ and $\rho^3$ are probability measures over $\mathbb{R}^d$, $\mathbb{R}$ and $\mathbb{R}$ respectively, if $\{\mathbf{w}_1(0, j_1)\}_{j_1\in[n_1]}$ are generated i.i.d. according to $\rho^1$, $\{\mathbf{w}_2(0, j_1, j_2)\}_{j_1\in[n_1],\, j_2\in[n_2]}$ are generated i.i.d. according to $\rho^2$, $\{\mathbf{w}_3(0, j_2)\}_{j_2\in[n_2]}$ are generated i.i.d. according to $\rho^3$, and $\mathbf{w}_1$, $\mathbf{w}_2$ and $\mathbf{w}_3$ are independent.
Observe that given $(\rho^1, \rho^2, \rho^3)$, one can build a family Init of i.i.d. initialization laws that contains any index set {n1, n2}. Furthermore i.i.d. initializations are supported by our framework, as stated in the following proposition and proven in Appendix D.
Proposition 7. There exists a neuronal embedding $(\Omega, \mathcal{F}, P, \{w^0_i\}_{i=1,2,3})$ for any family Init of initialization laws which are $(\rho^1, \rho^2, \rho^3)$-i.i.d.
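A minimal sketch of drawing a $(\rho^1, \rho^2, \rho^3)$-i.i.d. initialization in the sense of Definition 6 follows; the specific measures (a standard Gaussian for $\rho^1$, so that $\mathrm{supp}(\rho^1) = \mathbb{R}^d$ as in Assumption 3.1 below, and bounded uniforms for $\rho^2$, $\rho^3$ so that Assumption 2 holds) are placeholder assumptions.

```python
import numpy as np

def iid_init(n1, n2, d, rng=np.random.default_rng(0)):
    w1 = rng.normal(size=(n1, d))            # {w1(0, j1)} i.i.d. ~ rho^1 (full support on R^d)
    w2 = rng.uniform(-1, 1, size=(n1, n2))   # {w2(0, j1, j2)} i.i.d. ~ rho^2 (bounded)
    w3 = rng.uniform(-1, 1, size=(n2,))      # {w3(0, j2)} i.i.d. ~ rho^3 (bounded)
    return w1, w2, w3
```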
4.2 MAIN RESULT: GLOBAL CONVERGENCE
To measure the learning quality, we consider the loss averaged over the data Z ∼ P:
$$\mathcal{L}(V) = \mathbb{E}_Z\big[L\big(Y, \hat{y}(X; V)\big)\big],$$
where V = (v1, v2, v3) is a set of three measurable functions v1 : Ω1 → Rd, v2 : Ω1 × Ω2 → R, v3 : Ω2 → R.
Assumption 3. Consider a neuronal embedding $(\Omega, \mathcal{F}, P, \{w^0_i\}_{i=1,2,3})$ of the $(\rho^1, \rho^2, \rho^3)$-i.i.d. initialization, and the associated MF limit with initialization $W(0)$ such that $w_1(0,\cdot) = w^0_1(\cdot)$, $w_2(0,\cdot,\cdot) = w^0_2(\cdot,\cdot)$ and $w_3(0,\cdot) = w^0_3(\cdot)$. Assume:
1. Support: The support of $\rho^1$ is $\mathbb{R}^d$.
2. Convergence mode: There exist limits w̄1, w̄2 and w̄3 such that as t→∞,
$$\mathbb{E}\big[(1 + |\bar{w}_3(C_2)|)\,|\bar{w}_3(C_2)|\,|\bar{w}_2(C_1, C_2)|\,|w_1(t, C_1) - \bar{w}_1(C_1)|\big] \to 0, \tag{3}$$
$$\mathbb{E}\big[(1 + |\bar{w}_3(C_2)|)\,|\bar{w}_3(C_2)|\,|w_2(t, C_1, C_2) - \bar{w}_2(C_1, C_2)|\big] \to 0, \tag{4}$$
$$\mathbb{E}\big[(1 + |\bar{w}_3(C_2)|)\,|w_3(t, C_2) - \bar{w}_3(C_2)|\big] \to 0, \tag{5}$$
$$\mathrm{ess\text{-}sup}\,\mathbb{E}_{C_2}\big[|\partial_t w_2(t, C_1, C_2)|\big] \to 0. \tag{6}$$
3. Universal approximation: $\{\varphi_1(\langle u,\cdot\rangle): u\in\mathbb{R}^d\}$ has dense span in $L^2(\mathcal{P}_X)$ (the space of square-integrable functions w.r.t. $\mathcal{P}_X$, the distribution of the input $X$).
Assumption 3 is inspired by the work Chizat & Bach (2018) on two-layer networks, with certain differences. Assumptions 3.1 and 3.3 are natural in neural network learning (Cybenko (1989); Chen & Chen (1995)), while we note Chizat & Bach (2018) does not utilize universal approximation. Similar to Chizat & Bach (2018), Assumption 3.2 is technical and does not seem removable. Note that this assumption specifies the mode of convergence and is not an assumption on the limits w̄1, w̄2 and w̄3. Specifically conditions (3)-(5) are similar to the convergence assumption in Chizat & Bach (2018). We differ from Chizat & Bach (2018) fundamentally in the essential supremum condition (6). On one hand, this condition helps avoid the Morse-Sard type condition in Chizat & Bach (2018), which is difficult to verify in general and not simple to generalize to the three-layer case. On the other hand, it turns out to be a natural assumption to make, in light of Remark 9 below.
We now state the main result of the section. The proof is in Appendix D. Theorem 8. Consider a neuronal embedding $(\Omega, \mathcal{F}, P, \{w^0_i\}_{i=1,2,3})$ of the $(\rho^1, \rho^2, \rho^3)$-i.i.d. initialization. Consider the MF limit corresponding to the network (1), such that they are coupled together by the coupling procedure in Section 3.1, under Assumptions 1, 2 and 3. For simplicity, assume $\xi_1(\cdot) = \xi_2(\cdot) = 1$. Further assume either:
• (untrained third layer) $\xi_3(\cdot) = 0$ and $w^0_3(C_2) \ne 0$ with a positive probability, or
• (trained third layer) $\xi_3(\cdot) = 1$ and $\mathcal{L}(w^0_1, w^0_2, w^0_3) < \mathbb{E}_Z[L(Y, \varphi_3(0))]$.
Then the following hold:
• Case 1 (convex loss): If L is convex in the second variable, then
$$\lim_{t\to\infty} \mathcal{L}(W(t)) = \inf_V \mathcal{L}(V) = \inf_{\tilde{y}:\,\mathbb{R}^d\to\mathbb{R}} \mathbb{E}_Z\big[L(Y, \tilde{y}(X))\big].$$
• Case 2 (generic non-negative loss): Suppose that $\partial_2 L(y, \hat{y}) = 0$ implies $L(y, \hat{y}) = 0$. If $y = y(x)$ is a function of $x$, then $\mathcal{L}(W(t)) \to 0$ as $t \to \infty$.
Remarkably here the theorem allows for non-convex losses. A further inspection of the proof shows that no convexity-based property is used in Case 2 (see, for instance, the high-level proof sketch in Section 4.3); in Case 1, the key steps in the proof are the same, and the convexity of the loss function serves as a convenient technical assumption to handle the arbitrary extra randomness of Y conditional on X . We also remark that the same proof of global convergence should extend beyond the specific fully-connected architecture considered here. Similar to previous results on SGD-trained two-layer networks Mei et al. (2018); Chizat & Bach (2018), our current result in the three-layer case is non-quantitative. Remark 9. Interestingly there is a converse relation between global convergence and the essential supremum condition (6): under the same setting, global convergence is unattainable if condition (6) does not hold. A similar observation was made in Wojtowytsch (2020) for two-layer ReLU networks. A precise statement and its proof can be found in Appendix E.
The following result is straightforward from Theorem 8 and Corollary 4, establishing the optimization efficiency of the neural network with SGD. Corollary 10. Consider the neural network (1). Under the same setting as Theorem 8, in Case 1,
$$\lim_{t\to\infty}\,\lim_{n_1, n_2\to\infty}\,\lim_{\epsilon\to 0}\; \mathbb{E}_Z\big[L\big(Y, \hat{y}(X; \mathbf{W}(\lfloor t/\epsilon\rfloor))\big)\big] = \inf_{f_1, f_2, f_3} \mathcal{L}(f_1, f_2, f_3) = \inf_{\tilde{y}}\, \mathbb{E}_Z\big[L(Y, \tilde{y}(X))\big]$$
in probability, where the limit of the widths is such that $\min\{n_1, n_2\}^{-1}\log(\max\{n_1, n_2\}) \to 0$. In Case 2, the same holds with the right-hand side being 0.
4.3 HIGH-LEVEL IDEA OF THE PROOF
We give a high-level discussion of the proof. This is meant to provide intuitions and explain the technical crux, so our discussion may simplify and deviate from the actual proof.
Our first insight is to look at the second layer’s weight w2. At convergence time t = ∞, we expect to have zero movement and hence, denoting W (∞) = (w̄1, w̄2, w̄3):
$$\Delta_2(c_1, c_2; W(\infty)) = \mathbb{E}_Z\big[\Delta_2^H(Z, c_2; W(\infty))\,\varphi_1\big(\langle\bar{w}_1(c_1), X\rangle\big)\big] = 0,$$
for $P$-almost every $c_1, c_2$. Suppose for the moment that we are allowed to make an additional (strong) assumption on the limit $\bar{w}_1$: $\mathrm{supp}(\bar{w}_1(C_1)) = \mathbb{R}^d$. It implies that the universal approximation property, described in Assumption 3, holds at $t = \infty$; more specifically, it implies $\{\varphi_1(\langle\bar{w}_1(c_1), \cdot\rangle): c_1\in\Omega_1\}$ has dense span in $L^2(\mathcal{P}_X)$. This thus yields
$$\mathbb{E}_Z\big[\Delta_2^H(Z, c_2; W(\infty))\,\big|\, X = x\big] = 0,$$
for $\mathcal{P}$-almost every $x$. Recalling the definition of $\Delta_2^H$, one can then easily show that
$$\mathbb{E}_Z\big[\partial_2 L\big(Y, \hat{y}(X; W(\infty))\big)\,\big|\, X = x\big] = 0.$$
Global convergence follows immediately; for example, in Case 2 of Theorem 8, this is equivalent to that ∂2L (y (x) , ŷ (x;W (∞))) = 0 and hence L (y (x) , ŷ (x;W (∞))) = 0 for P-almost every x. In short, the gradient flow structure of the dynamics of w2 provides a seamless way to obtain global convergence. Furthermore there is no critical reliance on convexity.
However this plan of attack has a potential flaw in the strong assumption that $\mathrm{supp}(\bar{w}_1(C_1)) = \mathbb{R}^d$, i.e. the universal approximation property holds at convergence time. Indeed there are setups where it is desirable that $\mathrm{supp}(\bar{w}_1(C_1)) \ne \mathbb{R}^d$ (Mei et al. (2018); Chizat (2019)); for instance, it is the case where the neural network is to learn some "sparse and spiky" solution, and hence the weight distribution at convergence time, if successfully trained, cannot have full support. On the other hand, one can entirely expect that if $\mathrm{supp}(w_1(0, C_1)) = \mathbb{R}^d$ initially at $t = 0$, then $\mathrm{supp}(w_1(t, C_1)) = \mathbb{R}^d$ at any finite $t \ge 0$. The crux of our proof is to show the latter without assuming $\mathrm{supp}(\bar{w}_1(C_1)) = \mathbb{R}^d$.
This task is the major technical step of the proof. To that end, we first show that there exists a mapping $(t, u) \mapsto M(t, u)$ that maps from $(t, w_1(0, c_1)) = (t, u)$ to $w_1(t, c_1)$ via a careful measurability argument. This argument rests on a scheme that exploits the symmetry in the network evolution. Furthermore the map $M$ is shown to be continuous. The desired conclusion then follows from an algebraic topology argument that the map $M$ preserves a homotopic structure through time.
5 DISCUSSION
The MF literature is fairly recent. A long line of works (Nitanda & Suzuki (2017); Mei et al. (2018); Chizat & Bach (2018); Rotskoff & Vanden-Eijnden (2018); Sirignano & Spiliopoulos (2018); Wei et al. (2019); Javanmard et al. (2019); Mei et al. (2019); Shevchenko & Mondelli (2019); Wojtowytsch (2020)) have focused mainly on two-layer neural networks, taking an interacting particle system approach to describe the MF limiting dynamics as Wasserstein gradient flows. The three works Nguyen (2019); Araújo et al. (2019); Sirignano & Spiliopoulos (2019) independently develop different formulations for the MF limit in multilayer neural networks, under different assumptions. These works take perspectives that are different from ours. In particular, while the central object in Nguyen (2019) is a new abstract representation of each individual neuron, our neuronal embedding idea instead takes a keen view on a whole ensemble of neurons. Likewise our idea is also distant from Araújo et al. (2019); Sirignano & Spiliopoulos (2019): the central objects in Araújo et al. (2019) are paths over the weights across layers; those in Sirignano & Spiliopoulos (2019) are time-dependent functions of the initialization, which are simplified upon i.i.d. initializations.
The result of our perspective is a neuronal embedding framework that allows one to describe the MF limit in a clean and rigorous manner. In particular, it avoids extra assumptions made in Araújo et al. (2019); Sirignano & Spiliopoulos (2019): unlike our work, Araújo et al. (2019) assumes untrained first and last layers and requires non-trivial technical tools; Sirignano & Spiliopoulos (2019) takes an unnatural sequential limit n1 → ∞ before n2 → ∞ and proves a non-quantitative result, unlike Theorem 3 which only requires sufficiently large min {n1, n2}. We note that Theorem 3 can be extended to general multilayer networks using the neuronal embedding idea. The advantages of our framework come from the fact that while MF formulations in Araújo et al. (2019); Sirignano & Spiliopoulos (2019) are specific to and exploit i.i.d. initializations, our formulation does not. Remarkably as shown in Araújo et al. (2019), when there are more than three layers and no biases,
i.i.d. initializations lead to a certain simplifying effect on the MF limit. On the other hand, our framework supports non-i.i.d. initializations which avoid the simplifying effect, as long as there exist suitable neuronal embeddings (Nguyen & Pham (2020)). Although our global convergence result in Theorem 8 is proven in the context of i.i.d. initializations for three-layer networks, in the general multilayer case, it turns out that the use of a special type of non-i.i.d. initialization allows one to prove a global convergence guarantee (Pham & Nguyen (2020)).
In this aspect, our framework follows closely the spirit of the work Nguyen (2019), whose MF formulation is also not specific to i.i.d. initializations. Yet though similar in the spirit, Nguyen (2019) develops a heuristic formalism and does not prove global convergence.
Global convergence in the two-layer case with convex losses has enjoyed multiple efforts with a lot of new and interesting results (Mei et al. (2018); Chizat & Bach (2018); Javanmard et al. (2019); Rotskoff et al. (2019); Wei et al. (2019)). Our work is the first to establish a global convergence guarantee for SGD-trained three-layer networks in the MF regime. Our proof sends a new message that the crucial factor is not necessarily convexity, but rather that the whole learning trajectory maintains the universal approximation property of the function class represented by the first layer’s neurons, together with the gradient flow structure of the second layer’s weights. As a remark, our approach can also be applied to prove a similar global convergence guarantee for two-layer networks, removing the convex loss assumption in previous works (Nguyen & Pham (2020)). The recent work Lu et al. (2020) on a MF resnet model (a composition of many two-layer MF networks) and a recent update of Sirignano & Spiliopoulos (2019) essentially establish conditions of stationary points to be global optima. They however require strong assumptions on the support of the limit point. As explained in Section 4.3, we analyze the training dynamics without such assumption and in fact allow it to be violated.
Our global convergence result is non-quantitative. An important, highly challenging future direction is to develop a quantitative version of global convergence; previous works on two-layer networks Javanmard et al. (2019); Wei et al. (2019); Rotskoff et al. (2019); Chizat (2019) have done so under sophisticated modifications of the architecture and training algorithms.
Finally we remark that our insights here can be applied to prove similar global convergence guarantees and derive other sufficient conditions for global convergence of two-layer or multilayer networks (Nguyen & Pham (2020); Pham & Nguyen (2020)).
ACKNOWLEDGEMENT
H. T. Pham would like to thank Jan Vondrak for many helpful discussions and in particular for the shorter proof of Lemma 19. We would like to thank Andrea Montanari for the succinct description of the difficulty in extending the mean field formulation to the multilayer case, in that there are multiple symmetry group actions in a multilayer network.
A NOTATIONAL PRELIMINARIES
For a real-valued random variable Z defined on a probability space (Ω,F , P ), we recall
ess-supZ = inf {z ∈ R : P (Z > z) = 0} .
We also introduce some convenient definitions which we use throughout the appendices. For a set of neural network’s parameter W, we define
$$|\!|\!|\mathbf{W}|\!|\!|_T = \max\Big\{ \max_{j_1\le n_1,\, j_2\le n_2}\,\sup_{t\le T} |\mathbf{w}_2(\lfloor t/\epsilon\rfloor, j_1, j_2)|,\; \max_{j_2\le n_2}\,\sup_{t\le T}|\mathbf{w}_3(\lfloor t/\epsilon\rfloor, j_2)| \Big\}.$$
Similarly for a set of MF parameters $W$, we define:
$$|\!|\!|W|\!|\!|_T = \max\Big\{ \mathrm{ess\text{-}sup}\,\sup_{t\le T}|w_2(t, C_1, C_2)|,\; \mathrm{ess\text{-}sup}\,\sup_{t\le T}|w_3(t, C_2)| \Big\}.$$
For two sets of neural network parameters $\mathbf{W}', \mathbf{W}''$, we define their distance:
$$\|\mathbf{W}'-\mathbf{W}''\|_T = \sup\big\{ |\mathbf{w}'_1(\lfloor t/\epsilon\rfloor, j_1)-\mathbf{w}''_1(\lfloor t/\epsilon\rfloor, j_1)|,\; |\mathbf{w}'_2(\lfloor t/\epsilon\rfloor, j_1, j_2)-\mathbf{w}''_2(\lfloor t/\epsilon\rfloor, j_1, j_2)|,\; |\mathbf{w}'_3(\lfloor t/\epsilon\rfloor, j_2)-\mathbf{w}''_3(\lfloor t/\epsilon\rfloor, j_2)| :\; t\in[0,T],\, j_1\in[n_1],\, j_2\in[n_2] \big\}.$$
Similarly for two sets of MF parameters $W', W''$, we define their distance:
$$\|W'-W''\|_T = \mathrm{ess\text{-}sup}\,\sup_{t\in[0,T]}\big\{ |w'_1(t, C_1)-w''_1(t, C_1)|,\; |w'_2(t, C_1, C_2)-w''_2(t, C_1, C_2)|,\; |w'_3(t, C_2)-w''_3(t, C_2)| \big\}.$$
B EXISTENCE AND UNIQUENESS OF THE SOLUTION TO MF ODES
We first collect some a priori estimates.
Lemma 11. Under Assumption 1, consider a solution $W$ to the MF ODEs with initialization $W(0)$ such that $|\!|\!|W|\!|\!|_0 < \infty$. If this solution exists, it satisfies the following a priori bounds, for any $T \ge 0$:
$$\mathrm{ess\text{-}sup}\,\sup_{t\le T}|w_3(t, C_2)| \le |\!|\!|W|\!|\!|_0 + KT \equiv |\!|\!|W|\!|\!|_0 + K_{0,3}(T),$$
$$\mathrm{ess\text{-}sup}\,\sup_{t\le T}|w_2(t, C_1, C_2)| \le |\!|\!|W|\!|\!|_0 + KT\,K_{0,3}(T) \equiv |\!|\!|W|\!|\!|_0 + K_{0,2}(T),$$
and consequently, $|\!|\!|W|\!|\!|_T \le 1 + \max\{K_{0,2}(T),\, K_{0,3}(T)\}$.

Proof. The bounds can be obtained easily by bounding the respective initializations and update quantities separately. In particular,
$$\mathrm{ess\text{-}sup}\,\sup_{t\le T}|w_3(t, C_2)| \le \mathrm{ess\text{-}sup}\,|w_3(0, C_2)| + T\,\mathrm{ess\text{-}sup}\,\sup_{t\le T}\Big|\frac{\partial}{\partial t}w_3(t, C_2)\Big| \le |\!|\!|W|\!|\!|_0 + KT,$$
$$\mathrm{ess\text{-}sup}\,\sup_{t\le T}|w_2(t, C_1, C_2)| \le \mathrm{ess\text{-}sup}\,|w_2(0, C_1, C_2)| + T\,\mathrm{ess\text{-}sup}\,\sup_{t\le T}\Big|\frac{\partial}{\partial t}w_2(t, C_1, C_2)\Big| \le \mathrm{ess\text{-}sup}\,|w_2(0, C_1, C_2)| + KT\,\mathrm{ess\text{-}sup}\,\sup_{t\le T}|w_3(t, C_2)| \le |\!|\!|W|\!|\!|_0 + KT\,K_{0,3}(T).$$
Inspired by the a priori bounds in Lemma 11, given an arbitrary terminal time T and the initialization W (0), let us consider:
• for a tuple $(a, b) \in \mathbb{R}^2_{\ge 0}$, a space $\mathcal{W}_T(a, b)$ of $W' = (W'(t))_{t\le T} = (w'_1(t,\cdot), w'_2(t,\cdot,\cdot), w'_3(t,\cdot))_{t\le T}$ such that
$$\mathrm{ess\text{-}sup}\,\sup_{t\le T}|w'_3(t, C_2)| \le b, \qquad \mathrm{ess\text{-}sup}\,\sup_{t\le T}|w'_2(t, C_1, C_2)| \le a,$$
where $w'_1: \mathbb{R}_{\ge 0}\times\Omega_1\to\mathbb{R}^d$, $w'_2: \mathbb{R}_{\ge 0}\times\Omega_1\times\Omega_2\to\mathbb{R}$, $w'_3: \mathbb{R}_{\ge 0}\times\Omega_2\to\mathbb{R}$,
• for a tuple $(a, b)\in\mathbb{R}^2_{\ge 0}$ and $W(0)$, a space $\mathcal{W}^+_T(a, b, W(0))$ of $W'\in\mathcal{W}_T(a, b)$ such that additionally $W'(0) = W(0)$ (and hence every $W'$ in this space shares the same initialization $W(0)$).
We equip the spaces with the metric $\|W' - W''\|_T$. It is easy to see that both spaces are complete. Note that Lemma 11 implies that, under Assumption 1 and $|\!|\!|W|\!|\!|_0 < \infty$, any MF solution $W$, if it exists, is in $\mathcal{W}_T(|\!|\!|W|\!|\!|_0 + K_{0,2}(T),\, |\!|\!|W|\!|\!|_0 + K_{0,3}(T))$. For the proof of Theorem 1, we work mainly with $\mathcal{W}^+_T(|\!|\!|W|\!|\!|_0 + K_{0,2}(T),\, |\!|\!|W|\!|\!|_0 + K_{0,3}(T),\, W(0))$, although several intermediate lemmas are proven in more generality for other uses.
Lemma 12. Under Assumption 1, for $T \ge 0$, any $W', W'' \in \mathcal{W}_T(a, b)$ and almost every $z \sim \mathcal{P}$:
$$\mathrm{ess\text{-}sup}\,\sup_{t\le T}\big|\Delta_2^H(z, C_2; W'(t))\big| \le K_{a,b},$$
$$\mathrm{ess\text{-}sup}\,\sup_{t\le T}\big|H_2(x, C_2; W'(t)) - H_2(x, C_2; W''(t))\big| \le K_{a,b}\,\|W' - W''\|_T,$$
$$\sup_{t\le T}\big|H_3(x; W'(t)) - H_3(x; W''(t))\big| \le K_{a,b}\,\|W' - W''\|_T,$$
$$\sup_{t\le T}\big|\partial_2 L(y, \hat{y}(x; W'(t))) - \partial_2 L(y, \hat{y}(x; W''(t)))\big| \le K_{a,b}\,\|W' - W''\|_T,$$
$$\mathrm{ess\text{-}sup}\,\sup_{t\le T}\big|\Delta_2^H(z, C_2; W'(t)) - \Delta_2^H(z, C_2; W''(t))\big| \le K_{a,b}\,\|W' - W''\|_T,$$
where $K_{a,b} \ge 1$ is a generic constant that grows polynomially with $a$ and $b$.
Proof. The first bound is easy to see:
ess-sup sup t≤T ∣∣∆H2 (z, C2;W ′ (t))∣∣ ≤ ess-sup sup t≤T |w′3 (t, C2)| ≤ b.
We prove the second bound, invoking Assumption 1:
|H2 (x,C2;W ′ (t))−H2 (x,C2;W ′′ (t))| ≤ K |w′2 (t, C1, C2)| |ϕ1 (〈w′1 (t, C1) , x〉)− ϕ1 (〈w′′1 (t, C1) , x〉)|
+K |w′2 (t, C1, C2)− w′′2 (t, C1, C2)| ≤ K (|w′2 (t, C1, C2)|+ 1) ‖W ′ −W ′′‖T ,
which yields by the fact W ′ ∈ WT (a, b):
ess-sup sup t≤T |H2 (x,C2;W ′ (t))−H2 (x,C2;W ′′ (t))| ≤ K (a+ 1) ‖W ′ −W ′′‖T .
Consequently, we have:
|H3 (x;W ′ (t))−H3 (x;W ′′ (t))| ≤ K |w′3 (t, C2)| |ϕ2 (H2 (x,C2;W ′ (t)))− ϕ2 (H2 (x,C2;W ′′ (t)))| +K |w′3 (t, C2)− w′′3 (t, C2)| ≤ K |w′3 (t, C2)| |H2 (x,C2;W ′ (t))−H2 (x,C2;W ′′ (t))|
+K ‖W ′ −W ′′‖T , |∂2L (y, ŷ (x;W ′ (t)))− ∂2L (y, ŷ (x;W ′′ (t)))| ≤ K |ŷ (x;W ′ (t))− ŷ (x;W ′′ (t))|
≤ K |H3 (x;W ′ (t))−H3 (x;W ′′ (t))| ,
which then yield the third and fourth bounds by the fact W ′,W ′′ ∈ WT (a, b). Using these bounds, we obtain the last bound:∣∣∆H2 (z, C2;W ′ (t))−∆H2 (z, C2;W ′′ (t))∣∣
≤ K |w′3 (t, C2)| ( |∂2L (y, ŷ (x;W ′ (t)))− ∂2L (y, ŷ (x;W ′′ (t)))|
+ |H3 (x;W ′ (t))−H3 (x;W ′′ (t))|+ |H2 (x,C2;W ′ (t))−H2 (x,C2;W ′′ (t))| )
+K |w′3 (t, C2)− w′′3 (t, C2)| ,
from which the last bound follows.
To prove Theorem 1, for a given $W(0)$, we define a mapping $F_{W(0)}$ that maps $W' = (w'_1, w'_2, w'_3) \in \mathcal{W}_T(a, b)$ to $F_{W(0)}(W') = \bar{W}' = (\bar{w}'_1, \bar{w}'_2, \bar{w}'_3)$, defined by $\bar{W}'(0) = W(0)$ and
$$\frac{\partial}{\partial t}\bar{w}'_3(t, c_2) = -\xi_3(t)\,\Delta_3(c_2; W'(t)), \qquad \frac{\partial}{\partial t}\bar{w}'_2(t, c_1, c_2) = -\xi_2(t)\,\Delta_2(c_1, c_2; W'(t)), \qquad \frac{\partial}{\partial t}\bar{w}'_1(t, c_1) = -\xi_1(t)\,\Delta_1(c_1; W'(t)).$$
Notice that the right-hand sides do not involve $\bar{W}'$. Note that the MF ODEs' solution, initialized at $W(0)$, is a fixed point of this mapping.
We establish the following estimates for this mapping.
Lemma 13. Under Assumption 1, for $T \ge 0$, any initialization $W(0)$ and any $W', W'' \in \mathcal{W}_T(a, b)$,
$$\mathrm{ess\text{-}sup}\,\sup_{s\le t}|\Delta_3(C_2; W'(s)) - \Delta_3(C_2; W''(s))| \le K_{a,b}\,\|W' - W''\|_t,$$
$$\mathrm{ess\text{-}sup}\,\sup_{s\le t}|\Delta_2(C_1, C_2; W'(s)) - \Delta_2(C_1, C_2; W''(s))| \le K_{a,b}\,\|W' - W''\|_t,$$
$$\mathrm{ess\text{-}sup}\,\sup_{s\le t}|\Delta_1(C_1; W'(s)) - \Delta_1(C_1; W''(s))| \le K_{a,b}\,\|W' - W''\|_t,$$
and consequently, if in addition $W'(0) = W''(0)$ (not necessarily equal to $W(0)$), then
$$\mathrm{ess\text{-}sup}\,\sup_{t\le T}|\bar{w}'_3(t, C_2) - \bar{w}''_3(t, C_2)| \le K_{a,b}\int_0^T \|W' - W''\|_s\, ds,$$
$$\mathrm{ess\text{-}sup}\,\sup_{t\le T}|\bar{w}'_2(t, C_1, C_2) - \bar{w}''_2(t, C_1, C_2)| \le K_{a,b}\int_0^T \|W' - W''\|_s\, ds,$$
$$\mathrm{ess\text{-}sup}\,\sup_{t\le T}|\bar{w}'_1(t, C_1) - \bar{w}''_1(t, C_1)| \le K_{a,b}\int_0^T \|W' - W''\|_s\, ds,$$
in which $\bar{W}' = (\bar{w}'_1, \bar{w}'_2, \bar{w}'_3) = F_{W(0)}(W')$, $\bar{W}'' = (\bar{w}''_1, \bar{w}''_2, \bar{w}''_3) = F_{W(0)}(W'')$ and $K_{a,b} \ge 1$ is a generic constant that grows polynomially with $a$ and $b$.
Proof. From Assumption 1 and the fact W ′,W ′′ ∈ WT (a, b), we get:
|∆3 (C2;W ′ (s))−∆3 (C2;W ′′ (s))| ≤ KEZ [|∂2L (Y, ŷ (X;W ′ (s)))− ∂2L (Y, ŷ (X;W ′′ (s)))|] +KEZ [|H3 (X;W ′ (s))−H3 (X;W ′′ (s))|] +KEZ [|H2 (X,C2;W ′ (s))−H2 (X,C2;W ′′ (s))|] ,
|∆2 (C1, C2;W ′ (s))−∆2 (C1, C2;W ′′ (s))| ≤ Ka,b |w′1 (s, C1)− w′′1 (s, C1)| +K ∣∣EZ [∆H2 (Z,C2;W ′ (s))−∆H2 (Z,C2;W ′′ (s))]∣∣ , |∆1 (C1;W ′ (s))−∆1 (C1;W ′′ (s))| ≤ Ka,bEZ [∣∣∆H2 (Z,C2;W ′ (s))−∆H2 (Z,C2;W ′′ (s))∣∣]
+Ka,b |w′2 (s, C1, C2)− w′′2 (s, C1, C2)| +Ka,b |w′1 (s, C1)− w′′1 (s, C1)| ,
from which the first three estimates then follow, in light of Lemma 12. The last three estimates then follow from the fact that W̄ ′ (0) = W̄ ′′ (0) and Assumption 1; for instance,
ess-sup sup t≤T |w̄′3 (t, C2)− w̄′′3 (t, C2)| ≤ ∫ T 0 ess-sup ∣∣∣∣ ∂∂tw̄′3 (s, C2)− ∂∂tw̄′′3 (s, C2) ∣∣∣∣ ds ≤ K
∫ T 0 ess-sup |∆3 (C2;W ′ (s))−∆3 (C2;W ′′ (s))| ds.
We are now ready to prove Theorem 1.
Proof of Theorem 1. We will use a Picard-type iteration. To lighten notations:
W+T ≡ W + T (9W 90 +K0,2 (T ) ,9W 90 +K0,3 (T ) ,W (0)) , F ≡ FW (0).
Since 9W90 ≤ K by assumption, we have 9W 90 +K0,2 (T ) +K0,3 (T ) ≤ KT . Recall thatW+T is complete. For an arbitrary T > 0, consider W ′,W ′′ ∈ W+T . Lemma 13 yields:
‖F (W ′)− F (W ′′)‖T ≤ KT ∫ T
0
‖W ′ −W ′′‖s ds.
Note that F maps toW+T under Assumption 1 by the same argument as Lemma 11. Hence we are allowed to iterating this inequality and get, for an arbitrary T > 0,∥∥∥F (k) (W ′)− F (k) (W ′′)∥∥∥
T ≤ KT ∫ T 0 ∥∥∥F (k−1) (W ′)− F (k−1) (W ′′)∥∥∥ T2 dT2
≤ K2T ∫ T
0 ∫ T2 0 ∥∥∥F (k−2) (W ′)− F (k−2) (W ′′)∥∥∥ T3 I (T2 ≤ T ) dT3dT2
... ≤ KkT ∫ T
0 ∫ T2 0 ... ∫ Tk 0 ‖W ′ −W ′′‖Tk+1 I (Tk ≤ ... ≤ T2 ≤ T ) dTk+1...dT2
≤ 1 k! KkT ‖W ′ −W ′′‖T .
By substituting W ′′ = F (W ′), we have:
∞∑ k=1 ∥∥∥F (k+1) (W ′)− F (k) (W ′)∥∥∥ T = ∞∑ k=1 ∥∥∥F (k) (W ′′)− F (k) (W ′)∥∥∥ T
≤ ∞∑ k=1 1 k! KkT ‖W ′ −W ′′‖T
<∞.
Hence as k → ∞, F (k) (W ′) converges to a limit inW+T , which is a fixed point of F . The uniqueness of a fixed point follows from the above estimate, since if W ′ and W ′′ are fixed points then
‖W ′ −W ′′‖T = ∥∥∥F (k) (W ′)− F (k) (W ′′)∥∥∥
T ≤ 1 k! KkT ‖W ′ −W ′′‖T ,
while one can take k arbitrarily large. This proves that the solution exists and is unique on t ∈ [0, T ]. Since T is arbitrary, we have existence and uniqueness of the solution on the time interval [0,∞).
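Schematically, the existence argument above is a standard fixed-point iteration; the sketch below (with abstract stand-ins for $F_{W(0)}$ and the metric $\|\cdot\|_T$) only illustrates the Picard-type scheme and is not part of the proof.

```python
# Generic Picard-type fixed-point iteration: repeatedly apply the map F to a candidate
# trajectory until the iterates stop moving under the supplied distance. In the proof, F
# plays the role of F_{W(0)} and dist the role of ||.||_T; here both are just callables.
def picard_fixed_point(F, W0, dist, tol=1e-8, max_iter=1000):
    W = W0
    for _ in range(max_iter):
        W_next = F(W)
        if dist(W_next, W) < tol:
            return W_next
        W = W_next
    return W
```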
C CONNECTION BETWEEN THE NEURAL NET AND ITS MF LIMIT: PROOFS FOR SECTION 3
C.1 PROOF OF THEOREM 3
We construct an auxiliary trajectory, which we call the particle ODEs:
$$\frac{\partial}{\partial t}\tilde{w}_3(t, j_2) = -\xi_3(t)\,\mathbb{E}_Z\Big[\partial_2 L\big(Y, \hat{y}(X; \tilde{W}(t))\big)\,\varphi_3'\big(H_3(X; \tilde{W}(t))\big)\,\varphi_2\big(H_2(X, j_2; \tilde{W}(t))\big)\Big],$$
$$\frac{\partial}{\partial t}\tilde{w}_2(t, j_1, j_2) = -\xi_2(t)\,\mathbb{E}_Z\Big[\Delta_2^H\big(Z, j_2; \tilde{W}(t)\big)\,\varphi_1\big(\langle\tilde{w}_1(t, j_1), X\rangle\big)\Big],$$
$$\frac{\partial}{\partial t}\tilde{w}_1(t, j_1) = -\xi_1(t)\,\mathbb{E}_Z\Big[\frac{1}{n_2}\sum_{j_2=1}^{n_2}\Delta_2^H\big(Z, j_2; \tilde{W}(t)\big)\,\tilde{w}_2(t, j_1, j_2)\,\varphi_1'\big(\langle\tilde{w}_1(t, j_1), X\rangle\big)\,X\Big],$$
in which $j_1 = 1, ..., n_1$, $j_2 = 1, ..., n_2$, $\tilde{W}(t) = (\tilde{w}_1(t,\cdot), \tilde{w}_2(t,\cdot,\cdot), \tilde{w}_3(t,\cdot))$, and $t\in\mathbb{R}_{\ge 0}$. We specify the initialization $\tilde{W}(0)$: $\tilde{w}_1(0, j_1) = w^0_1(C_1(j_1))$, $\tilde{w}_2(0, j_1, j_2) = w^0_2(C_1(j_1), C_2(j_2))$ and $\tilde{w}_3(0, j_2) = w^0_3(C_2(j_2))$. That is, it shares the same initialization with the neural network one $\mathbf{W}(0)$, and hence is coupled with the neural network and the MF ODEs. Roughly speaking, the particle ODEs are continuous-time trajectories of finitely many neurons, averaged over the data distribution. We note that $\tilde{W}(t)$ is random for all $t\in\mathbb{R}_{\ge 0}$ due to the randomness of $\{C_i(j_i)\}_{i=1,2}$.
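As an illustration (not part of the paper), the particle ODEs can be integrated numerically by forward Euler, with the data expectation $\mathbb{E}_Z$ approximated by a minibatch; activations, loss derivative and learning-rate schedules are passed in as placeholder assumptions.

```python
import numpy as np

def particle_euler_step(w1, w2, w3, X, Y, dt, phi, dphi, dL, xi=(1.0, 1.0, 1.0)):
    # w1: (n1, d), w2: (n1, n2), w3: (n2,) -- the particle parameters at time t.
    # X: (B, d), Y: (B,) -- a minibatch approximating E_Z.
    phi1, phi2, phi3 = phi
    dphi1, dphi2, dphi3 = dphi
    pre1 = X @ w1.T                                   # (B, n1): <w~1(t, j1), x>
    H2 = phi1(pre1) @ w2 / w1.shape[0]                # (B, n2): H2(x, j2; W~(t))
    H3 = phi2(H2) @ w3 / w3.shape[0]                  # (B,):    H3(x; W~(t))
    dH2 = (dL(Y, phi3(H3)) * dphi3(H3))[:, None] * w3[None, :] * dphi2(H2)      # (B, n2)
    d3 = phi2(H2).T @ (dL(Y, phi3(H3)) * dphi3(H3)) / len(Y)                    # (n2,)
    d2 = phi1(pre1).T @ dH2 / len(Y)                                            # (n1, n2)
    d1 = ((dH2 @ w2.T / w3.shape[0]) * dphi1(pre1)).T @ X / len(Y)              # (n1, d)
    return w1 - dt * xi[0] * d1, w2 - dt * xi[1] * d2, w3 - dt * xi[2] * d3
```

For example, one may call it with `phi = (np.tanh, np.tanh, lambda h: h)`, the matching derivatives, and `dL = lambda y, yhat: yhat - y` for the squared loss.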
The existence and uniqueness of the solution to the particle ODEs follows from the same proof as in Theorem 1, which we shall not repeat here. We equip W̃ (t) with the norm
$$|\!|\!|\tilde{W}|\!|\!|_T = \max\Big\{ \max_{j_1\le n_1,\, j_2\le n_2}\,\sup_{t\le T}|\tilde{w}_2(t, j_1, j_2)|,\; \max_{j_2\le n_2}\,\sup_{t\le T}|\tilde{w}_3(t, j_2)| \Big\}.$$
One can also define the measures $D_T(W, \tilde{W})$ and $D_T(\tilde{W}, \mathbf{W})$ similar to Eq. (2):
$$D_T(W, \tilde{W}) = \sup\big\{ |w_1(t, C_1(j_1)) - \tilde{w}_1(t, j_1)|,\; |w_2(t, C_1(j_1), C_2(j_2)) - \tilde{w}_2(t, j_1, j_2)|,\; |w_3(t, C_2(j_2)) - \tilde{w}_3(t, j_2)| :\; t\le T,\, j_1\le n_1,\, j_2\le n_2 \big\},$$
$$D_T(\tilde{W}, \mathbf{W}) = \sup\big\{ |\mathbf{w}_1(\lfloor t/\epsilon\rfloor, j_1) - \tilde{w}_1(t, j_1)|,\; |\mathbf{w}_2(\lfloor t/\epsilon\rfloor, j_1, j_2) - \tilde{w}_2(t, j_1, j_2)|,\; |\mathbf{w}_3(\lfloor t/\epsilon\rfloor, j_2) - \tilde{w}_3(t, j_2)| :\; t\le T,\, j_1\le n_1,\, j_2\le n_2 \big\}.$$
We have the following results:
Theorem 14. Under the same setting as Theorem 3, for any $\delta > 0$, with probability at least $1 - \delta$,
$$D_T(W, \tilde{W}) \le \frac{1}{\sqrt{n_{\min}}}\,\log^{1/2}\Big(\frac{3(T+1)\,n_{\max}^2}{\delta} + e\Big)\, e^{K_T},$$
in which $n_{\min} = \min\{n_1, n_2\}$, $n_{\max} = \max\{n_1, n_2\}$, and $K_T = K(1 + T^K)$.
Theorem 15. Under the same setting as Theorem 3, for any $\delta > 0$ and $\epsilon \le 1$, with probability at least $1 - \delta$,
$$D_T(\tilde{W}, \mathbf{W}) \le \sqrt{\epsilon}\,\log^{1/2}\Big(\frac{2 n_1 n_2}{\delta} + e\Big)\, e^{K_T},$$
in which $K_T = K(1 + T^K)$.
Proof of Theorem 3. Using the fact that $D_T(\mathbf{W}, W) \le D_T(W, \tilde{W}) + D_T(\tilde{W}, \mathbf{W})$, the thesis is immediate from Theorems 14 and 15.
C.2 PROOF OF THEOREMS 14 AND 15
Proof of Theorem 14. In the following, let $K_t$ denote a generic positive constant that may change from line to line and takes the form
$$K_t = K(1 + t^K),$$
such that $K_t \ge 1$ and $K_t \le K_T$ for all $t \le T$. We first note that at initialization, $D_0(W, \tilde{W}) = 0$.
Since 9W90 ≤ K, 9W9T ≤ KT by Lemma 11. Furthermore it is easy to see that 9W̃90 ≤ 9W90 ≤ K almost surely. By the same argument as in Lemma 11, 9W̃9T ≤ KT almost surely. We shall use all above bounds repeatedly in the proof. We decompose the proof into several steps.
Step 1 - Main proof. Let us define, for brevity
q3 (t, x) = H3
( x; W̃ (t) ) −H3 (x;W (t)) ,
q2 (t, x, j2, c2) = H2 ( x, j2; W̃ (t) ) −H2 (x, c2;W (t)) ,
q∆ (t, z, j1, j2, c1, c2) = ∆ H 2 ( Z, j2; W̃ (t) ) w̃2 (t, j1, j2)−∆H2 (z, c2;W (t))w2 (t, c1, c2) .
Consider t ≥ 0. We first bound the difference in the updates between W and W̃ . Let us start with w3 and w̃3. By Assumption 1, we have:∣∣∣∣ ∂∂tw̃3 (t, j2)− ∂∂tw3 (t, C2 (j2)) ∣∣∣∣ ≤ KEZ [|q3 (t,X)|+ |q2 (t,X, j2, C2 (j2))|] . Similarly, for w2 and w̃2,∣∣∣∣ ∂∂tw̃2 (t, j1, j2)− ∂∂tw2 (t, C1 (j1) , C2 (j2))
∣∣∣∣ ≤ KEZ
[∣∣∣∆H2 (Z, j2; W̃ (t))−∆H2 (Z,C2 (j2) ;W (t))∣∣∣] +K |w3 (t, C2 (j2))| |w̃1 (t, j1)− w1 (t, C1 (j1))| ≤ KtEZ [|q3 (t,X)|+ |q2 (t,X, j2, C2 (j2))|]
+Kt (|w̃1 (t, j1)− w1 (t, C1 (j1))|+ |w̃3 (t, j2)− w3 (t, C2 (j2))|) ≤ KtEZ [|q3 (t,X)|+ |q2 (t,X, j2, C2 (j2))|] +KtDt ( W, W̃ ) ,
and for w1 and w̃1, by Lemma 12,∣∣∣∣ ∂∂tw̃1 (t, j1)− ∂∂tw1 (t, C1 (j1)) ∣∣∣∣
≤ KEZ ∣∣∣∣∣∣ 1n2 n2∑ j2=1 EC2 [q∆ (t, Z, j1, j2, C1 (j1) , C2)] ∣∣∣∣∣∣
+ EC2 [∣∣∆H2 (Z,C2;W (t))∣∣ |w2 (t, C1 (j1) , C2)|] |w̃1 (t, j1)− w1 (t, C1 (j1))|
≤ KEZ ∣∣∣∣∣∣ 1n2 n2∑ j2=1 EC2 [q∆ (t, Z, j1, j2, C1 (j1) , C2)] ∣∣∣∣∣∣
+KtDt ( W, W̃ ) .
To further the bounding, we now make the following two claims:
• Claim 1: For any ξ > 0,
max j2≤n2 ∣∣∣∣ ∂∂tw3 (t+ ξ, C2 (j2))− ∂∂tw3 (t, C2 (j2)) ∣∣∣∣ ≤ Kt+ξξ,
max j1≤n1, j2≤n2 ∣∣∣∣ ∂∂tw2 (t+ ξ, C1 (j1) , C2 (j2))− ∂∂tw2 (t, C1 (j1) , C2 (j2)) ∣∣∣∣ ≤ Kt+ξξ,
max j1≤n1 ∣∣∣∣ ∂∂tw1 (t+ ξ, C1 (j1))− ∂∂tw1 (t, C1 (j1)) ∣∣∣∣ ≤ Kt+ξξ,
and similarly,
max j2≤n2 ∣∣∣∣ ∂∂tw̃3 (t+ ξ, j2)− ∂∂tw̃3 (t, j2) ∣∣∣∣ ≤ Kt+ξξ,
max j1≤n1, j2≤n2 ∣∣∣∣ ∂∂tw̃2 (t+ ξ, j1, j2)− ∂∂tw̃2 (t, j1, j2) ∣∣∣∣ ≤ Kt+ξξ,
max j1≤n1 ∣∣∣∣ ∂∂tw̃1 (t+ ξ, j1)− ∂∂tw̃1 (t, j1) ∣∣∣∣ ≤ Kt+ξξ.
• Claim 2: For any γ1, γ2, γ3 > 0 and t ≥ 0,
max { max j2≤n2 EZ [|q2 (t,X, j2, C2 (j2))|] , EZ [|q3 (t,X)|] ,
max j1≤n1
EZ ∣∣∣∣∣∣ 1n2 n2∑ j2=1 EC2 [q∆ (t, Z, j1, j2, C1 (j1) , C2)] ∣∣∣∣∣∣ }
≥ Kt ( Dt ( W, W̃ ) + γ1 + γ2 + γ3 ) ,
with probability at most
n1 γ1 exp
( −n2γ 2 1
Kt ) + n2 γ2 exp ( −n1γ 2 2 Kt ) + 1 γ3 exp ( −n2γ 2 3 Kt ) .
Combining these claims with the previous bounds, taking a union bound over t ∈ {0, ξ, 2ξ, ..., bT/ξc ξ} for some ξ ∈ (0, 1), we obtain that
max { max j2≤n2 ∣∣∣∣ ∂∂tw̃3 (t, j2)− ∂∂tw3 (t, C2 (j2)) ∣∣∣∣ ,
max j1≤n1, j2≤n2 ∣∣∣∣ ∂∂tw̃2 (t, j1, j2)− ∂∂tw2 (t, C1 (j1) , C2 (j2)) ∣∣∣∣ ,
max j1≤n1 ∣∣∣∣ ∂∂tw̃1 (t, j1)− ∂∂tw1 (t, C1 (j1)) ∣∣∣∣ }
≤ KT ( Dt ( W, W̃ ) + γ1 + γ2 + γ3 + ξ ) , ∀t ∈ [0, T ] ,
with probability at least
1− T + 1 ξ [ n1 γ1 exp ( −n2γ 2 1 KT ) + n2 γ2 exp ( −n1γ 2 2 KT ) + 1 γ3 exp ( −n2γ 2 3 KT )] .
The above event in turn implies Dt ( W, W̃ ) ≤ KT ∫ t 0 ( Ds ( W, W̃ ) + γ1 + γ2 + γ3 + ξ ) ds,
and hence by Gronwall’s lemma and the fact D0 ( W, W̃ ) = 0, we get
DT ( W, W̃ ) ≤ (γ1 + γ2 + γ3 + ξ) eKT .
The theorem then follows from the choice
ξ = 1
√ nmax , γ2 = KT√ n1
log1/2 (
3 (T + 1)n2max δ + e
) , γ1 = γ3 =
KT√ n2
log1/2 (
3 (T + 1)n2max δ + e
) .
We are left with proving the claims.
Step 2 - Proof of Claim 1. We have from Assumption 1, ess-sup |w3 (t+ ξ, C2)− w3 (t, C2)| ≤ K ∫ t+ξ t ess-sup ∣∣∣∣ ∂∂tw3 (s, C2) ∣∣∣∣ ds ≤ Kξ,
ess-sup |w2 (t+ ξ, C1, C2)− w2 (t, C1, C2)| ≤ K ∫ t+ξ t ess-sup ∣∣∣∣ ∂∂tw2 (s, C1, C2) ∣∣∣∣ ds ≤ K
∫ t+ξ t ess-sup |w3 (s, C2)| ds
≤ Kt+ξξ, ess-sup |w1 (t+ ξ, C1)− w1 (t, C1)| ≤ K ∫ t+ξ t ess-sup ∣∣∣∣ ∂∂tw1 (s, C1) ∣∣∣∣ ds ≤ K
∫ t+ξ t ess-sup |w3 (s, C2)w2 (s, C1, C2)| ds
≤ Kt+ξξ.
By Lemma 12, we then obtain that
ess-supEZ [|H2 (X,C2;W (t+ ξ))−H2 (X,C2;W (t))|] ≤ Kt+ξξ, EZ [|H3 (X;W (t+ ξ))−H3 (X;W (t))|] ≤ Kt+ξξ,
ess-supEZ [∣∣∆H2 (Z,C2;W (t+ ξ))−∆H2 (Z,C2;W (t))∣∣] ≤ Kt+ξξ.
Using these estimates, we thus have, by Assumption 1,
max j2≤n2 ∣∣∣∣ ∂∂tw3 (t+ ξ, C2 (j2))− ∂∂tw3 (t, C2 (j2)) ∣∣∣∣
≤ Kt+ξξ +KEZ [|H3 (X;W (t+ ξ))−H3 (X;W (t))|] +Kess-supEZ [|H2 (X,C2;W (t+ ξ))−H2 (X,C2;W (t))|] ≤ Kt+ξξ,
max j1≤n1, j2≤n2 ∣∣∣∣ ∂∂tw2 (t+ ξ, C1 (j1) , C2 (j2))− ∂∂tw2 (t, C1 (j1) , C2 (j2)) ∣∣∣∣
≤ Kt+ξξ +Kess-supEZ [∣∣∆H2 (Z,C2;W (t+ ξ))−∆H2 (Z,C2;W (t))∣∣]
+Kess-sup |w3 (t, C2)| |w1 (t+ ξ, C1)− w1 (t, C1)| ≤ Kt+ξξ,
max j1≤n1 ∣∣∣∣ ∂∂tw1 (t+ ξ, C1 (j1))− ∂∂tw1 (t, C1 (j1)) ∣∣∣∣
≤ Kt+ξξ +Kess-supEZ [ EC2 [∣∣∆H2 (Z,C2;W (t+ ξ))−∆H2 (Z,C2;W (t))∣∣ |w2 (t, C1, C2)|]] +Kess-supEC2 [|w3 (t, C2)| |w2 (t+ ξ, C1, C2)− w2 (t, C1, C2)|] +Kess-supEC2 [|w3 (t, C2)w2 (t, C1, C2)|] |w1 (t+ ξ, C1)− w1 (t, C1)| ≤ Kt+ξξ.
The proof of the rest of the claim is similar.
Step 3 - Proof of Claim 2. We recall the definitions of q∆, q2 and q3. Let us decompose them as follows. We start with q2:
1. What is the main contribution of the paper regarding the behavior of a 3-layer fully connected network?
2. What are the strengths of the paper in terms of mathematical analysis and complementing previous works?
3. What are the weaknesses of the paper regarding its limitations in understanding the behavior of deep neural networks and its similarity to another arXiv paper?
4. How does the reviewer assess the proof and notations used in the paper?
5. What is the significance of the mean field regime and long-term analysis in the context of training deep neural networks?
Review
This paper studies the behavior of a 3-layer fully connected network when the width of the network is large. The authors define a mean field regime and prove that the behavior of the network under stochastic gradient descent converges to this mean field regime (for any finite time horizon). This result complements nicely previous works like (Nguyen 2019) that contain an informal derivation of the mean field regime. This transient regime is complemented by a long-term analysis under quite restrictive assumptions (which imply essentially that the mean field regime always converges to the minimizer of the loss function).
I did not check all details of the proof but the approach seems mathematically sound. Once the model is defined, the proof for the finite regime is relatively classical: it relies on martingale concentration plus Gronwall's lemma. Yet, as always, the devil being in the details, defining the right model and using the right notations is a difficult task. The result for the stationary regime also seems reasonable but I must admit that the proof of the infinite horizon is not really clear to me. I would have appreciated more pedagogical effort from the authors.
To summarize, the paper seems a nice theoretical contribution. Yet, to me one thing that this paper is missing is an explanation or illustration of how useful is their result to understand the behavior of deep neural networks.
That being said, one major concern that I have about the paper is the link with https://arxiv.org/pdf/2001.11443.pdf The arXiv paper considers a very similar model (but more general as it considers L layers instead of 3). It uses almost the exact same notations and the same structure (overall paper and proofs). I think that the authors should clarify the link between the two papers. Also, if I can admit that the present paper is a resubmission of the arXiv paper, I do not understand why does the current paper focus on 3 layers and not the more general model of the arXiv paper. |
ICLR | Title
Global Convergence of Three-layer Neural Networks in the Mean Field Regime
Abstract
In the mean field regime, neural networks are appropriately scaled so that as the width tends to infinity, the learning dynamics tends to a nonlinear and nontrivial dynamical limit, known as the mean field limit. This lends a way to study large-width neural networks via analyzing the mean field limit. Recent works have successfully applied such analysis to two-layer networks and provided global convergence guarantees. The extension to multilayer ones however has been a highly challenging puzzle, and little is known about the optimization efficiency in the mean field regime when there are more than two layers. In this work, we prove a global convergence result for unregularized feedforward three-layer networks in the mean field regime. We first develop a rigorous framework to establish the mean field limit of three-layer networks under stochastic gradient descent training. To that end, we propose the idea of a neuronal embedding, which comprises of a fixed probability space that encapsulates neural networks of arbitrary sizes. The identified mean field limit is then used to prove a global convergence guarantee under suitable regularity and convergence mode assumptions, which – unlike previous works on two-layer networks – does not rely critically on convexity. Underlying the result is a universal approximation property, natural of neural networks, which importantly is shown to hold at any finite training time (not necessarily at convergence) via an algebraic topology argument.
1 INTRODUCTION
Interests in the theoretical understanding of the training of neural networks have led to the recent discovery of a new operating regime: the neural network and its learning rates are scaled appropriately, such that as the width tends to infinity, the network admits a limiting learning dynamics in which all parameters evolve nonlinearly with time1. This is known as the mean field (MF) limit (Mei et al. (2018); Chizat & Bach (2018); Rotskoff & Vanden-Eijnden (2018); Sirignano & Spiliopoulos (2018); Nguyen (2019); Araújo et al. (2019); Sirignano & Spiliopoulos (2019)). The four works Mei et al. (2018); Chizat & Bach (2018); Rotskoff & Vanden-Eijnden (2018); Sirignano & Spiliopoulos (2018) led the first wave of efforts in 2018 and analyzed two-layer neural networks. They established a connection between the network under training and its MF limit. They then used the MF limit to prove that two-layer networks could be trained to find (near) global optima using variants of gradient descent, despite non-convexity (Mei et al. (2018); Chizat & Bach (2018)). The MF limit identified by these works assumes the form of gradient flows in the measure space, which factors out the invariance from the action of a symmetry group on the model. Interestingly, by lifting to the measure space, with a convex loss function (e.g. squared loss), one obtains a limiting optimization problem that is convex (Bengio et al. (2006); Bach (2017)). The analyses of Mei et al. (2018); ∗This paper is a conference submission. We refer to the work Nguyen & Pham (2020) and its companion note Pham & Nguyen (2020) for generalizations as well as other conditions for global convergence in the case of multilayer neural networks. †Department of Mathematics, Stanford University. This work was done in parts while H. T. Pham was at the University of Cambridge. ‡The Voleon Group. This work was done while P.-M. Nguyen was at Stanford University. §The author ordering is randomized. 1This is to be contrasted with another major operating regime (the NTK regime) where parameters essentially do not evolve and the model behaves like a kernel method (Jacot et al. (2018); Chizat et al. (2019); Du et al. (2019); Allen-Zhu et al. (2019); Zou et al. (2018); Lee et al. (2019)).
Chizat & Bach (2018) utilize convexity, although the mechanisms to attain global convergence in these works are more sophisticated than the usual convex optimization setup in Euclidean spaces.
The extension to multilayer networks has enjoyed much less progresses. The works Nguyen (2019); Araújo et al. (2019); Sirignano & Spiliopoulos (2019) argued, heuristically or rigorously, for the existence of a MF limiting behavior under gradient descent training with different assumptions. In fact, it has been argued that the difficulty is not simply technical, but rather conceptual (Nguyen (2019)): for instance, the presence of intermediate layers exhibits multiple symmetry groups with intertwined actions on the model. Convergence to the global optimum of the model under gradientbased optimization has not been established when there are more than two layers.
In this work, we prove a global convergence guarantee for feedforward three-layer networks trained with unregularized stochastic gradient descent (SGD) in the MF regime. After an introduction of the three-layer setup and its MF limit in Section 2, our development proceeds in two main steps:
Step 1 (Theorem 3 in Section 3): We first develop a rigorous framework that describes the MF limit and establishes its connection with a large-width SGD-trained three-layer network. Here we propose the new idea of a neuronal embedding, which comprises of an appropriate non-evolving probability space that encapsulates neural networks of arbitrary sizes. This probability space is in general abstract and is constructed according to the (not necessarily i.i.d.) initialization scheme of the neural network. This idea addresses directly the intertwined action of multiple symmetry groups, which is the aforementioned conceptual obstacle (Nguyen (2019)), thereby covering setups that cannot be handled by formulations in Araújo et al. (2019); Sirignano & Spiliopoulos (2019) (see also Section 5 for a comparison). Our analysis follows the technique from Sznitman (1991); Mei et al. (2018) and gives a quantitative statement: in particular, the MF limit yields a good approximation of the neural network as long as n−1min log nmax 1 independent of the data dimension, where nmin and nmax are the minimum and maximum of the widths.
Step 2 (Theorem 8 in Section 4): We prove that the MF limit, given by our framework, converges to the global optimum under suitable regularity and convergence mode assumptions. Several elements of our proof are inspired by Chizat & Bach (2018); the technique in their work however does not generalize to our three-layer setup. Unlike previous two-layer analyses, we do not exploit convexity; instead we make use of a new element: a universal approximation property. The result turns out to be conceptually new: global convergence can be achieved even when the loss function is non-convex. An important crux of the proof is to show that the universal approximation property holds at any finite training time (but not necessarily at convergence, i.e. at infinite time, since the property may not realistically hold at convergence).
Together these two results imply a positive statement on the optimization efficiency of SGD-trained unregularized feedforward three-layer networks (Corollary 10). Our results can be extended to the general multilayer case (with new ideas and significantly more technical work) or used to obtain new global convergence guarantees in the two-layer case (Nguyen & Pham (2020); Pham & Nguyen (2020)). We choose to keep the current paper concise, with the three-layer case being a prototypical setup that conveys several of the basic ideas. Complete proofs are presented in appendices.
Notations. K denotes a generic constant that may change from line to line. |·| denotes the absolute value for a scalar and the Euclidean norm for a vector. For an integer n, we let [n] = {1, ..., n}.
2 THREE-LAYER NEURAL NETWORKS AND THE MEAN FIELD LIMIT
2.1 THREE-LAYER NEURAL NETWORK
We consider the following three-layer network at time k ∈ N≥0 that takes as input x ∈ Rd:
ŷ(x; W(k)) = ϕ3(H3(x; W(k))), (1)
H3(x; W(k)) = (1/n2) Σ_{j2=1}^{n2} w3(k, j2) ϕ2(H2(x, j2; W(k))),
H2(x, j2; W(k)) = (1/n1) Σ_{j1=1}^{n1} w2(k, j1, j2) ϕ1(〈w1(k, j1), x〉),
in which W (k) = (w1 (k, ·) ,w2 (k, ·, ·) ,w3 (k, ·)) consists of the weights2 w1 (k, j1) ∈ Rd, w2 (k, j1, j2) ∈ R and w3 (k, j2) ∈ R. Here ϕ1 : R → R, ϕ2 : R → R and ϕ3 : R → R are the activation functions, and the network has widths {n1, n2}. We train the network with SGD w.r.t. the loss L : R × R → R≥0. We assume that at each time k, we draw independently a fresh sample z (k) = (x (k) , y (k)) ∈ Rd ×R from a training distribution P . Given an initialization W (0), we update W (k) according to
w3(k + 1, j2) = w3(k, j2) − ε ξ3(kε) Grad3(z(k), j2; W(k)),
w2(k + 1, j1, j2) = w2(k, j1, j2) − ε ξ2(kε) Grad2(z(k), j1, j2; W(k)),
w1(k + 1, j1) = w1(k, j1) − ε ξ1(kε) Grad1(z(k), j1; W(k)),
in which j1 = 1, ..., n1, j2 = 1, ..., n2, ε ∈ R>0 is the learning rate, ξi : R≥0 → R≥0 is the learning rate schedule for wi, and for z = (x, y), we define
Grad3(z, j2; W(k)) = ∂2L(y, ŷ(x; W(k))) ϕ′3(H3(x; W(k))) ϕ2(H2(x, j2; W(k))),
Grad2(z, j1, j2; W(k)) = ∆H2(z, j2; W(k)) ϕ1(〈w1(k, j1), x〉),
Grad1(z, j1; W(k)) = ( (1/n2) Σ_{j2=1}^{n2} ∆H2(z, j2; W(k)) w2(k, j1, j2) ) ϕ′1(〈w1(k, j1), x〉) x,
∆H2(z, j2; W(k)) = ∂2L(y, ŷ(x; W(k))) ϕ′3(H3(x; W(k))) w3(k, j2) ϕ′2(H2(x, j2; W(k))).
We note that this setup follows the same scaling w.r.t. n1 and n2, which is applied to both the forward pass and the learning rates in the backward pass, as Nguyen (2019).
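For concreteness, the following is a minimal NumPy sketch of the forward pass (1) and of one SGD step under the above scaling. The activation choices ϕ1 = ϕ2 = tanh with identity ϕ3 and the squared-loss derivative ∂2L(y, ŷ) = ŷ − y are assumptions made only for this illustration (the paper's example setup in Section 2.2 uses tanh activations with a Huber loss); the function names and array shapes are likewise ours.

```python
import numpy as np

def forward(x, W):
    """Forward pass of the three-layer network, Eq. (1)."""
    w1, w2, w3 = W                       # shapes: (n1, d), (n1, n2), (n2,)
    a1 = np.tanh(w1 @ x)                 # phi1(<w1(j1), x>), shape (n1,)
    H2 = w2.T @ a1 / w1.shape[0]         # H2(x, j2) = (1/n1) sum_j1 w2(j1, j2) phi1(...)
    a2 = np.tanh(H2)                     # phi2(H2(x, j2))
    H3 = w3 @ a2 / w2.shape[1]           # H3(x) = (1/n2) sum_j2 w3(j2) phi2(...)
    return H3, (a1, H2, a2)              # yhat = phi3(H3); identity phi3 assumed

def sgd_step(x, y, W, k, eps, xi=(lambda t: 1.0,) * 3):
    """One SGD step with learning rate eps and schedules xi_i evaluated at k * eps."""
    w1, w2, w3 = W
    n2 = w2.shape[1]
    H3, (a1, H2, a2) = forward(x, W)
    dL = H3 - y                                   # d2 L(y, yhat) for squared loss (assumed)
    grad3 = dL * a2                               # Grad3(z, j2)
    dH2 = dL * w3 * (1.0 - a2 ** 2)               # Delta^H_2(z, j2); phi2' = 1 - tanh^2
    grad2 = np.outer(a1, dH2)                     # Grad2(z, j1, j2)
    grad1 = (w2 @ dH2 / n2 * (1.0 - a1 ** 2))[:, None] * x[None, :]   # Grad1(z, j1)
    t = k * eps
    return (w1 - eps * xi[0](t) * grad1,
            w2 - eps * xi[1](t) * grad2,
            w3 - eps * xi[2](t) * grad3)
```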
2.2 MEAN FIELD LIMIT
The MF limit is a continuous-time infinite-width analog of the neural network under training. To describe it, we first introduce the concept of a neuronal ensemble. Given a product probability space (Ω,F , P ) = (Ω1 × Ω2,F1 ×F2, P1 × P2), we independently sample Ci ∼ Pi, i = 1, 2. In the following, we use ECi to denote the expectation w.r.t. the random variable Ci ∼ Pi and ci to denote an arbitrary point ci ∈ Ωi. The space (Ω,F , P ) is referred to as a neuronal ensemble. Given a neuronal ensemble (Ω,F , P ), the MF limit is described by a time-evolving system with state/parameter W (t), where the time t ∈ R≥0 and W (t) = (w1 (t, ·) , w2 (t, ·, ·) , w3 (t, ·)) with w1 : R≥0 × Ω1 → Rd, w2 : R≥0 × Ω1 × Ω2 → R and w3 : R≥0 × Ω2 → R. It entails the quantities:
ŷ (x;W (t)) = ϕ3 (H3 (x;W (t))) ,
H3 (x;W (t)) = EC2 [w3 (t, C2)ϕ2 (H2 (x,C2;W (t)))] , H2 (x, c2;W (t)) = EC1 [w2 (t, C1, c2)ϕ1 (〈w1 (t, C1) , x〉)] .
Here for each t ∈ R≥0, w1 (t, ·) is (Ω1,F1)-measurable, and similar for w2 (t, ·, ·), w3 (t, ·). The MF limit evolves according to a continuous-time dynamics, described by a system of ODEs, which we refer to as the MF ODEs. Specifically, given an initialization W (0) = (w1 (0, ·) , w2 (0, ·, ·) , w3 (0, ·)), the dynamics solves:
∂tw3 (t, c2) = −ξ3 (t) ∆3 (c2;W (t)) , ∂tw2 (t, c1, c2) = −ξ2 (t) ∆2 (c1, c2;W (t)) ,
∂tw1 (t, c1) = −ξ1 (t) ∆1 (c1;W (t)) .
Here c1 ∈ Ω1, c2 ∈ Ω2, EZ denotes the expectation w.r.t. the data Z = (X,Y ) ∼ P , and for z = (x, y), we define
∆3(c2; W(t)) = EZ[∂2L(Y, ŷ(X; W(t))) ϕ′3(H3(X; W(t))) ϕ2(H2(X, c2; W(t)))],
∆2(c1, c2; W(t)) = EZ[∆H2(Z, c2; W(t)) ϕ1(〈w1(t, c1), X〉)],
∆1(c1; W(t)) = EZ[EC2[∆H2(Z, C2; W(t)) w2(t, c1, C2)] ϕ′1(〈w1(t, c1), X〉) X],
∆H2(z, c2; W(t)) = ∂2L(y, ŷ(x; W(t))) ϕ′3(H3(x; W(t))) w3(t, c2) ϕ′2(H2(x, c2; W(t))).
2 To absorb the first layer's bias term to w1, we assume the input x to have 1 appended to the last entry.
In Appendix B, we show well-posedness of MF ODEs under the following regularity conditions.
Assumption 1 (Regularity). We assume that ϕ1 and ϕ2 are K-bounded, ϕ′1, ϕ′2 and ϕ′3 are K-bounded and K-Lipschitz, ϕ′2 and ϕ′3 are non-zero everywhere, ∂2L(·, ·) is K-Lipschitz in the second variable and K-bounded, and |X| ≤ K with probability 1. Furthermore ξ1, ξ2 and ξ3 are K-bounded and K-Lipschitz.
Theorem 1. Under Assumption 1, given any neuronal ensemble and an initialization W (0) such that3 ess-sup |w2 (0, C1, C2)| , ess-sup |w3 (0, C2)| ≤ K, there exists a unique solution W to the MF ODEs on t ∈ [0,∞).
An example of a suitable setup is ϕ1 = ϕ2 = tanh, ϕ3 is the identity, L is the Huber loss, although a non-convex sufficiently smooth loss function suffices. In fact, all of our developments can be easily modified to treat the squared loss with an additional assumption |Y | ≤ K with probability 1. So far, given an arbitrary neuronal ensemble (Ω,F , P ), for each initialization W (0), we have defined a MF limit W (t). The connection with the neural network’s dynamics W (k) is established in the next section.
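To illustrate how the expectations over C1 ∼ P1 and C2 ∼ P2 replace the finite averages of Section 2.1, here is a hedged Monte Carlo sketch of the MF forward quantities H2 and H3 at a fixed time. The toy neuronal ensemble (Ω1 = Ω2 = R with standard Gaussian P1, P2), the particular functions standing in for w1, w2, w3, and the tanh/identity activations are all assumptions of the example, not constructions from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 3
# Toy neuronal ensemble: Omega_1 = Omega_2 = R, P_1 = P_2 = N(0, 1), all assumed.
w1 = lambda c1: np.outer(c1, np.ones(d))                 # w1(t, c1) in R^d at a fixed time t
w2 = lambda c1, c2: np.outer(np.sin(c1), np.cos(c2))     # w2(t, c1, c2) in R
w3 = lambda c2: np.tanh(c2)                              # w3(t, c2) in R

def mf_forward(x, m1=4096, m2=4096):
    """Monte Carlo estimate of H3(x; W(t)) (and H2 along the way) for the MF limit."""
    c1 = rng.standard_normal(m1)                 # samples of C1 ~ P1
    c2 = rng.standard_normal(m2)                 # samples of C2 ~ P2
    a1 = np.tanh(w1(c1) @ x)                     # phi1(<w1(C1), x>), shape (m1,)
    H2 = w2(c1, c2).T @ a1 / m1                  # approximates E_{C1}[w2(C1, c2) phi1(...)]
    H3 = np.mean(w3(c2) * np.tanh(H2))           # approximates E_{C2}[w3(C2) phi2(H2(x, C2))]
    return H3                                    # yhat = phi3(H3); identity phi3 assumed

print(mf_forward(np.ones(d)))
```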
3 CONNECTION BETWEEN NEURAL NETWORK AND ITS MEAN FIELD LIMIT
3.1 NEURONAL EMBEDDING AND THE COUPLING PROCEDURE
To formalize a connection between the neural network and its MF limit, we consider their initializations. In practical scenarios, to set the initial parameters W (0) of the neural network, one typically randomizes W (0) according to some distributional law ρ. We note that since the neural network is defined w.r.t. a set of finite integers {n1, n2}, so is ρ. We consider a family Init of initialization laws, each of which is indexed by the set {n1, n2}:
Init = {ρ : ρ is the initialization law of a neural network of size {n1, n2} , n1, n2 ∈ N>0}.
This is helpful when one is to take a limit that sends n1, n2 → ∞, in which case the size of this family |Init| is infinite. More generally we allow |Init| < ∞ (for example, Init contains a single law ρ of a network of size {n1, n2} and hence |Init| = 1). We make the following crucial definition.
Definition 2. Given a family of initialization laws Init, we call (Ω, F, P, {w0i}i=1,2,3) a neuronal embedding of Init if the following holds:
1. (Ω,F , P ) = (Ω1 × Ω2,F1 ×F2, P1 × P2) a product measurable space. As a reminder, we call it a neuronal ensemble.
2. The deterministic functions w01 : Ω1 → Rd, w02 : Ω1 × Ω2 → R and w03 : Ω2 → R are such that, for each index {n1, n2} of Init and the law ρ of this index, if — with an abuse of notations — we independently sample {Ci (ji)}ji∈[ni] ∼ Pi i.i.d. for each i = 1, 2, then
Law(w01(C1(j1)), w02(C1(j1), C2(j2)), w03(C2(j2)), ji ∈ [ni], i = 1, 2) = ρ.
To proceed, given Init and {n1, n2} in its index set, we perform the following coupling procedure:
1. Let (Ω,F , P, { w0i } i=1,2,3 ) be a neuronal embedding of Init.
2. We form the MF limit W (t) (for t ∈ R≥0) associated with the neuronal ensemble (Ω,F , P ) by setting the initialization W (0) to w1 (0, ·) = w01 (·), w2 (0, ·, ·) = w02 (·, ·) and w3 (0, ·) = w03 (·) and running the MF ODEs described in Section 2.2.
3We recall the definition of ess-sup in Appendix A.
3. We independently sample Ci (ji) ∼ Pi for i = 1, 2 and ji = 1, ..., ni. We then form the neural network initialization W (0) with w1 (0, j1) = w01 (C1 (j1)), w2 (0, j1, j2) = w02 (C1 (j1) , C2 (j2)) and w3 (0, j2) = w 0 3 (C2 (j2)) for j1 ∈ [n1], j2 ∈ [n2]. We obtain
the network’s trajectory W (k) for k ∈ N≥0 as in Section 2.1, with the data z (k) generated independently of {Ci (ji)}i=1,2 and hence W (0).
We can then define a measure of closeness between W(⌊t/ε⌋) and W(t) for t ∈ [0, T]: DT(W, W) = sup{ |w1(⌊t/ε⌋, j1) − w1(t, C1(j1))|, |w2(⌊t/ε⌋, j1, j2) − w2(t, C1(j1), C2(j2))|, |w3(⌊t/ε⌋, j2) − w3(t, C2(j2))| : t ≤ T, j1 ≤ n1, j2 ≤ n2 }. (2)
Note that W (t) is a deterministic trajectory independent of {n1, n2}, whereas W (k) is random for all k ∈ N≥0 due to the randomness of {Ci (ji)}i=1,2 and the generation of the training data z (k). Similarly DT (W,W) is a random quantity.
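A small sketch of how the closeness measure (2) could be evaluated numerically for a coupled pair of trajectories follows; the containers for the trajectories, the time grid used to approximate the supremum over t, and the assumption that the SGD trajectory stores at least ⌊T/ε⌋ iterates are all choices of this illustration.

```python
import numpy as np

def coupling_distance(W_nn, W_mf, eps, T, grid_size=200):
    """Approximate D_T(W, W) of Eq. (2) for a coupled SGD / MF pair of trajectories.

    W_nn[k] = (w1, w2, w3) after SGD step k; W_mf(t) returns the MF weights at time t
    evaluated at the sampled coupling points C_1(j_1), C_2(j_2), with matching shapes.
    """
    dist = 0.0
    for t in np.linspace(0.0, T, grid_size):      # grid approximation of the sup over t
        w1, w2, w3 = W_nn[int(np.floor(t / eps))]
        v1, v2, v3 = W_mf(t)
        dist = max(dist,
                   np.max(np.linalg.norm(w1 - v1, axis=1)),   # |.| is Euclidean on R^d
                   np.max(np.abs(w2 - v2)),
                   np.max(np.abs(w3 - v3)))
    return dist
```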
The idea of the coupling procedure is closely related to the coupling argument in Sznitman (1991); Mei et al. (2018). Here, instead of playing the role of a proof technique, the coupling serves as a vehicle to establish the connection between W and W on the basis of the neuronal embedding. This connection is shown in Theorem 3 below, which gives an upper bound on DT (W,W).
We note that the coupling procedure can be carried out to provide a connection between W and W as long as there exists a neuronal embedding for Init. Later in Section 4.1, we show that for a common initialization scheme (in particular, i.i.d. initialization) for Init, there exists a neuronal embedding. Theorem 3 applies to, but is not restricted to, this initialization scheme.
3.2 MAIN RESULT: APPROXIMATION BY THE MF LIMIT
Assumption 2 (Initialization of second and third layers). We assume that ess-sup |w02(C1, C2)|, ess-sup |w03(C2)| ≤ K, where w02 and w03 are as described in Definition 2.
Theorem 3. Given a family Init of initialization laws and a tuple {n1, n2} that is in the index set of Init, perform the coupling procedure as described in Section 3.1. Fix a terminal time T ∈ N≥0. Under Assumptions 1 and 2, for ε ≤ 1, we have with probability at least 1 − 2δ,
DT(W, W) ≤ e^{KT} ( 1/√nmin + √ε ) log^{1/2}( 3(T + 1) nmax²/δ + e ) ≡ errδ,T(ε, n1, n2),
in which nmin = min{n1, n2}, nmax = max{n1, n2}, and KT = K(1 + T^K).
The theorem gives a connection between W(⌊t/ε⌋), which is defined upon finite widths n1 and n2, and the MF limit W(t), whose description is independent of n1 and n2. It lends a way to extract properties of the neural network in the large-width regime. Corollary 4. Under the same setting as Theorem 3, consider any test function ψ : R × R → R which is K-Lipschitz in the second variable uniformly in the first variable (an example of ψ is the loss L). For any δ > 0, with probability at least 1 − 3δ,
sup_{t≤T} |EZ[ψ(Y, ŷ(X; W(⌊t/ε⌋)))] − EZ[ψ(Y, ŷ(X; W(t)))]| ≤ e^{KT} errδ,T(ε, n1, n2).
These bounds hold for any n1 and n2, similar to Mei et al. (2018); Araújo et al. (2019), in contrast with non-quantitative results in Chizat & Bach (2018); Sirignano & Spiliopoulos (2019). These bounds suggest that n1 and n2 can be chosen independent of the data dimension d. This agrees with the experiments in Nguyen (2019), which found width ≈ 1000 to be typically sufficient to observe MF behaviors in networks trained with real-life high-dimensional data.
We observe that the MF trajectory W(t) is defined as per the choice of the neuronal embedding (Ω, F, P, {w0i}i=1,2,3), which may not be unique. On the other hand, the neural network's trajectory W(k) depends on the randomization of the initial parameters W(0) according to an initialization law from the family Init (as well as the data z(k)) and hence is independent of this choice. Another corollary of Theorem 3 is that given the same family Init, the law of the MF trajectory is insensitive to the choice of the neuronal embedding of Init.
Corollary 5. Consider a family Init of initialization laws, indexed by a set of tuples {m1, m2} that contains a sequence of indices {m1(m), m2(m) : m ∈ N} in which, as m → ∞, min{m1(m), m2(m)}^{-1} log(max{m1(m), m2(m)}) → 0. Let W(t) and Ŵ(t) be two MF trajectories associated with two choices of neuronal embeddings of Init, (Ω, F, P, {w0i}i=1,2,3) and (Ω̂, F̂, P̂, {ŵ0i}i=1,2,3). The following statement holds for any T ≥ 0 and any two positive integers n1 and n2: if we independently sample Ci(ji) ∼ Pi and Ĉi(ji) ∼ P̂i for ji ∈ [ni], i = 1, 2, then Law(W(n1, n2, T)) = Law(Ŵ(n1, n2, T)), where we define W(n1, n2, T) as the below collection w.r.t. W(t), and similarly define Ŵ(n1, n2, T) w.r.t. Ŵ(t):
W (n1, n2, T ) = { w1 (t, C1 (j1)) , w2 (t, C1 (j1) , C2 (j2)) , w3 (t, C2 (j2)) :
j1 ∈ [n1] , j2 ∈ [n2] , t ∈ [0, T ] } .
The proofs are deferred to Appendix C.
4 CONVERGENCE TO GLOBAL OPTIMA
In this section, we prove a global convergence guarantee for three-layer neural networks via the MF limit. We consider a common class of initialization: i.i.d. initialization.
4.1 I.I.D. INITIALIZATION
Definition 6. An initialization law ρ for a neural network of size {n1, n2} is called a (ρ1, ρ2, ρ3)-i.i.d. initialization (or i.i.d. initialization, for brevity), where ρ1, ρ2 and ρ3 are probability measures over Rd, R and R respectively, if {w1(0, j1)}j1∈[n1] are generated i.i.d. according to ρ1, {w2(0, j1, j2)}j1∈[n1], j2∈[n2] are generated i.i.d. according to ρ2, and {w3(0, j2)}j2∈[n2] are generated i.i.d. according to ρ3, with w1, w2 and w3 independent of each other.
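As a simple illustration, here is a sketch of drawing one (ρ1, ρ2, ρ3)-i.i.d. initialization; the particular choices of ρ1 (standard Gaussian, so that supp ρ1 = Rd as in Assumption 3.1 below) and of bounded uniform ρ2, ρ3 (compatible with Assumption 2) are assumptions made only for the example.

```python
import numpy as np

def iid_init(n1, n2, d, seed=0):
    """Draw W(0): rows of w1 i.i.d. from rho1, entries of w2 from rho2 and of w3 from rho3."""
    rng = np.random.default_rng(seed)
    w1 = rng.normal(size=(n1, d))                 # rho1 = N(0, I_d): supp rho1 = R^d (cf. Assumption 3.1)
    w2 = rng.uniform(-1.0, 1.0, size=(n1, n2))    # rho2 bounded (cf. Assumption 2)
    w3 = rng.uniform(-1.0, 1.0, size=n2)          # rho3 bounded (cf. Assumption 2)
    return w1, w2, w3
```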
Observe that given ( ρ1, ρ2, ρ3 ) , one can build a family Init of i.i.d. initialization laws that contains any index set {n1, n2}. Furthermore i.i.d. initializations are supported by our framework, as stated in the following proposition and proven in Appendix D.
Proposition 7. There exists a neuronal embedding ( Ω,F , P, { w0i } i=1,2,3 ) for any family Init of
initialization laws, which are ( ρ1, ρ2, ρ3 ) -i.i.d.
4.2 MAIN RESULT: GLOBAL CONVERGENCE
To measure the learning quality, we consider the loss averaged over the data Z ∼ P:
L (V ) = EZ [L (Y, ŷ (X;V ))] ,
where V = (v1, v2, v3) is a set of three measurable functions v1 : Ω1 → Rd, v2 : Ω1 × Ω2 → R, v3 : Ω2 → R.
Assumption 3. Consider a neuronal embedding ( Ω,F , P, { w0i } i=1,2,3 ) of the ( ρ1, ρ2, ρ3 ) -i.i.d. initialization, and the associated MF limit with initialization W (0) such that w1 (0, ·) = w01 (·), w2 (0, ·, ·) = w02 (·, ·) and w3 (0, ·) = w03 (·). Assume:
1. Support: The support of ρ1 is Rd.
2. Convergence mode: There exist limits w̄1, w̄2 and w̄3 such that as t→∞,
E [(1 + |w̄3(C2)|) |w̄3(C2)| |w̄2(C1, C2)| |w1(t, C1)− w̄1(C1)|]→ 0, (3) E [(1 + |w̄3(C2)|) |w̄3(C2)| |w2(t, C1, C2)− w̄2(C1, C2)|]→ 0, (4)
E [(1 + |w̄3(C2)|) |w3(t, C2)− w̄3(C2)|]→ 0, (5) ess-supEC2 [|∂tw2 (t, C1, C2)|]→ 0. (6)
3. Universal approximation: { ϕ1 (〈u, ·〉) : u ∈ Rd } has dense span in L2 (PX) (the space
of square integrable functions w.r.t. PX the distribution of the input X).
Assumption 3 is inspired by the work Chizat & Bach (2018) on two-layer networks, with certain differences. Assumptions 3.1 and 3.3 are natural in neural network learning (Cybenko (1989); Chen & Chen (1995)), while we note Chizat & Bach (2018) does not utilize universal approximation. Similar to Chizat & Bach (2018), Assumption 3.2 is technical and does not seem removable. Note that this assumption specifies the mode of convergence and is not an assumption on the limits w̄1, w̄2 and w̄3. Specifically conditions (3)-(5) are similar to the convergence assumption in Chizat & Bach (2018). We differ from Chizat & Bach (2018) fundamentally in the essential supremum condition (6). On one hand, this condition helps avoid the Morse-Sard type condition in Chizat & Bach (2018), which is difficult to verify in general and not simple to generalize to the three-layer case. On the other hand, it turns out to be a natural assumption to make, in light of Remark 9 below.
We now state the main result of the section. The proof is in Appendix D. Theorem 8. Consider a neuronal embedding ( Ω,F , P, { w0i } i=1,2,3 ) of ( ρ1, ρ2, ρ3 ) -i.i.d. initialization. Consider the MF limit corresponding to the network (1), such that they are coupled together by the coupling procedure in Section 3.1, under Assumptions 1, 2 and 3. For simplicity, assume ξ1 (·) = ξ2 (·) = 1. Further assume either:
• (untrained third layer) ξ3(·) = 0 and w03(C2) ≠ 0 with a positive probability, or
• (trained third layer) ξ3(·) = 1 and L(w01, w02, w03) < EZ[L(Y, ϕ3(0))].
Then the following hold:
• Case 1 (convex loss): If L is convex in the second variable, then
lim_{t→∞} L(W(t)) = inf_V L(V) = inf_{ỹ: Rd→R} EZ[L(Y, ỹ(X))].
• Case 2 (generic non-negative loss): Suppose that ∂2L (y, ŷ) = 0 implies L (y, ŷ) = 0. If y = y(x) is a function of x, then L (W (t))→ 0 as t→∞.
Remarkably here the theorem allows for non-convex losses. A further inspection of the proof shows that no convexity-based property is used in Case 2 (see, for instance, the high-level proof sketch in Section 4.3); in Case 1, the key steps in the proof are the same, and the convexity of the loss function serves as a convenient technical assumption to handle the arbitrary extra randomness of Y conditional on X . We also remark that the same proof of global convergence should extend beyond the specific fully-connected architecture considered here. Similar to previous results on SGD-trained two-layer networks Mei et al. (2018); Chizat & Bach (2018), our current result in the three-layer case is non-quantitative. Remark 9. Interestingly there is a converse relation between global convergence and the essential supremum condition (6): under the same setting, global convergence is unattainable if condition (6) does not hold. A similar observation was made in Wojtowytsch (2020) for two-layer ReLU networks. A precise statement and its proof can be found in Appendix E.
The following result is straightforward from Theorem 8 and Corollary 4, establishing the optimization efficiency of the neural network with SGD. Corollary 10. Consider the neural network (1). Under the same setting as Theorem 8, in Case 1,
lim_{t→∞} lim_{n1,n2} lim_{ε→0} EZ[L(Y, ŷ(X; W(⌊t/ε⌋)))] = inf_{f1,f2,f3} L(f1, f2, f3) = inf_{ỹ} EZ[L(Y, ỹ(X))]
in probability, where the limit of the widths is such that min{n1, n2}^{-1} log(max{n1, n2}) → 0. In Case 2, the same holds with the right-hand side being 0.
4.3 HIGH-LEVEL IDEA OF THE PROOF
We give a high-level discussion of the proof. This is meant to provide intuitions and explain the technical crux, so our discussion may simplify and deviate from the actual proof.
Our first insight is to look at the second layer’s weight w2. At convergence time t = ∞, we expect to have zero movement and hence, denoting W (∞) = (w̄1, w̄2, w̄3):
∆2 (c1, c2;W (∞)) = EZ [ ∆H2 (Z, c2;W (∞))ϕ1 (〈w̄1 (c1) , X〉) ] = 0,
for P -almost every c1, c2. Suppose for the moment that we are allowed to make an additional (strong) assumption on the limit w̄1: supp (w̄1 (C1)) = Rd. It implies that the universal approximation property, described in Assumption 3, holds at t = ∞; more specifically, it implies {ϕ1 (〈w̄1 (c1) , ·〉) : c1 ∈ Ω1} has dense span in L2 (PX). This thus yields
EZ[∆H2(Z, c2; W(∞)) | X = x] = 0, for P-almost every x. Recalling the definition of ∆H2, one can then easily show that
EZ [∂2L (Y, ŷ (X;W (∞)))|X = x] = 0.
Global convergence follows immediately; for example, in Case 2 of Theorem 8, this is equivalent to that ∂2L (y (x) , ŷ (x;W (∞))) = 0 and hence L (y (x) , ŷ (x;W (∞))) = 0 for P-almost every x. In short, the gradient flow structure of the dynamics of w2 provides a seamless way to obtain global convergence. Furthermore there is no critical reliance on convexity.
However this plan of attack has a potential flaw in the strong assumption that supp (w̄1 (C1)) = Rd, i.e. the universal approximation property holds at convergence time. Indeed there are setups where it is desirable that supp (w̄1 (C1)) ≠ Rd (Mei et al. (2018); Chizat (2019)); for instance, it is the case where the neural network is to learn some “sparse and spiky” solution, and hence the weight distribution at convergence time, if successfully trained, cannot have full support. On the other hand, one can entirely expect that if supp (w1 (0, C1)) = Rd initially at t = 0, then supp (w1 (t, C1)) = Rd at any finite t ≥ 0. The crux of our proof is to show the latter without assuming supp (w̄1 (C1)) = Rd.
This task is the more major technical step of the proof. To that end, we first show that there exists a mapping (t, u) 7→ M (t, u) that maps from (t, w1 (0, c1)) = (t, u) to w1 (t, c1) via a careful measurability argument. This argument rests on a scheme that exploits the symmetry in the network evolution. Furthermore the map M is shown to be continuous. The desired conclusion then follows from an algebraic topology argument that the map M preserves a homotopic structure through time.
5 DISCUSSION
The MF literature is fairly recent. A long line of works (Nitanda & Suzuki (2017); Mei et al. (2018); Chizat & Bach (2018); Rotskoff & Vanden-Eijnden (2018); Sirignano & Spiliopoulos (2018); Wei et al. (2019); Javanmard et al. (2019); Mei et al. (2019); Shevchenko & Mondelli (2019); Wojtowytsch (2020)) have focused mainly on two-layer neural networks, taking an interacting particle system approach to describe the MF limiting dynamics as Wasserstein gradient flows. The three works Nguyen (2019); Araújo et al. (2019); Sirignano & Spiliopoulos (2019) independently develop different formulations for the MF limit in multilayer neural networks, under different assumptions. These works take perspectives that are different from ours. In particular, while the central object in Nguyen (2019) is a new abstract representation of each individual neuron, our neuronal embedding idea instead takes a keen view on a whole ensemble of neurons. Likewise our idea is also distant from Araújo et al. (2019); Sirignano & Spiliopoulos (2019): the central objects in Araújo et al. (2019) are paths over the weights across layers; those in Sirignano & Spiliopoulos (2019) are time-dependent functions of the initialization, which are simplified upon i.i.d. initializations.
The result of our perspective is a neuronal embedding framework that allows one to describe the MF limit in a clean and rigorous manner. In particular, it avoids extra assumptions made in Araújo et al. (2019); Sirignano & Spiliopoulos (2019): unlike our work, Araújo et al. (2019) assumes untrained first and last layers and requires non-trivial technical tools; Sirignano & Spiliopoulos (2019) takes an unnatural sequential limit n1 → ∞ before n2 → ∞ and proves a non-quantitative result, unlike Theorem 3 which only requires sufficiently large min {n1, n2}. We note that Theorem 3 can be extended to general multilayer networks using the neuronal embedding idea. The advantages of our framework come from the fact that while MF formulations in Araújo et al. (2019); Sirignano & Spiliopoulos (2019) are specific to and exploit i.i.d. initializations, our formulation does not. Remarkably as shown in Araújo et al. (2019), when there are more than three layers and no biases,
i.i.d. initializations lead to a certain simplifying effect on the MF limit. On the other hand, our framework supports non-i.i.d. initializations which avoid the simplifying effect, as long as there exist suitable neuronal embeddings (Nguyen & Pham (2020)). Although our global convergence result in Theorem 8 is proven in the context of i.i.d. initializations for three-layer networks, in the general multilayer case, it turns out that the use of a special type of non-i.i.d. initialization allows one to prove a global convergence guarantee (Pham & Nguyen (2020)).
In this aspect, our framework follows closely the spirit of the work Nguyen (2019), whose MF formulation is also not specific to i.i.d. initializations. Yet though similar in the spirit, Nguyen (2019) develops a heuristic formalism and does not prove global convergence.
Global convergence in the two-layer case with convex losses has enjoyed multiple efforts with a lot of new and interesting results (Mei et al. (2018); Chizat & Bach (2018); Javanmard et al. (2019); Rotskoff et al. (2019); Wei et al. (2019)). Our work is the first to establish a global convergence guarantee for SGD-trained three-layer networks in the MF regime. Our proof sends a new message that the crucial factor is not necessarily convexity, but rather that the whole learning trajectory maintains the universal approximation property of the function class represented by the first layer’s neurons, together with the gradient flow structure of the second layer’s weights. As a remark, our approach can also be applied to prove a similar global convergence guarantee for two-layer networks, removing the convex loss assumption in previous works (Nguyen & Pham (2020)). The recent work Lu et al. (2020) on a MF resnet model (a composition of many two-layer MF networks) and a recent update of Sirignano & Spiliopoulos (2019) essentially establish conditions of stationary points to be global optima. They however require strong assumptions on the support of the limit point. As explained in Section 4.3, we analyze the training dynamics without such assumption and in fact allow it to be violated.
Our global convergence result is non-quantitative. An important, highly challenging future direction is to develop a quantitative version of global convergence; previous works on two-layer networks Javanmard et al. (2019); Wei et al. (2019); Rotskoff et al. (2019); Chizat (2019) have done so under sophisticated modifications of the architecture and training algorithms.
Finally we remark that our insights here can be applied to prove similar global convergence guarantees and derive other sufficient conditions for global convergence of two-layer or multilayer networks (Nguyen & Pham (2020); Pham & Nguyen (2020)).
ACKNOWLEDGEMENT
H. T. Pham would like to thank Jan Vondrak for many helpful discussions and in particular for the shorter proof of Lemma 19. We would like to thank Andrea Montanari for the succinct description of the difficulty in extending the mean field formulation to the multilayer case, in that there are multiple symmetry group actions in a multilayer network.
A NOTATIONAL PRELIMINARIES
For a real-valued random variable Z defined on a probability space (Ω,F , P ), we recall
ess-supZ = inf {z ∈ R : P (Z > z) = 0} .
We also introduce some convenient definitions which we use throughout the appendices. For a set of neural network’s parameter W, we define
|||W|||_T = max{ max_{j1≤n1, j2≤n2} sup_{t≤T} |w2(⌊t/ε⌋, j1, j2)|, max_{j2≤n2} sup_{t≤T} |w3(⌊t/ε⌋, j2)| }.
Similarly for a set of MF parameters W, we define:
|||W|||_T = max{ ess-sup sup_{t≤T} |w2(t, C1, C2)|, ess-sup sup_{t≤T} |w3(t, C2)| }.
For two sets of neural network’s parameters W′, W′′, we define their distance: ‖W′ − W′′‖_T = sup{ |w′1(⌊t/ε⌋, j1) − w′′1(⌊t/ε⌋, j1)|, |w′2(⌊t/ε⌋, j1, j2) − w′′2(⌊t/ε⌋, j1, j2)|, |w′3(⌊t/ε⌋, j2) − w′′3(⌊t/ε⌋, j2)| : t ∈ [0, T], j1 ∈ [n1], j2 ∈ [n2] }.
Similarly for two sets of MF parameters W′, W′′, we define their distance:
‖W′ − W′′‖_T = ess-sup sup_{t∈[0,T]} { |w′1(t, C1) − w′′1(t, C1)|, |w′2(t, C1, C2) − w′′2(t, C1, C2)|, |w′3(t, C2) − w′′3(t, C2)| }.
B EXISTENCE AND UNIQUENESS OF THE SOLUTION TO MF ODES
We first collect some a priori estimates.
Lemma 11. Under Assumption 1, consider a solution W to the MF ODEs with initialization W(0) such that |||W|||_0 < ∞. If this solution exists, it satisfies the following a priori bounds, for any T ≥ 0:
ess-sup sup_{t≤T} |w3(t, C2)| ≤ |||W|||_0 + KT ≡ |||W|||_0 + K0,3(T),
ess-sup sup_{t≤T} |w2(t, C1, C2)| ≤ |||W|||_0 + KT·K0,3(T) ≡ |||W|||_0 + K0,2(T),
and consequently, |||W|||_T ≤ 1 + max{K0,2(T), K0,3(T)}.
Proof. The bounds can be obtained easily by bounding the respective initializations and update quantities separately. In particular,
ess-sup sup_{t≤T} |w3(t, C2)| ≤ ess-sup |w3(0, C2)| + T ess-sup sup_{t≤T} |∂w3(t, C2)/∂t| ≤ |||W|||_0 + KT,
ess-sup sup_{t≤T} |w2(t, C1, C2)| ≤ ess-sup |w2(0, C1, C2)| + T ess-sup sup_{t≤T} |∂w2(t, C1, C2)/∂t|
≤ ess-sup |w2(0, C1, C2)| + KT ess-sup sup_{t≤T} |w3(t, C2)|
≤ |||W|||_0 + KT·K0,3(T).
Inspired by the a priori bounds in Lemma 11, given an arbitrary terminal time T and the initialization W (0), let us consider:
• for a tuple (a, b) ∈ R2≥0, a space WT (a, b) of W ′ = (W ′ (t))t≤T = (w′1 (t, ·) , w′2 (t, ·, ·) , w′3 (t, ·))t≤T such that
ess-sup sup t≤T |w′3 (t, C2)| ≤ b, ess-sup sup t≤T |w′2 (t, C1, C2)| ≤ a,
where w′1 : R≥0 × Ω1 → Rd, w′2 : R≥0 × Ω1 × Ω2 → R, w′3 : R≥0 × Ω2 → R,
• for a tuple (a, b) ∈ R2≥0 and W(0), a space W+_T(a, b, W(0)) of W′ ∈ WT(a, b) such that W′(0) = W(0) additionally (and hence every W′ in this space shares the same initialization W(0)).
We equip the spaces with the metric ‖W′ − W′′‖_T. It is easy to see that both spaces are complete. Note that Lemma 11 implies, under Assumption 1 and |||W|||_0 < ∞, that any MF solution W, if it exists, is in WT(|||W|||_0 + K0,2(T), |||W|||_0 + K0,3(T)). For the proof of Theorem 1, we work mainly with W+_T(|||W|||_0 + K0,2(T), |||W|||_0 + K0,3(T), W(0)), although several intermediate lemmas are proven in more generality for other uses.
Lemma 12. Under Assumption 1, for T ≥ 0, any W′, W′′ ∈ WT(a, b) and almost every z ∼ P:
ess-sup sup_{t≤T} |∆H2(z, C2; W′(t))| ≤ Ka,b,
ess-sup sup_{t≤T} |H2(x, C2; W′(t)) − H2(x, C2; W′′(t))| ≤ Ka,b ‖W′ − W′′‖_T,
sup_{t≤T} |H3(x; W′(t)) − H3(x; W′′(t))| ≤ Ka,b ‖W′ − W′′‖_T,
sup_{t≤T} |∂2L(y, ŷ(x; W′(t))) − ∂2L(y, ŷ(x; W′′(t)))| ≤ Ka,b ‖W′ − W′′‖_T,
ess-sup sup_{t≤T} |∆H2(z, C2; W′(t)) − ∆H2(z, C2; W′′(t))| ≤ Ka,b ‖W′ − W′′‖_T,
where Ka,b ≥ 1 is a generic constant that grows polynomially with a and b.
Proof. The first bound is easy to see:
ess-sup sup t≤T ∣∣∆H2 (z, C2;W ′ (t))∣∣ ≤ ess-sup sup t≤T |w′3 (t, C2)| ≤ b.
We prove the second bound, invoking Assumption 1:
|H2 (x,C2;W ′ (t))−H2 (x,C2;W ′′ (t))| ≤ K |w′2 (t, C1, C2)| |ϕ1 (〈w′1 (t, C1) , x〉)− ϕ1 (〈w′′1 (t, C1) , x〉)|
+K |w′2 (t, C1, C2)− w′′2 (t, C1, C2)| ≤ K (|w′2 (t, C1, C2)|+ 1) ‖W ′ −W ′′‖T ,
which yields by the fact W ′ ∈ WT (a, b):
ess-sup sup t≤T |H2 (x,C2;W ′ (t))−H2 (x,C2;W ′′ (t))| ≤ K (a+ 1) ‖W ′ −W ′′‖T .
Consequently, we have:
|H3 (x;W ′ (t))−H3 (x;W ′′ (t))| ≤ K |w′3 (t, C2)| |ϕ2 (H2 (x,C2;W ′ (t)))− ϕ2 (H2 (x,C2;W ′′ (t)))| +K |w′3 (t, C2)− w′′3 (t, C2)| ≤ K |w′3 (t, C2)| |H2 (x,C2;W ′ (t))−H2 (x,C2;W ′′ (t))|
+K ‖W ′ −W ′′‖T , |∂2L (y, ŷ (x;W ′ (t)))− ∂2L (y, ŷ (x;W ′′ (t)))| ≤ K |ŷ (x;W ′ (t))− ŷ (x;W ′′ (t))|
≤ K |H3 (x;W ′ (t))−H3 (x;W ′′ (t))| ,
which then yield the third and fourth bounds by the fact W ′,W ′′ ∈ WT (a, b). Using these bounds, we obtain the last bound:∣∣∆H2 (z, C2;W ′ (t))−∆H2 (z, C2;W ′′ (t))∣∣
≤ K |w′3 (t, C2)| ( |∂2L (y, ŷ (x;W ′ (t)))− ∂2L (y, ŷ (x;W ′′ (t)))|
+ |H3 (x;W ′ (t))−H3 (x;W ′′ (t))|+ |H2 (x,C2;W ′ (t))−H2 (x,C2;W ′′ (t))| )
+K |w′3 (t, C2)− w′′3 (t, C2)| ,
from which the last bound follows.
To prove Theorem 1, for a given W(0), we define a mapping FW(0) that maps from W′ = (w′1, w′2, w′3) ∈ WT(a, b) to FW(0)(W′) = W̄′ = (w̄′1, w̄′2, w̄′3), defined by W̄′(0) = W(0) and
∂w̄′3(t, c2)/∂t = −ξ3(t) ∆3(c2; W′(t)),
∂w̄′2(t, c1, c2)/∂t = −ξ2(t) ∆2(c1, c2; W′(t)),
∂w̄′1(t, c1)/∂t = −ξ1(t) ∆1(c1; W′(t)).
Notice that the right-hand sides do not involve W̄ ′. Note that the MF ODEs’ solution, initialized at W (0), is a fixed point of this mapping.
We establish the following estimates for this mapping.
Lemma 13. Under Assumption 1, for T ≥ 0, any initialization W (0) and any W ′,W ′′ ∈ WT (a, b),
ess-sup sup s≤t |∆3 (C2;W ′ (s))−∆3 (C2;W ′′ (s))| ≤ Ka,b ‖W ′ −W ′′‖t ,
ess-sup sup s≤t |∆2 (C1, C2;W ′ (s))−∆2 (C1, C2;W ′′ (s))| ≤ Ka,b ‖W ′ −W ′′‖t ,
ess-sup sup s≤t |∆1 (C1;W ′ (s))−∆1 (C1;W ′′ (s))| ≤ Ka,b ‖W ′ −W ′′‖t ,
and consequently, if in addition W ′ (0) = W ′′ (0) (not necessarily equal W (0)), then
ess-sup sup t≤T |w̄′3 (t, C2)− w̄′′3 (t, C2)| ≤ Ka,b ∫ T 0 ‖W ′ −W ′′‖s ds,
ess-sup sup t≤T |w̄′2 (t, C1, C2)− w̄′′2 (t, C1, C2)| ≤ Ka,b ∫ T 0 ‖W ′ −W ′′‖s ds,
ess-sup sup t≤T |w̄′1 (t, C1)− w̄′′1 (t, C1)| ≤ Ka,b ∫ T 0 ‖W ′ −W ′′‖s ds,
in which W̄ ′ = (w̄′1, w̄ ′ 2, w̄ ′ 3) = FW (0) (W ′), W̄ ′′ = (w̄′′1 , w̄ ′′ 2 , w̄ ′′ 3 ) = FW (0) (W ′′) and Ka,b ≥ 1 is a generic constant that grows polynomially with a and b.
Proof. From Assumption 1 and the fact W ′,W ′′ ∈ WT (a, b), we get:
|∆3 (C2;W ′ (s))−∆3 (C2;W ′′ (s))| ≤ KEZ [|∂2L (Y, ŷ (X;W ′ (s)))− ∂2L (Y, ŷ (X;W ′′ (s)))|] +KEZ [|H3 (X;W ′ (s))−H3 (X;W ′′ (s))|] +KEZ [|H2 (X,C2;W ′ (s))−H2 (X,C2;W ′′ (s))|] ,
|∆2 (C1, C2;W ′ (s))−∆2 (C1, C2;W ′′ (s))| ≤ Ka,b |w′1 (s, C1)− w′′1 (s, C1)| +K ∣∣EZ [∆H2 (Z,C2;W ′ (s))−∆H2 (Z,C2;W ′′ (s))]∣∣ , |∆1 (C1;W ′ (s))−∆1 (C1;W ′′ (s))| ≤ Ka,bEZ [∣∣∆H2 (Z,C2;W ′ (s))−∆H2 (Z,C2;W ′′ (s))∣∣]
+Ka,b |w′2 (s, C1, C2)− w′′2 (s, C1, C2)| +Ka,b |w′1 (s, C1)− w′′1 (s, C1)| ,
from which the first three estimates then follow, in light of Lemma 12. The last three estimates then follow from the fact that W̄ ′ (0) = W̄ ′′ (0) and Assumption 1; for instance,
ess-sup sup t≤T |w̄′3 (t, C2)− w̄′′3 (t, C2)| ≤ ∫ T 0 ess-sup ∣∣∣∣ ∂∂tw̄′3 (s, C2)− ∂∂tw̄′′3 (s, C2) ∣∣∣∣ ds ≤ K
∫ T 0 ess-sup |∆3 (C2;W ′ (s))−∆3 (C2;W ′′ (s))| ds.
We are now ready to prove Theorem 1.
Proof of Theorem 1. We will use a Picard-type iteration. To lighten notations:
W+_T ≡ W+_T(|||W|||_0 + K0,2(T), |||W|||_0 + K0,3(T), W(0)), F ≡ FW(0).
Since |||W|||_0 ≤ K by assumption, we have |||W|||_0 + K0,2(T) + K0,3(T) ≤ KT. Recall that W+_T is complete. For an arbitrary T > 0, consider W′, W′′ ∈ W+_T. Lemma 13 yields:
‖F(W′) − F(W′′)‖_T ≤ KT ∫_0^T ‖W′ − W′′‖_s ds.
Note that F maps to W+_T under Assumption 1 by the same argument as Lemma 11. Hence we are allowed to iterate this inequality and get, for an arbitrary T > 0,
‖F^(k)(W′) − F^(k)(W′′)‖_T ≤ KT ∫_0^T ‖F^(k−1)(W′) − F^(k−1)(W′′)‖_{T2} dT2
≤ KT² ∫_0^T ∫_0^{T2} ‖F^(k−2)(W′) − F^(k−2)(W′′)‖_{T3} I(T2 ≤ T) dT3 dT2
...
≤ KT^k ∫_0^T ∫_0^{T2} ... ∫_0^{Tk} ‖W′ − W′′‖_{Tk+1} I(Tk ≤ ... ≤ T2 ≤ T) dTk+1 ... dT2
≤ (1/k!) KT^k ‖W′ − W′′‖_T.
By substituting W′′ = F(W′), we have:
Σ_{k=1}^{∞} ‖F^(k+1)(W′) − F^(k)(W′)‖_T = Σ_{k=1}^{∞} ‖F^(k)(W′′) − F^(k)(W′)‖_T ≤ Σ_{k=1}^{∞} (1/k!) KT^k ‖W′ − W′′‖_T < ∞.
Hence as k → ∞, F^(k)(W′) converges to a limit in W+_T, which is a fixed point of F. The uniqueness of the fixed point follows from the above estimate, since if W′ and W′′ are fixed points then
‖W′ − W′′‖_T = ‖F^(k)(W′) − F^(k)(W′′)‖_T ≤ (1/k!) KT^k ‖W′ − W′′‖_T,
while one can take k arbitrarily large. This proves that the solution exists and is unique on t ∈ [0, T ]. Since T is arbitrary, we have existence and uniqueness of the solution on the time interval [0,∞).
C CONNECTION BETWEEN THE NEURAL NET AND ITS MF LIMIT: PROOFS FOR SECTION 3
C.1 PROOF OF THEOREM 3
We construct an auxiliary trajectory, which we call the particle ODEs:
∂w̃3(t, j2)/∂t = −ξ3(t) EZ[∂2L(Y, ŷ(X; W̃(t))) ϕ′3(H3(X; W̃(t))) ϕ2(H2(X, j2; W̃(t)))],
∂w̃2(t, j1, j2)/∂t = −ξ2(t) EZ[∆H2(Z, j2; W̃(t)) ϕ1(〈w̃1(t, j1), X〉)],
∂w̃1(t, j1)/∂t = −ξ1(t) EZ[(1/n2) Σ_{j2=1}^{n2} ∆H2(Z, j2; W̃(t)) w̃2(t, j1, j2) ϕ′1(〈w̃1(t, j1), X〉) X],
in which j1 = 1, ..., n1, j2 = 1, ..., n2, W̃(t) = (w̃1(t, ·), w̃2(t, ·, ·), w̃3(t, ·)), and t ∈ R≥0. We specify the initialization W̃(0): w̃1(0, j1) = w01(C1(j1)), w̃2(0, j1, j2) = w02(C1(j1), C2(j2)) and w̃3(0, j2) = w03(C2(j2)). That is, it shares the same initialization with the neural network one W(0), and hence is coupled with the neural network and the MF ODEs. Roughly speaking, the particle ODEs are continuous-time trajectories of finitely many neurons, averaged over the data distribution. We note that W̃(t) is random for all t ∈ R≥0 due to the randomness of {Ci(ji)}i=1,2.
The existence and uniqueness of the solution to the particle ODEs follows from the same proof as in Theorem 1, which we shall not repeat here. We equip W̃(t) with the norm
|||W̃|||_T = max{ max_{j1≤n1, j2≤n2} sup_{t≤T} |w̃2(t, j1, j2)|, max_{j2≤n2} sup_{t≤T} |w̃3(t, j2)| }.
One can also define the measures DT(W, W̃) and DT(W̃, W) similar to Eq. (2):
DT(W, W̃) = sup{ |w1(t, C1(j1)) − w̃1(t, C1(j1))|, |w2(t, C1(j1), C2(j2)) − w̃2(t, C1(j1), C2(j2))|, |w3(t, C2(j2)) − w̃3(t, C2(j2))| : t ≤ T, j1 ≤ n1, j2 ≤ n2 },
DT(W̃, W) = sup{ |w1(⌊t/ε⌋, j1) − w̃1(t, C1(j1))|, |w2(⌊t/ε⌋, j1, j2) − w̃2(t, C1(j1), C2(j2))|, |w3(⌊t/ε⌋, j2) − w̃3(t, C2(j2))| : t ≤ T, j1 ≤ n1, j2 ≤ n2 }.
We have the following results:
Theorem 14. Under the same setting as Theorem 3, for any δ > 0, with probability at least 1 − δ,
DT(W, W̃) ≤ (1/√nmin) log^{1/2}( 3(T + 1) nmax²/δ + e ) e^{KT},
in which nmin = min{n1, n2}, nmax = max{n1, n2}, and KT = K(1 + T^K).
Theorem 15. Under the same setting as Theorem 3, for any δ > 0 and ε ≤ 1, with probability at least 1 − δ,
DT(W̃, W) ≤ √(ε log( 2 n1 n2/δ + e )) e^{KT},
in which KT = K(1 + T^K).
Proof of Theorem 3. Using the fact DT (W,W) ≤ DT ( W, W̃ ) + DT ( W̃ ,W ) ,
the thesis is immediate from Theorems 14 and 15.
C.2 PROOF OF THEOREMS 14 AND 15
Proof of Theorem 14. In the following, let Kt denote a generic positive constant that may change from line to line and takes the form Kt = K(1 + t^K), such that Kt ≥ 1 and Kt ≤ KT for all t ≤ T. We first note that at initialization, D0(W, W̃) = 0.
Since |||W|||_0 ≤ K, |||W|||_T ≤ KT by Lemma 11. Furthermore it is easy to see that |||W̃|||_0 ≤ |||W|||_0 ≤ K almost surely. By the same argument as in Lemma 11, |||W̃|||_T ≤ KT almost surely. We shall use all the above bounds repeatedly in the proof. We decompose the proof into several steps.
Step 1 - Main proof. Let us define, for brevity,
q3(t, x) = H3(x; W̃(t)) − H3(x; W(t)),
q2(t, x, j2, c2) = H2(x, j2; W̃(t)) − H2(x, c2; W(t)),
q∆(t, z, j1, j2, c1, c2) = ∆H2(z, j2; W̃(t)) w̃2(t, j1, j2) − ∆H2(z, c2; W(t)) w2(t, c1, c2).
Consider t ≥ 0. We first bound the difference in the updates between W and W̃ . Let us start with w3 and w̃3. By Assumption 1, we have:∣∣∣∣ ∂∂tw̃3 (t, j2)− ∂∂tw3 (t, C2 (j2)) ∣∣∣∣ ≤ KEZ [|q3 (t,X)|+ |q2 (t,X, j2, C2 (j2))|] . Similarly, for w2 and w̃2,∣∣∣∣ ∂∂tw̃2 (t, j1, j2)− ∂∂tw2 (t, C1 (j1) , C2 (j2))
∣∣∣∣ ≤ KEZ
[∣∣∣∆H2 (Z, j2; W̃ (t))−∆H2 (Z,C2 (j2) ;W (t))∣∣∣] +K |w3 (t, C2 (j2))| |w̃1 (t, j1)− w1 (t, C1 (j1))| ≤ KtEZ [|q3 (t,X)|+ |q2 (t,X, j2, C2 (j2))|]
+Kt (|w̃1 (t, j1)− w1 (t, C1 (j1))|+ |w̃3 (t, j2)− w3 (t, C2 (j2))|) ≤ KtEZ [|q3 (t,X)|+ |q2 (t,X, j2, C2 (j2))|] +KtDt ( W, W̃ ) ,
and for w1 and w̃1, by Lemma 12,∣∣∣∣ ∂∂tw̃1 (t, j1)− ∂∂tw1 (t, C1 (j1)) ∣∣∣∣
≤ KEZ ∣∣∣∣∣∣ 1n2 n2∑ j2=1 EC2 [q∆ (t, Z, j1, j2, C1 (j1) , C2)] ∣∣∣∣∣∣
+ EC2 [∣∣∆H2 (Z,C2;W (t))∣∣ |w2 (t, C1 (j1) , C2)|] |w̃1 (t, j1)− w1 (t, C1 (j1))|
≤ KEZ ∣∣∣∣∣∣ 1n2 n2∑ j2=1 EC2 [q∆ (t, Z, j1, j2, C1 (j1) , C2)] ∣∣∣∣∣∣
+KtDt ( W, W̃ ) .
To further the bounding, we now make the following two claims:
• Claim 1: For any ξ > 0,
max j2≤n2 ∣∣∣∣ ∂∂tw3 (t+ ξ, C2 (j2))− ∂∂tw3 (t, C2 (j2)) ∣∣∣∣ ≤ Kt+ξξ,
max j1≤n1, j2≤n2 ∣∣∣∣ ∂∂tw2 (t+ ξ, C1 (j1) , C2 (j2))− ∂∂tw2 (t, C1 (j1) , C2 (j2)) ∣∣∣∣ ≤ Kt+ξξ,
max j1≤n1 ∣∣∣∣ ∂∂tw1 (t+ ξ, C1 (j1))− ∂∂tw1 (t, C1 (j1)) ∣∣∣∣ ≤ Kt+ξξ,
and similarly,
max j2≤n2 ∣∣∣∣ ∂∂tw̃3 (t+ ξ, j2)− ∂∂tw̃3 (t, j2) ∣∣∣∣ ≤ Kt+ξξ,
max j1≤n1, j2≤n2 ∣∣∣∣ ∂∂tw̃2 (t+ ξ, j1, j2)− ∂∂tw̃2 (t, j1, j2) ∣∣∣∣ ≤ Kt+ξξ,
max j1≤n1 ∣∣∣∣ ∂∂tw̃1 (t+ ξ, j1)− ∂∂tw̃1 (t, j1) ∣∣∣∣ ≤ Kt+ξξ.
• Claim 2: For any γ1, γ2, γ3 > 0 and t ≥ 0,
max { max j2≤n2 EZ [|q2 (t,X, j2, C2 (j2))|] , EZ [|q3 (t,X)|] ,
max j1≤n1
EZ ∣∣∣∣∣∣ 1n2 n2∑ j2=1 EC2 [q∆ (t, Z, j1, j2, C1 (j1) , C2)] ∣∣∣∣∣∣ }
≥ Kt ( Dt ( W, W̃ ) + γ1 + γ2 + γ3 ) ,
with probability at most
n1 γ1 exp
( −n2γ 2 1
Kt ) + n2 γ2 exp ( −n1γ 2 2 Kt ) + 1 γ3 exp ( −n2γ 2 3 Kt ) .
Combining these claims with the previous bounds, taking a union bound over t ∈ {0, ξ, 2ξ, ..., bT/ξc ξ} for some ξ ∈ (0, 1), we obtain that
max { max j2≤n2 ∣∣∣∣ ∂∂tw̃3 (t, j2)− ∂∂tw3 (t, C2 (j2)) ∣∣∣∣ ,
max j1≤n1, j2≤n2 ∣∣∣∣ ∂∂tw̃2 (t, j1, j2)− ∂∂tw2 (t, C1 (j1) , C2 (j2)) ∣∣∣∣ ,
max j1≤n1 ∣∣∣∣ ∂∂tw̃1 (t, j1)− ∂∂tw1 (t, C1 (j1)) ∣∣∣∣ }
≤ KT ( Dt ( W, W̃ ) + γ1 + γ2 + γ3 + ξ ) , ∀t ∈ [0, T ] ,
with probability at least
1− T + 1 ξ [ n1 γ1 exp ( −n2γ 2 1 KT ) + n2 γ2 exp ( −n1γ 2 2 KT ) + 1 γ3 exp ( −n2γ 2 3 KT )] .
The above event in turn implies Dt ( W, W̃ ) ≤ KT ∫ t 0 ( Ds ( W, W̃ ) + γ1 + γ2 + γ3 + ξ ) ds,
and hence by Gronwall’s lemma and the fact D0 ( W, W̃ ) = 0, we get
DT ( W, W̃ ) ≤ (γ1 + γ2 + γ3 + ξ) eKT .
The theorem then follows from the choice
ξ = 1
√ nmax , γ2 = KT√ n1
log1/2 (
3 (T + 1)n2max δ + e
) , γ1 = γ3 =
KT√ n2
log1/2 (
3 (T + 1)n2max δ + e
) .
We are left with proving the claims.
Step 2 - Proof of Claim 1. We have from Assumption 1, ess-sup |w3 (t+ ξ, C2)− w3 (t, C2)| ≤ K ∫ t+ξ t ess-sup ∣∣∣∣ ∂∂tw3 (s, C2) ∣∣∣∣ ds ≤ Kξ,
ess-sup |w2 (t+ ξ, C1, C2)− w2 (t, C1, C2)| ≤ K ∫ t+ξ t ess-sup ∣∣∣∣ ∂∂tw2 (s, C1, C2) ∣∣∣∣ ds ≤ K
∫ t+ξ t ess-sup |w3 (s, C2)| ds
≤ Kt+ξξ, ess-sup |w1 (t+ ξ, C1)− w1 (t, C1)| ≤ K ∫ t+ξ t ess-sup ∣∣∣∣ ∂∂tw1 (s, C1) ∣∣∣∣ ds ≤ K
∫ t+ξ t ess-sup |w3 (s, C2)w2 (s, C1, C2)| ds
≤ Kt+ξξ.
By Lemma 12, we then obtain that
ess-supEZ [|H2 (X,C2;W (t+ ξ))−H2 (X,C2;W (t))|] ≤ Kt+ξξ, EZ [|H3 (X;W (t+ ξ))−H3 (X;W (t))|] ≤ Kt+ξξ,
ess-supEZ [∣∣∆H2 (Z,C2;W (t+ ξ))−∆H2 (Z,C2;W (t))∣∣] ≤ Kt+ξξ.
Using these estimates, we thus have, by Assumption 1,
max j2≤n2 ∣∣∣∣ ∂∂tw3 (t+ ξ, C2 (j2))− ∂∂tw3 (t, C2 (j2)) ∣∣∣∣
≤ Kt+ξξ +KEZ [|H3 (X;W (t+ ξ))−H3 (X;W (t))|] +Kess-supEZ [|H2 (X,C2;W (t+ ξ))−H2 (X,C2;W (t))|] ≤ Kt+ξξ,
max j1≤n1, j2≤n2 ∣∣∣∣ ∂∂tw2 (t+ ξ, C1 (j1) , C2 (j2))− ∂∂tw2 (t, C1 (j1) , C2 (j2)) ∣∣∣∣
≤ Kt+ξξ +Kess-supEZ [∣∣∆H2 (Z,C2;W (t+ ξ))−∆H2 (Z,C2;W (t))∣∣]
+Kess-sup |w3 (t, C2)| |w1 (t+ ξ, C1)− w1 (t, C1)| ≤ Kt+ξξ,
max j1≤n1 ∣∣∣∣ ∂∂tw1 (t+ ξ, C1 (j1))− ∂∂tw1 (t, C1 (j1)) ∣∣∣∣
≤ Kt+ξξ +Kess-supEZ [ EC2 [∣∣∆H2 (Z,C2;W (t+ ξ))−∆H2 (Z,C2;W (t))∣∣ |w2 (t, C1, C2)|]] +Kess-supEC2 [|w3 (t, C2)| |w2 (t+ ξ, C1, C2)− w2 (t, C1, C2)|] +Kess-supEC2 [|w3 (t, C2)w2 (t, C1, C2)|] |w1 (t+ ξ, C1)− w1 (t, C1)| ≤ Kt+ξξ.
The proof of the rest of the claim is similar.
Step 3 - Proof of Claim 2. We recall the definitions of q∆, q2 and q3. Let us decompose them as follows. We start with q2:
|q2 (t, x, | 1. What is the focus of the paper regarding theoretical properties of neural networks?
2. What are the strengths of the proposed approach, particularly in proving global convergence?
3. Are there any similar ideas to neuronal ensemble in analyzing the dynamics of the MF limit?
4. How can the expression of K in the upper bound of Theorem 3 be made more concrete?
5. Can the convergence rate be derived under additional assumptions of convexity in Theorems 8 and 10?
6. What are the challenges in extending the analysis of global convergence to multi-layer or deep neural networks using the proposed approach? | Review | Review
This paper studies some theoretical properties of three-layer neural networks (NNs) under the mean-field (MF) regime. The authors proposed neuronal embedding in order to study large-width neural networks. Then, the quantitative relation between finite-width NN and the MF limit was clarified. The global convergence of the continuous limit of the stochastic gradient descent (SGD) was proved without assuming the convexity of the loss function. The global convergence of the MF limit is used to establish the optimization efficiency of the neural network with SGD.
The problem considered in this paper is important. The definitions and the statement of theorems are clearly described. In order to prove the global convergence, the authors assumed the uniform approximation property rather than the convexity of the loss function. This approach is interesting. Though I'm not very familiar with the MF regime, the high-level idea to prove the theorem is well-written. I read some proofs in the appendix, and I found that the description is accessible to a wide range of audiences. Overall, this paper is thought to provide a promising idea to analyze not only three-layer NNs but deep NNs.
Some comments are shown below:
Neuronal ensemble is introduced to analyze the dynamics of the MF limit. Apart from the MF regime, is there any similar idea of the neuronal ensemble? Showing some references would be beneficial to readers.
The constant K appears in the upper bound in Theorem 3. Showing a more concrete expression of K is informative. What is the typical K in this case?
In Theorem 8 and Corollary 10, the global convergence was proved. If the convexity was also assumed in addition to the assumptions in the theorems, is it possible to derive the convergence rate?
In the paper, the global convergence of three-layer NNs was analyzed. What is the main obstacle to investigating the global convergence of multi-layer NNs or deep NNs according to the idea proposed in this paper? |
ICLR | Title
Fast and Differentiable Matrix Inverse and Its Extension to SVD
Abstract
Matrix inverse (Minv) and singular value decomposition (SVD) are among the most widely used matrix operations in massive data analysis, machine learning, and statistics. Although well-studied, they still encounter difficulties in practical use due to inefficiency and non-differentiability. In this paper, we aim at solving efficiency and differentiability issues through learning-based methods. First of all, to perform matrix inverse, we provide a differentiable yet efficient way, named LD-Minv, which is a learnable deep neural network (DNN) with each layer being an L-th order matrix polynomial. We show that, with proper initialization, the difference between LD-Minv’s output and the exact pseudo-inverse is in the order O(exp{−L^K}), where K is the depth of the LD-Minv. Moreover, by learning from data, LD-Minv further reduces the difference between the output and the exact pseudo-inverse. We prove that gradient descent finds an ε-error minimum within O(nKL log 1/ε) steps for LD-Minv, where n is the data size. At last, we provide the generalization bound for LD-Minv in both under-parameterized and over-parameterized settings. As an application of LD-Minv, we provide a learning-based optimization method to solve problems with orthogonality constraints and utilize it to differentiate SVD (D-SVD). We also provide a theoretical generalization guarantee for D-SVD. Finally, we demonstrate the superiority of our methods on the synthetic and real data in the supplementary materials.
1 INTRODUCTION
Matrix inverse (including matrix pseudo-inverse) and singular value decomposition are fundamental linear algebra operations, ubiquitous in machine learning, statistics, signal processing, and other fields. In general, solving scientific computing or optimization problems often needs these two operators, such as matrix (pseudo) inverse for least squares regression, singular value decomposition (SVD) for dimensionality reduction (PCA), low-rank related problems (Liu et al., 2010; Zhang et al., 2018; Liu et al., 2013), graph-based problems (Wu et al., 2020a; Von Luxburg, 2007), and even for training deep neural networks (DNNs) with structured layers (Ionescu et al., 2015). Nevertheless, Minv and SVD appear less and less often in modern machine learning tasks. One reason is inefficiency. Computing the SVD and the Minv can be extremely time-consuming for large-scale problems; however, efficiency is a significant concern in the current big data and deep learning era. Besides, non-differentiability is considered another reason that blocks the use of SVD and Minv. Usually, most prevalent methods for training DNNs are first-order and based on backpropagation; however, Minv and SVD are not necessarily continuous functions of the matrix entries (Stewart, 1969). Therefore, derivatives do not always exist. Although Minv and SVD may be backprop-able due to some specific implementation, they are unstable and essentially non-differentiable. Thus the gradients cannot pass through them when backpropagating. Considering the above problems, one natural question emerges.
Does there exist an efficient and differentiable way to perform Minv and SVD?
Over the last decade, many sketches-based methods have been developed, e.g., Nelson & Nguyen (2013); Meng & Mahoney (2013); Cohen et al. (2015); Indyk et al. (2019). The main idea of the sketches-based techniques is to use random projections, which is efficiently computable, to reduce the problem size before performing SVD and Minv. However, they do not solve the problem of non-differentiability as smaller-sized SVD and Minv still needs to be computed.
Recently, “differentiable learning-based” methods (D-LbM) have attracted considerable attention (Chen et al., 2018; Indyk et al., 2019; Liu et al., 2019; Wu et al., 2020b; Xie et al., 2019). These methods usually unroll the classical optimization or numerical iterative algorithms and introduce learnable parameters to obtain a learnable DNN. In general, the iterative algorithms inspiring these DNNs consist of differentiable operators such as matrix polynomials. Benefiting from training on the data, D-LbMs can execute in much fewer iterations with similar per-iteration cost as the original algorithms but obtain much better performance. Many empirical results, e.g., Gregor & LeCun (2010); Yang et al. (2016); Peng et al. (2018), show that a well-trained DNN provided by D-LbM—compared with the original optimization algorithm—can obtain an almost equally good solution using one or even two orders of magnitude fewer iterations. Based on these observations, we aim to find a learning-based iterative way to differentiate Minv and SVD in this paper.
However, all these D-LbMs suffer from two common problems. One is the convergence guarantee. The forward process of D-LbMs may diverge, even if the original unrolled iterative algorithm has a well-behaved convergence guarantee. In fact, most D-LbM methods have little or no convergence guarantees. To the best of our knowledge, there is no work that reveals the behavior of D-LbMs during training. How does the training loss decrease? How big is the gap between the output of a D-LbM and that of the original algorithm? All these questions are still open. Another problem is that, both for D-LbMs and the original algorithms, only a limited theory exists when the input data obey a common distribution (e.g., data drawn from a low-dimensional manifold). Essentially, by learning from data, D-LbMs can obtain a problem-dependent parameter bias. It can help D-LbMs achieve better performance on a specific data distribution at a much lower computational cost, instead of simply fixing the parameters ahead of time as traditional iterative methods do. But there is no mathematical result that describes this phenomenon rigorously. Moreover, it is unknown whether the trained D-LbMs generalize well on data from the same distribution.
Remarkably, in this paper we provide a learnable, differentiable and efficient way to perform Minv, named Learnable Differentiable Matrix Inverse (LD-Minv), and solve the above two problems for our proposed LD-Minv. First of all, LD-Minv is a DNN with each layer being an L-th order matrix polynomial, and the coefficient of each power is learnable. We show that LD-Minv converges to the Minv or pseudo-inverse with order L if the coefficients of the polynomial are set properly. Namely, the difference between the output of the DNN and the exact pseudo-inverse is in the order O(exp{−L^K}), where K is the depth of LD-Minv. Secondly, by learning from data, LD-Minv can further improve the precision on some specific data distribution. Namely, the distance between the output and the exact pseudo-inverse can be arbitrarily small. Specifically, the training loss converges to zero exponentially fast, i.e., gradient descent (GD) finds an ε-error minimum in ℓ2 regression using at most O(nKL log 1/ε) iterations, where n is the data size. Finally, we provide the generalization bound for LD-Minv in both under-parameterized and over-parameterized settings.
With LD-Minv at hand, as a direct application, we propose a learning-based optimization to solve any convex problems with non-convex orthogonality constraints. Then we use it to differentiate SVD. Note that we also provide a generalization guarantee for our D-SVD. Unlike the previous work on differentiable SVD (Indyk et al., 2019), which is based on the power method and needs an assumption on the gap between singular values to ensure convergence, our method can converge to the solution without any gap assumption. In summary, our main contributions include:
• We propose a differentiable method, LD-Minv, to perform matrix inverse efficiently. LD-Minv, a DNN with each layer being a learnable L-th order matrix polynomial, would be useful for accelerating and differentiating general matrix decompositions. We show that LD-Minv can converge to the matrix pseudo-inverse with order L.
• By learning, LD-Minv can further improve the approximation performance on some underlying data distribution. We prove that GD finds an ε-error minimum in ℓ2 regression using at most O(nKL log 1/ε) iterations under mild conditions. Moreover, we also provide the generalization bound for LD-Minv. We reveal that the empirical Rademacher complexity of the loss function class is bounded by Õ(min{d³L/K^{1/2}, √(d³K/n)}), where Õ(·) hides
log factors, and K and d are the depth and the width of LD-Minv, respectively.
• As a direct application, we further provide a learning-based general framework to solve the problems with orthogonality constraints. This D-LbM helps us to differentiate SVD. Finally, we also provide a generalization guarantee for our D-SVD.
2 DIFFERENTIABLE MATRIX INVERSE
We introduce the proposed D-Minv in this section. We first present an intuitive idea to show how to approximate the matrix inverse iteratively. Then we generalize the iterative method to obtain our fixed and learnable D-Minv, separately.
2.1 FIXED DIFFERENTIABLE MATRIX INVERSE
Given a matrix A ∈ Rd×d, we want to approximate its inverse A−1 by the matrix X ∈ Rd×d, i.e., AX ≈ I, where I ∈ Rd×d is the identity matrix. Ideally, X is the fixed point of the following equation:
X = X(AX)^{−1} = X ( Σ_{l=0}^{∞} (I − AX)^l ), (1)
where the last equality follows from the Neumann series when AX ≈ I and (I−AX)l is the l-order matrix polynomial.
Inspired by the above iterative process, it is obvious that we can obtain the matrix X iteratively. Hence, we consider the following higher-order iterative method, which is an L-th order matrix polynomial in each iteration, where L ≥ 4:
Xk+1 = Xk ( Σ_{l=0}^{L−1} Ek^l ), Ek = I − AXk. (2)
Note that DNNs can easily implement this method: one layer corresponds to one iterative step. Since there are no parameters to learn in Eq. (2), and it only involves matrix multiplications (thus differentiable), we name it fixed D-Minv 1. One may doubt that the computational cost of each iteration is high in Eq. (2). However, as shown in Lemma 1, a higher-order polynomial usually implies a faster convergence rate. For obtaining the same precision with different L, the total computational cost is in the order of O(L/ ln(L)), which is a very slow growth rate w.r.t. L. Although simple, fixed D-Minv converges to the inverse A−1 or A† extremely fast. Lemma 1 (Approximation Speed). The generated sequence {Xk ∈ Rd1×d2} by Eq. (2) converges to the Moore–Penrose inverse A† with order L provided that X0 = (1/c0) A^⊤, c0 > ‖A‖²/2, i.e., we can conclude: ‖AA† − AXk‖ = e0^{L^k}, in which e0 = ‖AA† − AX0‖ < 1, where ‖·‖ is the spectral norm.
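For illustration, here is a minimal NumPy sketch of the fixed D-Minv iteration (2) with the initialization of Lemma 1; the specific choice c0 = ‖A‖² (any c0 > ‖A‖²/2 suffices by the lemma) and the random rectangular test matrix are assumptions of this example.

```python
import numpy as np

def fixed_dminv(A, L=4, K=10):
    """Fixed D-Minv: K iterations of the L-th order update, Eq. (2), with X_0 from Lemma 1."""
    c0 = np.linalg.norm(A, 2) ** 2               # any c0 > ||A||^2 / 2 works; spectral norm
    X = A.T / c0                                 # X_0 = A^T / c0
    I = np.eye(A.shape[0])
    for _ in range(K):
        E = I - A @ X                            # E_k = I - A X_k
        S, P = np.zeros_like(E), np.eye(A.shape[0])
        for _ in range(L):                       # accumulate sum_{l=0}^{L-1} E_k^l
            S, P = S + P, P @ E
        X = X @ S                                # X_{k+1} = X_k (sum_l E_k^l)
    return X

A = np.random.default_rng(0).standard_normal((5, 3))   # rectangular: target is the pseudo-inverse
print(np.linalg.norm(fixed_dminv(A) - np.linalg.pinv(A)))
```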
We can see that fixed D-Minv converges with order $L$, i.e., a shallow D-Minv can approximate the matrix inverse very well. Moreover, with the $X_0$ given in Lemma 1, the column space and the row space of the matrix $X_k$ are exactly correct from the beginning.
Proposition 1 (Invariant Column and Row Spaces). With the same setting as in Lemma 1, for any $k \ge 0$, it holds that:
$$X_kAA^\dagger = X_k, \qquad A^\dagger AX_k = X_k.$$
By Proposition 1, it is easy to see that the zero singular values remain unchanged throughout the computation, which indicates that D-Minv works consistently on full-rank and rank-deficient, square and rectangular matrices. Note that this proposition also holds for the upcoming LD-Minv.
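To make the iteration concrete, the following is a minimal PyTorch sketch of the fixed D-Minv recursion in Eq. (2). The initialization follows Lemma 1; the choice $c_0 = \|A\|_F^2$ is our own illustrative way to satisfy $c_0 > \frac{1}{2}\|A\|^2$, and the function name and hyper-parameter values are hypothetical.

```python
import torch

def fixed_d_minv(A, K=6, L=5):
    """Approximate the Moore-Penrose inverse of A with K iterations of Eq. (2).

    Each iteration multiplies X_k by the L-term polynomial of E_k = I - A X_k.
    Only matrix products are used, so the whole map is differentiable.
    """
    d1, _ = A.shape
    # c0 = ||A||_F^2 >= ||A||_2^2 > ||A||_2^2 / 2, so X0 = A^T / c0 satisfies Lemma 1.
    c0 = (A * A).sum()
    X = A.transpose(0, 1) / c0
    I = torch.eye(d1, dtype=A.dtype, device=A.device)
    for _ in range(K):
        E = I - A @ X
        # Horner-style evaluation of sum_{l=0}^{L-1} E^l.
        P = I.clone()
        for _ in range(L - 1):
            P = I + E @ P
        X = X @ P
    return X

# Usage: by Lemma 1 the residual ||A X A - A||_F shrinks super-exponentially in K.
A = torch.randn(100, 80) / 10.0
X = fixed_d_minv(A)
print(torch.norm(A @ X @ A - A))
```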
2.2 LEARNABLE DIFFERENTIABLE MATRIX INVERSE
We consider the learnable D-Minv (LD-Minv) with the following iterative formula:
$$X_{k+1} = X_k\Big(\sum_{l=0}^{L-1} C_{\{k,l\}}E_k^l\Big), \qquad E_k = I - AX_k, \qquad L \ge 4, \qquad (3)$$
1Higher-order method Eq. (2) is already well-known in the applied mathematics community in another equivalent form, see Climent et al. (2001); Amat et al. (2003); Li et al. (2011).
where $(k, l+1) \in [K]\times[L]$, $C_{\{k,l\}} \in \mathbb{R}$ are learnable coefficients, and $\|\cdot\|_F$ is the Frobenius norm. LD-Minv also belongs to the category of LbMs that unroll conventional numerical iterative methods. As discussed in the introduction, two problems, i.e., no convergence analysis for the training procedure and no generalization guarantee for the trained DNN, block the further theoretical exploration of these LbMs. In contrast to previous works, we obtain favorable theoretical results on both problems for our LD-Minv. First, although fixed D-Minv already converges extremely fast, LD-Minv can still perform much better on the training data under mild conditions. Specifically, $\|X_K - A^\dagger\|$ converges to zero exponentially fast during training, i.e., GD finds an $\epsilon$-error minimum in $\ell_2$ regression using at most $O(nKL\log(1/\epsilon))$ iterations². Second, LD-Minv has a tight generalization bound to guarantee the performance on unseen matrices drawn from the same distribution as the training instances. We provide the rigorous theoretical results in Sec. 4.
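A minimal PyTorch sketch of an LD-Minv stack implementing Eq. (3) is given below; the module name, the initialization scale, and the default values of K and L are illustrative assumptions, with the coefficients initialized near 1 as in Algorithm 1.

```python
import torch
import torch.nn as nn

class LDMinv(nn.Module):
    """Sketch of learnable D-Minv, Eq. (3): K layers, each an L-th order
    matrix polynomial with learnable coefficients C_{k,l}."""

    def __init__(self, K=7, L=5, alpha=0.5):
        super().__init__()
        # |C_{k,l} - 1| = O(K^{-alpha}) with alpha > 1/4 (Algorithm 1, step 1).
        init = 1.0 + (K ** (-alpha)) * (2.0 * torch.rand(K, L) - 1.0)
        self.C = nn.Parameter(init)
        self.K, self.L = K, L

    def forward(self, A):
        d1 = A.shape[-2]
        c0 = (A * A).sum(dim=(-2, -1), keepdim=True)     # > ||A||_2^2 / 2
        X = A.transpose(-2, -1) / c0                      # X_0 = A^T / c0
        I = torch.eye(d1, dtype=A.dtype, device=A.device)
        for k in range(self.K):
            E = I - A @ X
            P = self.C[k, 0] * I                          # C_{k,0} E^0
            Epow = I
            for l in range(1, self.L):
                Epow = Epow @ E                           # E^l
                P = P + self.C[k, l] * Epow
            X = X @ P
        return X
```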
Discussion 1. One may note that the final iterate $X_K$ can be written as a matrix polynomial of $A$, and argue that approximating Minv by a polynomial is well studied: zero training error can be obtained when the polynomial's order is large enough, and the approximation error controls the generalization error. For the sake of distinction, we make three clarifications. First, most traditional results only describe the behavior in the worst case, which holds for any input matrix and may easily fail in some extreme cases. In contrast, LD-Minv focuses on a more realistic situation, i.e., the input matrices obey some common distribution, such as the affinity matrices of Facebook users or sampled CT and MRI images. How to learn from the data and design a more efficient method is still open, not to mention a theoretical generalization guarantee on a specific data distribution. Second, the traditional polynomial approximation method is sensitive to the polynomial order and is prone to overfitting when the order is over-parameterized, say of order $nd$. Without the precise density of the data distribution, it is impossible to choose a proper order that ensures zero training error and a small generalization error simultaneously; to solve this, one may need a complex, data-driven regularization strategy. In sharp contrast, LD-Minv is robust to the order, and our theoretical results (i.e., zero training error and a small generalization error) hold well from the under-parameterized to the over-parameterized case. Third, exact polynomial fitting only implies the existence of a zero-training-error solution, while our results describe the convergence behavior (i.e., which solution we will reach). Obviously, there is a big gap between the convergence of training and the existence of a solution.
Note that we have also considered a much simpler matrix polynomial: $X_K = \sum_{l=0}^{L} C_l A^l$. Learning the coefficients $\{C_l\}_{l=1}^{L}$ becomes an easy-to-solve convex regression problem, unlike the non-convex one induced by Eq. (3). Unfortunately, this parameterization fails due to stability issues, e.g., the coefficients of different powers vary significantly in magnitude, which causes poor generalization performance.
Discussion 2. Most matrix iterative methods suffer from the ill-conditioning problem, which usually brings instability and makes analysis and computation hard to carry out; e.g., for D-Minv, a large condition number may significantly slow down convergence (see Lemma 1). However, a learning-based method can greatly alleviate this ill-conditioning problem by learning from data. LD-Minv can beat D-Minv easily when the condition number of the input matrix is large, and the learned coefficients of LD-Minv remain valid on unseen data. See the experimental part for more details.
2.3 TRAINING SETTINGS
We now describe the training settings for LD-Minv. Denote by $\{A_i\}_{i=1}^{n}$ the training set consisting of $n$ samples. Let $X_{\{k,i\}} \in \mathbb{R}^{d_1\times d_2}$ be the output of the $k$-th layer on the $i$-th training sample $A_i \in \mathbb{R}^{d_2\times d_1}$. We consider a typical regression loss:
$$\mathcal{L}_n(\mathbf{C}) := \frac{1}{2nK}\sum_{i=1}^{n}\sum_{k=1}^{K}\big\|A_iX_{\{k,i\}}A_i - A_i\big\|_F^2, \qquad (4)$$
where $\mathbf{C} = \{C_{\{k,l\}},\ (k, l+1) \in [K]\times[L]\}$ is the collection of all learnable parameters. Note that when $X = A^\dagger$, the properties of the Moore–Penrose inverse give $AXA = A$, which further implies $XAX = X$ in our invariant-space case; see Proposition 1. Therefore, we only use the difference term $(AXA - A)$ in the training loss.
²Please distinguish the two notions of "convergence" here. One describes $X_k$ approaching $A^\dagger$ w.r.t. $k$ and $L$, and the other refers to the behavior of LD-Minv's loss w.r.t. the training iteration.
Algorithm 1 Gradient descent (GD) with proper initialization for LD-Minv
Input: Training data $\{A_i\}_{i=1}^n$, number of iterations $T$, step size $\eta$.
1: Generate each $C_{\{k,l\}} \in \mathbf{C}$ such that $|C_{\{k,l\}} - 1| = O(K^{-\alpha})$, where $\alpha > 1/4$.
2: Set $X_{\{0,i\}} = \frac{1}{c_0}A_i^\top$ or $\tilde X_i$, where $c_0 > \frac{1}{2}\|A_i\|^2$ and $\tilde X_i$ is the output of a fixed D-Minv.
3: for $t = 1, \cdots, T$ do
4: Update $\mathbf{C}^{(t+1)} = \mathbf{C}^{(t)} - \eta\cdot D_{[\mathbf{C}=\mathbf{C}^{(t)}]}\mathcal{L}_n$.
5: end for
Output: $\mathbf{C}^{(0)}, \ldots, \mathbf{C}^{(T)}$.
Thanks to the differentiability of LD-Minv, we adopt GD to minimize the training loss and obtain proper coefficients $\mathbf{C}$. We present the initialization strategy and training process in Algorithm 1, where the first derivative of the differentiable function $\mathcal{L}_n(\mathbf{C}): \mathbb{R}^{K\times L} \to \mathbb{R}$ at $\mathbf{C}^{(t)}$ is denoted by $D_{[\mathbf{C}=\mathbf{C}^{(t)}]}\mathcal{L}_n(\cdot) := \big(\partial\mathcal{L}_n(\mathbf{C}^{(t)})/\partial\mathbf{C}\big) \in \mathbb{R}^{K\times L}$. When $K$ is large, one may notice that our coefficients are close to the oracle value 1, which may provide a relatively good initial loss $\mathcal{L}(\mathbf{C}^{(0)})$. Notably, this does NOT mean that we treat learning and initialization as a perturbation of the fixed D-Minv and bound the loss by the good-enough fixed D-Minv's performance plus a perturbation error. On the contrary, as we show in the theoretical results, learnability indeed matters, and LD-Minv can obtain better performance than the fixed one by training. The real purpose of this initialization is to take advantage of D-Minv's local Lipschitz continuity near $\mathbf{C} = \mathbf{1}$, where $\mathbf{1}$ is the all-one vector. The continuity reduces the complexity of optimization and benefits the generalization of D-Minv.
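Below is a hedged sketch of the training procedure in Algorithm 1 with the loss of Eq. (4); for brevity it penalizes only the final-layer residual rather than averaging over all K layers, and the function name, iteration count, and step size are placeholders.

```python
import torch

def train_ld_minv(model, As, T=500, eta=1e-4):
    """Plain gradient descent on the loss of Eq. (4), as in Algorithm 1.

    `model` is an LDMinv-style module; `As` is a tensor of shape (n, d1, d2)
    holding the training matrices A_i.  Only the final-layer residual is used
    here; Eq. (4) additionally averages the residual over all K layers.
    """
    opt = torch.optim.SGD(model.parameters(), lr=eta)
    n = As.shape[0]
    for _ in range(T):
        opt.zero_grad()
        loss = 0.0
        for i in range(n):
            A = As[i]
            X = model(A)                               # output X_{K,i}
            loss = loss + 0.5 * torch.norm(A @ X @ A - A) ** 2
        loss = loss / n
        loss.backward()
        opt.step()
    return model
```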
3 LEARNING-BASED OPTIMIZATION WITH ORTHOGONALITY CONSTRAINTS
Consider the following general orthogonality-constrained optimization problem:
$$\min_{\mathbf{U}}\ f(\mathbf{U}), \quad \text{s.t. } \mathbf{U}^\top\mathbf{U} = \mathbf{I}, \qquad (5)$$
where the objective function $f(\mathbf{U}): \mathbb{R}^{m\times r} \to \mathbb{R}$ is differentiable and convex. The usual way to solve Eq. (5) is to perform manifold GD on the Stiefel manifold, which evolves along the manifold geodesics. Specifically, manifold GD first updates the variable in the manifold tangent space along the objective function's projected gradient; it then maps the updated variable in the tangent space to a feasible point on the geodesic, and repeats these two steps until convergence (Edelman et al., 1998). Usually, the mapping step is non-differentiable and inefficient. Fortunately, the works (Wen & Yin, 2013; Ren & Lin, 2013) develop a technique that solves the optimization problem with orthogonality constraints approximately and only involves matrix multiplication and inversion.
We let $\mathbf{U} \in \mathbb{R}^{m\times r}$, where $r$ is the Stiefel manifold's dimension. Denote by $\mathbf{G} \in \mathbb{R}^{m\times r}$ the gradient of the objective function $f(\mathbf{U})$ in Eq. (5) w.r.t. $\mathbf{U}$ at $\mathbf{U}_k$; then the projection of $\mathbf{G}$ onto the tangent space of the Stiefel manifold at $\mathbf{U}_k$ is $\mathbf{P}\mathbf{U}_k$³, where $\mathbf{P} = \mathbf{G}\mathbf{U}_k^\top - \mathbf{U}_k\mathbf{G}^\top$ and $\mathbf{P} \in \mathbb{R}^{m\times m}$. Instead of parameterizing the geodesic of the Stiefel manifold along direction $\mathbf{P}$ using the exponential map, we generate the feasible points used to update $\mathbf{U}_k$ by the following Cayley transform (Wen & Yin, 2013):
$$\mathbf{U}_{k+1} = \mathbf{U}(t) = \mathbf{C}(t)\mathbf{U}_k, \quad \text{where } \mathbf{C}(t) = \Big(\mathbf{I} + \frac{t}{2}\mathbf{P}\Big)^{-1}\Big(\mathbf{I} - \frac{t}{2}\mathbf{P}\Big), \qquad (6)$$
where $\mathbf{I}$ is the identity matrix and $t \in \mathbb{R}$ is the step size used for updating the current $\mathbf{U}_k$. In other words, $\mathbf{U}(t)$ is a re-parameterized local geodesic w.r.t. $t$ on the Stiefel manifold. One can easily verify that $\mathbf{U}(t)$ has the following properties, given $\mathbf{U}_k^\top\mathbf{U}_k = \mathbf{I}$: (1) $\frac{d}{dt}\mathbf{U}(0) = -\mathbf{P}\mathbf{U}_k$; (2) $\mathbf{U}(t)$ is smooth in $t$; (3) $\mathbf{U}(0) = \mathbf{U}_k$; (4) $\mathbf{U}(t)^\top\mathbf{U}(t) = \mathbf{I}$, $\forall t \in \mathbb{R}$. It is evident that, if $t$ is in a proper range, $\mathbf{U}_{k+1}$ achieves a lower objective value than $\mathbf{U}(0) = \mathbf{U}_k$ on the Stiefel manifold. Besides, computing $\mathbf{U}_{k+1}$ only involves matrix multiplication and a matrix inverse, which can be easily performed by our LD-Minv in Eq. (3). Therefore, we let $\mathbf{C}(t) = \text{LD-Minv}(\mathbf{I} + t\mathbf{P}/2)$ in Eq. (6). We thereby obtain a learning-based optimization method for the problem with orthogonality constraints in Eq. (5).
3We choose the canonical metric on the tangent space as the equipped Riemannian metric.
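A small sketch of the Cayley update in Eq. (6) is shown below; here the exact solve torch.linalg.solve stands in for the matrix inverse that LD-Minv is meant to replace, and the function name is ours.

```python
import torch

def cayley_step(U, G, t):
    """One feasible update on the Stiefel manifold via the Cayley transform, Eq. (6).

    U: current point with U^T U = I;  G: Euclidean gradient of f at U;  t: step size.
    The exact solve below is exactly the piece that LD-Minv replaces in the paper.
    """
    P = G @ U.transpose(0, 1) - U @ G.transpose(0, 1)    # skew-symmetric direction
    I = torch.eye(P.shape[0], dtype=U.dtype, device=U.device)
    C = torch.linalg.solve(I + 0.5 * t * P, I - 0.5 * t * P)
    return C @ U
```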
3.1 APPLICATION: DIFFERENTIABLE SVD BY LD-MINV
In this part, we show how to utilize the previous general learning-based optimization framework to differentiate SVD. Given an arbitrary matrix $\mathbf{M} \in \mathbb{R}^{m\times\hat n}$ and its skinny SVD $\mathbf{M} = \tilde{\mathbf{U}}\tilde{\mathbf{S}}\tilde{\mathbf{V}}^\top$, Von Neumann's trace inequality, $\langle\mathbf{M},\mathbf{X}\rangle \le \sum_i \sigma_i(\mathbf{M})\sigma_i(\mathbf{X})$, implies that $\{\tilde{\mathbf{U}} \in \mathbb{R}^{m\times r}, \tilde{\mathbf{V}} \in \mathbb{R}^{\hat n\times r}\}$ is an optimal solution of the following optimization problem:
$$\min_{\mathbf{U},\mathbf{V}}\ f(\mathbf{U},\mathbf{V}) := \frac{1}{2}\big\|D_c(\boldsymbol{\Lambda})\big\|_F^2 - \big\langle\mathbf{M}, \mathbf{U}\mathbf{V}^\top\big\rangle, \quad \text{s.t. } \mathbf{U}^\top\mathbf{U} = \mathbf{V}^\top\mathbf{V} = \mathbf{I}, \qquad (7)$$
where $\boldsymbol{\Lambda} := \mathbf{U}^\top\mathbf{M}\mathbf{V}$, $\mathrm{Diag}(\cdot)$ keeps only the diagonal of the input matrix (zeroing the off-diagonal entries), and $D_c(\boldsymbol{\Lambda}) := (\mathrm{Id} - \mathrm{Diag})(\boldsymbol{\Lambda}) := \boldsymbol{\Lambda} - \mathrm{Diag}(\boldsymbol{\Lambda})$ for any input matrix $\boldsymbol{\Lambda}$. Note that solving the problem in Eq. (7) is equivalent to performing SVD. Let $\{\mathbf{M},\mathbf{U}_0,\mathbf{V}_0\}$ be the inputs of our Differentiable SVD (D-SVD). We adopt the Cayley transform in Eq. (6) to solve the problem in Eq. (7) and replace the matrix inverse in it by our proposed LD-Minv in Eq. (3). W.l.o.g., in the following we only focus on the updating strategy of $\mathbf{U}$ due to the symmetry between $\mathbf{U}$ and $\mathbf{V}$. We first compute the gradient of the objective function $f(\mathbf{U},\mathbf{V})$ w.r.t. $\mathbf{U}$ at $\{\mathbf{U}_k,\mathbf{V}_k\}$, which we denote by $\mathbf{G} = \mathbf{M}\mathbf{V}_k\big(D_c(\mathbf{V}_k^\top\mathbf{M}^\top\mathbf{U}_k) - \mathbf{I}\big)$. Then we find a geodesic curve $\mathbf{U}(t)$ along the gradient on the Stiefel manifold for updating $\mathbf{U}_k$. Referring to Eq. (6), the curve is $\mathbf{U}(t) = \big(\mathbf{I} + \frac{t}{2}\mathbf{P}\big)^{-1}\big(\mathbf{I} - \frac{t}{2}\mathbf{P}\big)\mathbf{U}_k$, where $\mathbf{P} = \mathbf{G}\mathbf{U}_k^\top - \mathbf{U}_k\mathbf{G}^\top$. Given the step size $t$, we can use
LD-Minv to perform the matrix inverse on the factor $\big(\mathbf{I} + \frac{t}{2}\mathbf{P}\big)$. In summary, D-SVD consists of two steps: (1) find a proper $t$; (2) update $\{\mathbf{U}_k,\mathbf{V}_k\}$ by the Cayley transform. For finding a proper step size $t$, we consider the following problem:
$$t^*_U = \operatorname*{argmin}_{0\le t\le\epsilon} f(t) := f(\mathbf{U}(t),\mathbf{V}_k), \qquad (8)$$
where $\epsilon$ is a given parameter that ensures a small enough magnitude of $t^*_U$. Notice that if $t$ is small enough so that $\big\|\frac{t}{2}\mathbf{P}\big\| < 1$, then $\big(\mathbf{I} + \frac{t}{2}\mathbf{P}\big)^{-1} = \mathbf{I} + \sum_{l=1}^{\infty}\big(-\frac{t}{2}\mathbf{P}\big)^l$, which implies that
$$\mathbf{U}(t) = \Big(\mathbf{I} + 2\sum_{l=1}^{\infty}\Big(-\frac{t}{2}\mathbf{P}\Big)^l\Big)\mathbf{U}_k.$$
Considering that $t^*$ in Eq. (8) is small, we can approximate $f(t)$ via its second-order Taylor expansion at $t = 0$:
$$f(t) = f(0) + f'(0)\cdot t + \frac{1}{2}f''(0)\cdot t^2 + O(t^3), \qquad (9)$$
where $f'(0)$ and $f''(0)$ are the first- and second-order derivatives of $f(t)$ evaluated at 0, respectively. These two derivatives have closed forms and can be computed efficiently. Consequently, we can obtain an approximately optimal solution $t^*$ via
$$t^*_U = \min\{\epsilon, \tilde t\}, \quad \text{where } \epsilon < 2/\|\mathbf{P}\| \text{ and } \tilde t = -f'(0)/f''(0), \qquad (10)$$
provided that
$$f'(0) = \big\langle D_c(\mathbf{U}_k^\top\mathbf{M}\mathbf{V}_k), D_c(\mathbf{U}_k^\top\mathbf{P}\mathbf{M}\mathbf{V}_k)\big\rangle + \big\langle\mathbf{M}\mathbf{V}_k, \mathbf{P}\mathbf{U}_k\big\rangle,$$
$$f''(0) = \big\|D_c(\mathbf{U}_k^\top\mathbf{P}\mathbf{M}\mathbf{V}_k)\big\|_F^2 + \big\langle D_c(\mathbf{U}_k^\top\mathbf{M}\mathbf{V}_k), D_c(\mathbf{U}_k^\top\mathbf{P}^2\mathbf{M}\mathbf{V}_k)\big\rangle - \big\langle\mathbf{M}\mathbf{V}_k, \mathbf{P}^2\mathbf{U}_k\big\rangle. \qquad (11)$$
Then we can update $\mathbf{U}$ by $\mathbf{U}(t^*_U)$, i.e., $\mathbf{U}_{k+1} = \mathbf{U}(t^*_U)$. Thanks to the cyclic property of the trace operator, $\mathbf{V}_{k+1}$ shares a similar update strategy with $\mathbf{U}_{k+1}$.
Provided the input $\{\mathbf{M},\mathbf{U}_k,\mathbf{V}_k\}$, the complete iterative steps of our D-SVD are as follows⁴:
$$\mathbf{G}_U = \mathbf{M}\mathbf{V}_k\big(D_c(\mathbf{V}_k^\top\mathbf{M}^\top\mathbf{U}_k) - \mathbf{I}\big), \qquad \mathbf{P}_U = \mathbf{G}_U\mathbf{U}_k^\top - \mathbf{U}_k\mathbf{G}_U^\top,$$
$$t^*_U = \min\Big\{\frac{2}{\|\mathbf{P}_U\|},\ -\frac{f'(\mathbf{U}(t),\mathbf{V}_k)}{f''(\mathbf{U}(t),\mathbf{V}_k)}\Big\} + \tilde t_{U_k}, \qquad \mathbf{U}_{k+1} = \mathrm{Cayley}(t^*_U, \mathbf{C}_{U_k}, \mathbf{P}_U, \mathbf{U}_k),$$
$$\mathbf{G}_V = \mathbf{M}^\top\mathbf{U}_{k+1}\big(D_c(\mathbf{U}_{k+1}^\top\mathbf{M}\mathbf{V}_k) - \mathbf{I}\big), \qquad \mathbf{P}_V = \mathbf{G}_V\mathbf{V}_k^\top - \mathbf{V}_k\mathbf{G}_V^\top,$$
$$t^*_V = \min\Big\{\frac{2}{\|\mathbf{P}_V\|},\ -\frac{f'(\mathbf{U}_{k+1},\mathbf{V}(t))}{f''(\mathbf{U}_{k+1},\mathbf{V}(t))}\Big\} + \tilde t_{V_k}, \qquad \mathbf{V}_{k+1} = \mathrm{Cayley}(t^*_V, \mathbf{C}_{V_k}, \mathbf{P}_V, \mathbf{V}_k), \qquad (12)$$
⁴For convenience and clear writing, we omit the superscripts in the updating rules.
Algorithm 2 Forward Propagation for D-SVD
Input: Training data $\{\mathbf{M}_i,\mathbf{U}_{i,0},\mathbf{V}_{i,0}\}_{i=1}^n$, depth $K_{svd}$ of D-SVD, number of iterations $T_{inv}$, step size $\eta_{inv}$, and D-SVD's current parameters $\tilde{\mathbf{t}}$.
1: for $k = 0, \ldots, K_{svd}$ do
2: Calculate $\mathbf{P}_{U,i}$ and $t^*_{U,i}$ with $\{\mathbf{M}_i,\mathbf{U}_{i,k},\mathbf{V}_{i,k}\}_{i=1}^n$ by Eq. (12).
3: Train LD-Minv with parameters $\mathbf{C}_{U_k}$ by Algorithm 1 with the data $\{\mathbf{A}_i\}_{i=1}^n$, number of iterations $T_{inv}$ and step size $\eta_{inv}$, where $\mathbf{A}_i = (\mathbf{I} + t^*_{U,i}\mathbf{P}_{U,i}/2)$.
4: Repeat Steps 2 and 3 by switching the roles of $\mathbf{U}$ and $\mathbf{V}$ with the input $\{\mathbf{U}_{i,k+1},\mathbf{V}_{i,k}\}_{i=1}^n$.
5: end for
Output: $\{\mathbf{C}_{U_k},\mathbf{C}_{V_k}\}_{k=1}^{K_{svd}}$ and $\{\mathbf{U}_{i,k+1},\mathbf{V}_{i,k}\}_{i=1,k=1}^{n,K_{svd}}$.
Algorithm 3 Joint Training for D-SVD and LD-Minv
Input: Training data $\{\mathbf{M}_i\}_{i=1}^n$, numbers of iterations $T_{inv}$ and $T_{svd}$, step sizes $\eta_{svd}$ and $\eta_{inv}$.
1: Generate $\{\mathbf{U}_{i,0},\mathbf{V}_{i,0}\}_{i=1}^n$ randomly from the $r$-dimensional Stiefel manifold. Set $\tilde{\mathbf{t}}^{(0)} = 0$.
2: for $t = 1, \cdots, T_{svd}$ do
3: Forward propagate by Algorithm 2.
4: Update $\tilde{\mathbf{t}}^{(t+1)} = \tilde{\mathbf{t}}^{(t)} - \eta_{svd}\cdot D_{[\tilde{\mathbf{t}}=\tilde{\mathbf{t}}^{(t)}]}\mathcal{M}_n$, where $\mathcal{M}_n := \frac{1}{nK_{svd}}\sum_{i=1}^{n}\sum_{k=1}^{K_{svd}} f(\mathbf{U}_{i,k},\mathbf{V}_{i,k})$.
5: end for
Output: $\{\mathbf{U}_{i,k+1},\mathbf{V}_{i,k}\}_{i=1,k=1}^{n,K_{svd}}$.
where $\mathrm{Cayley}(t,\mathbf{C},\mathbf{P},\mathbf{U}) := \text{LD-Minv}(\mathbf{I} + t\mathbf{P}/2, \mathbf{C})\,(\mathbf{I} - t\mathbf{P}/2)\,\mathbf{U}$, $\tilde{\mathbf{t}} := \{(\tilde t_{U_k}, \tilde t_{V_k}) \in \mathbb{R}^2\}_{k=1}^{K_{svd}}$ is the collection of learnable parameters for D-SVD, and $\text{LD-Minv}(\cdot,\mathbf{C})$ is an LD-Minv module with parameters $\mathbf{C}$. Note that a DNN can easily implement our D-SVD: each layer implements the steps in Eq. (12), which are the specific iterative procedures of the previous general learning-based optimization with orthogonality constraints. We adopt different LD-Minvs for different layers, and provide the training process for D-SVD in Algorithms 2 and 3. By introducing LD-Minv into the Cayley transform, we bypass the non-differentiable exact Minv and solve the problem in Eq. (7) (i.e., perform SVD) in a differentiable and learning-based way.
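The following sketch assembles one U-update of Eq. (12): it forms the gradient G and the skew-symmetric direction P, evaluates the closed-form derivatives of Eq. (11), picks the step size as in Eq. (10), and applies the Cayley transform. An exact solve again stands in for LD-Minv, and the helper names and the small damping constant are our own.

```python
import torch

def dc(L):
    """D_c(L): keep only the off-diagonal part of a square matrix L."""
    return L - torch.diag(torch.diagonal(L))

def update_U(M, U, V, t_tilde=0.0):
    """One U-update of Eq. (12); torch.linalg.solve stands in for LD-Minv."""
    r = U.shape[1]
    G = M @ V @ (dc(V.T @ M.T @ U) - torch.eye(r, dtype=M.dtype))
    P = G @ U.T - U @ G.T                                   # skew-symmetric, m x m
    Lam = U.T @ M @ V
    PMV = U.T @ P @ M @ V
    # Closed-form derivatives of f(t) at t = 0, Eq. (11).
    f1 = (dc(Lam) * dc(PMV)).sum() + (M @ V * (P @ U)).sum()
    f2 = (torch.norm(dc(PMV)) ** 2
          + (dc(Lam) * dc(U.T @ P @ P @ M @ V)).sum()
          - (M @ V * (P @ P @ U)).sum())
    # Step size as in Eq. (10)/(12); the 1e-12 damping avoids division by zero.
    t = torch.minimum(-f1 / (f2 + 1e-12), 2.0 / torch.linalg.norm(P, ord=2)) + t_tilde
    I = torch.eye(P.shape[0], dtype=M.dtype)
    return torch.linalg.solve(I + 0.5 * t * P, I - 0.5 * t * P) @ U
```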
4 MAIN RESULTS
In this section, we provide the main theoretical results of D-Minv, including the linear convergence of GD during training and the generalization performance. All the detailed proofs of these theorems are provided in the supplementary material. The theoretical results for D-SVD are presented in the supplementary material due to limited space.
4.1 CONVERGENCE RATE OF GD
We consider the general LD-Minv and large training data size n in this section. We first make the following assumption on the training data.
Assumption 1 (Bounded Singular Values). Given the training matrices $\{A_i \in \mathbb{R}^{d_1\times d_2}\}_{i=1}^n$, we assume the training matrices' positive singular value vectors are $d$-dimensional, where $d \le \min\{d_1, d_2\}$. Assume these positive singular values have lower and upper bounds, i.e.,
$$\|x_i\|_\infty \le 1 \quad \text{and} \quad \min_{j\in[d]}\,[x_i]_j \ge \bar b_a > 0, \quad \text{where } x_i = \sigma_+(A_i) \in \mathbb{R}^d,\ \forall i \in [n],$$
and $\sigma_+(\cdot)$ extracts the positive singular values.
In general, the boundedness assumption on the singular values is weak and easy to satisfy. See the discussion in the supplementary material (Section C.1.2).
With this assumption, we can obtain the convergence rate of our Algorithm 1.
Theorem 1 (Convergence Rate). Suppose Assumption 1 holds and let $d = \min\{d_1, d_2\}$. For any $\epsilon > 0$, let $K = \Omega(d\bar b_a^2)$; then with probability at least $1 - \exp\big(-O(K/\bar b_a^2)\big)$, GD in Algorithm 1 with step size $\eta = \Theta(1/(dL))$ can find a collection of coefficients such that
$$\mathcal{L}_n(\mathbf{C}^{(T)}) < \epsilon, \quad \text{for } T = \Theta\big(dLKn\,\bar b_a^{-2}\ln(1/\epsilon)\big),$$
where $\bar b_a$ is the lower bound on the positive singular values of the training data $A_i$.
This is known as a linear convergence rate because the training loss drops below $\epsilon$ after only $O(\ln(1/\epsilon))$ iterations, i.e., the loss decreases exponentially fast in $T$. Recall that $K$ and $L$ are from the definition of our LD-Minv.
Remark 1. Different from the traditional theoretical results of DNNs, the randomness in our results mainly comes from a re-weighting strategy (see Sec. E) rather than initialization. In general, our results hold for any initialization strategy as long as the condition in Algorithm 1 is met. Notably, our results do not require the depth or the width to be in the polynomial order of data size n, which is a prevalent over-parameterization assumption for the current deep learning theories.
Remark 2. To present our convergence result most simply, we choose to focus on GD method with fixed step size mainly. It shall be easy to extend our results to more general settings, such as stochastic GD and dynamic step size.
4.2 GENERALIZATION
We characterize the generalization performance of LD-Minv trained by GD in this theorem.
Theorem 2 (Generalization Bound). Denote by $\mathcal{P}_A$ the distribution of $A_i$ in Assumption 1. Suppose that $\alpha \ge 1$ and $K^{2\alpha-1/2} = \Omega(\sqrt{n})$; then with probability at least $1 - \delta$, the iterates $\mathbf{C}^{(t)}$ of Algorithm 1 satisfy
$$\mathbb{E}_{A\sim\mathcal{P}_A}\big[\mathcal{L}_n(\mathbf{C}^{(t)})\big] \le \mathcal{L}_n(\mathbf{C}^{(t)}) + \tilde O\Big(\min\Big\{\frac{\bar b_w^2 d^3 L}{K^{\alpha/2}},\ \sqrt{\frac{d^3K}{n}}\Big\}\Big) + O\Big(\sqrt{\frac{\ln(1/\delta)}{n}}\Big),$$
for $t = 0, 1, \cdots, T$, where $\mathbb{E}_{A\sim\mathcal{P}_A}[\mathcal{L}_n]$ is the expected value of $\mathcal{L}_n$ under the probability measure $\mathcal{P}_A$ and $\tilde O(\cdot)$ hides log factors.
We adopt two ways to upper bound the empirical Rademacher complexity, which corresponds to the cases K < n and K > n, respectively. The final bound is the minimum of them (middle term in the above upper bound).
Remark 3. The second term in our bound distinguishes our result from most previous work (Allen-Zhu et al., 2019; Arora et al., 2019; Yehudai & Shamir, 2019; Cao & Gu, 2019; Chen et al., 2019; Xie et al., 2020) on the generalization bounds of over-parameterized neural networks. Specifically, most works on over-parameterization focus on establishing bounds that do not explode when the network width goes to infinity. However, their bounds are exponential in the depth, which may not hold when the depth is large. In contrast, our result covers a wider range of depth $K$: it handles both $K < n$ and $K > n$. One may wonder whether our bound will explode when $K$ approaches infinity. The answer is no. We can observe a "double descent" trend to a certain extent: the generalization error bound first increases with the network depth $K$ when $K < n$, and then starts to decrease when $K$ becomes larger, even for $K \gg n$.
5 CONCLUSION
We provide a differentiable and learnable way, named LD-Minv, to perform the matrix inverse using $L$-th order matrix polynomials. We then offer theoretical guarantees on the convergence during training and on the generalization ability of LD-Minv in both the under-parameterized and over-parameterized settings. Remarkably, we are the first to provide a rigorous analysis of the training process of differentiable learning-based methods. As an application of LD-Minv, we propose a learning-based optimization method to solve problems with non-convex orthogonality constraints, and then utilize it to differentiate SVD. A generalization guarantee for D-SVD is also provided.
APPENDIX
A EXPERIMENTAL RESULTS
A.1 EFFECTIVENESS VALIDATION
We conduct experiments on synthetic data to validate the effectiveness of our proposed LD-Minv and D-SVD. We run all our experiments 10 times and report the average results.
A.1.1 EXPERIMENTS RESULTS FOR LD-MINV
Settings. For LD-Minv, we adopt square and rectangular matrices to test its performance. It should be mentioned that our LD-Minv is adaptive to the shape of the input matrix. In general, the square matrix better tests the performance of the algorithm, since the minimum singular value is near 0 in this case (Vershynin, 2010), and a large condition number of the input may make most numerical matrix calculation methods unstable. For the square matrix $A \in \mathbb{R}^{d\times d}$, its elements are sampled i.i.d. Gaussian, namely $A_{i,j} \sim \mathcal{N}(0, 1/\sqrt{d})$. For the rectangular matrix $A \in \mathbb{R}^{m\times\hat n}$, its elements are also sampled from $A_{i,j} \sim \mathcal{N}(0, 1/\sqrt{d})$, where $d = \max\{m, \hat n\}$. We adopt SGD to optimize the parameters. Note that our theoretical results in Sec. 4 hold for GD, but they easily extend to the SGD case. The learning rate is set to 1e-4, and it is reduced by half every 300 iterations. We also fix the batch size as 100, while the number of iterations is set to 1200. Note that, in the experiments, the adopted exact pseudo matrix inverse (Minv) is implemented by PyTorch.
We adopt the following test loss to measure the performance:
$$\text{Test Loss} := \frac{1}{n}\sum_{i=1}^{n}\big\|A_iX_{\{K_{inv},i\}}A_i - A_i\big\|_F^2,$$
where $X_{\{K_{inv},i\}}$ is the final output of D-Minv or LD-Minv. Here $n = 100$ is the size of the test data. With $d = 100$, we first test the influence of $L$ and $K$, which correspond to the order of the matrix polynomial and the number of layers in our LD-Minv, respectively. Due to the high precision of LD-Minv and D-Minv, for visualization, we take the base-10 logarithm of the loss in this experiment.
Results. From Fig. 1(a), we can see that the number of layers $K$ plays a more critical role for LD-Minv than the order $L$ of the polynomial. When setting $K = 7$, the log test error drops from $-1.0$ for $L = 4$ to $-4.6$ for $L = 9$. However, the log test error drops from $0.32$ for $K = 2$ to $-4.6$ for $K = 7$ when fixing $L = 9$. Hence, for efficiency, we suggest utilizing a large $K$ and a small $L$ for LD-Minv.
Compared with D-Minv, according to Fig. 1(b) and Fig. 1(c), LD-Minv obtains better test performance even for large $K$ and $L$, although D-Minv converges to Minv extremely fast in this case. Our result demonstrates that learning can help LD-Minv obtain better performance on a problem-dependent data distribution rather than merely fixing the coefficients ahead of time. We notice that LD-Minv and D-Minv obtain similar precision when $K > 15$; the reason is that the approximation error of D-Minv is on the order of 1e-8 in this case, which leaves very limited room for improvement by learning.
Fig. 1(b) further verifies the importance of K. As shown in our theoretical results (Theorem 1 and Theorem 2); large K indicates smaller generalization bound and higher probability to obtain the linear convergence rate for Algorithm 1.
Another significant advantage of our proposed method is efficiency. D-Minv and LD-Minv can invert the input matrix with high precision and low computational cost. The benefits of D-Minv and LD-Minv are apparent when the matrix dimension is large. As shown in Table 1, even for large $K$ and $L$, the inference time of LD-Minv is less than a tenth of the time of the exact matrix inverse when the size is 1000 × 1500. It is well known that Minv and matrix multiplication (MatMul) share the same computational complexity theoretically, while their speeds vary a lot numerically. The computation of our method is mostly spent on MatMul, which is easily accelerated by GPU and is much cheaper than Minv. MatMul and Minv only share the same complexity asymptotically: the constants in the order $O(\cdot)$ vary significantly, especially as the matrix dimension $d \to \infty$. Moreover, the general MatMul algorithm with a complexity of $O(d^3)$ is not optimal, and there exist algorithms with a complexity of only $O(d^{2.3})$.
To further investigate the importance of the learning procedure, we conduct experiments on matrices with different condition numbers. Given the condition number $\kappa \ge 1$, we generate the data matrix by $A_i = U_iS_iV_i$, where $U_i$ and $V_i$ are randomly sampled from a 100-dimensional Stiefel manifold. Here $S_i$ is a diagonal matrix whose diagonal entries are i.i.d. and obey a uniform distribution $\mathcal{U}(1/\kappa, 1)$. We let $K = 4$, $L = 10$, and the batch size be 100. We show the results in Table 2. Not surprisingly, D-Minv suffers from the ill-conditioning problem, e.g., a large condition number deteriorates its performance. However, LD-Minv is not sensitive to the condition number, and it beats D-Minv easily when the condition number is large. Note that all results are reported on unseen data.
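A small sketch of the data generation used in this condition-number experiment is given below; the QR-based sampling of the orthonormal factors and the explicit transpose on V are our own implementation choices.

```python
import torch

def random_matrix_with_condition(d, kappa):
    """Sample A = U S V^T with singular values drawn uniformly from [1/kappa, 1],
    so the condition number of A is at most kappa (as in the Table 2 setup)."""
    # Random orthonormal factors via QR of Gaussian matrices.
    U, _ = torch.linalg.qr(torch.randn(d, d))
    V, _ = torch.linalg.qr(torch.randn(d, d))
    s = 1.0 / kappa + (1.0 - 1.0 / kappa) * torch.rand(d)
    return U @ torch.diag(s) @ V.T
```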
A.1.2 EXPERIMENTAL RESULTS FOR D-SVD
Settings. For D-SVD, we adopt the rectangle matrix to run the experiments. Similarly, our algorithms also work for the square matrix. For the rectangle matrix A ∈ Rm×n̂, its elements are also sampled from i.i.d. Gaussian, namely Ai,j ∼ N (0, 1/ √ d), where d = max{m, n̂}. We let m = 50 and n̂ = 100, and utilize Algorithm 3 to solve our learning problem5. Similarly, we also perform the batch training with batch size 30. The learning rate is set to 1e − 2 for D-SVD and 1e − 4 for LD-Minv contained in D-SVD. We do not decay the learning rate during training, and let the number of iterations for D-SVD and LD-Minv be 200 and 100, respectively. For LD-Minv in each layer, we let L = 4 and K = 6.
Same as Sec. 3, we adopt the following loss to train our D-SVD:
$$\text{Training Loss} := \frac{1}{nK_{svd}}\sum_{i=1}^{n}\sum_{k=1}^{K_{svd}}\Big(\frac{1}{2}\big\|\boldsymbol{\Lambda}_{i,k} - \mathrm{Diag}(\boldsymbol{\Lambda}_{i,k})\big\|_F^2 - \big\langle\mathbf{M}_i, \mathbf{U}_{i,k}\mathbf{V}_{i,k}^\top\big\rangle\Big),$$
where $\mathbf{U}_{i,k}$ and $\mathbf{V}_{i,k}$ are the $k$-th layer's outputs of D-SVD, and $\boldsymbol{\Lambda}_{i,k} := \mathbf{U}_{i,k}^\top\mathbf{M}_i\mathbf{V}_{i,k}$. The reason we introduce the regularization term $\big\|\mathbf{U}_{i,k}^\top\mathbf{M}_i\mathbf{V}_{i,k} - \mathrm{Diag}(\mathbf{U}_{i,k}^\top\mathbf{M}_i\mathbf{V}_{i,k})\big\|_F^2$ is non-uniqueness. In general, any pair $(\mathbf{U},\mathbf{V}) \in \{(\tilde{\mathbf{U}}\mathbf{R}, \tilde{\mathbf{V}}\mathbf{R}) : \mathbf{M} = \tilde{\mathbf{U}}\tilde{\mathbf{S}}\tilde{\mathbf{V}}^\top, \mathbf{R}^\top\mathbf{R} = \mathbf{R}\mathbf{R}^\top = \mathbf{I}\}$ can maximize the term $\langle\mathbf{M},\mathbf{U}\mathbf{V}^\top\rangle$. However, in order to obtain the exact singular vectors of $\mathbf{M}$, the matrix $\mathbf{U}_{i,k}^\top\mathbf{M}_i\mathbf{V}_{i,k}$ should have zero off-diagonal entries. Thus, the regularization term is necessary for the training of D-SVD.
We use the following test loss to report the results:
$$\text{Test Loss} := \frac{1}{n}\sum_{i=1}^{n}\Big(\frac{1}{2}\big\|\boldsymbol{\Lambda}_{i,K_{svd}} - \mathrm{Diag}(\boldsymbol{\Lambda}_{i,K_{svd}})\big\|_F^2 - \big\langle\mathbf{M}_i, \mathbf{U}_{i,K_{svd}}\mathbf{V}_{i,K_{svd}}^\top\big\rangle\Big),$$
where $\mathbf{U}_{i,K_{svd}}$ and $\mathbf{V}_{i,K_{svd}}$ are the final layer's outputs of D-SVD, and $\boldsymbol{\Lambda}_{i,K_{svd}} := \mathbf{U}_{i,K_{svd}}^\top\mathbf{M}_i\mathbf{V}_{i,K_{svd}}$. We also utilize the following quantity to measure the test performance:
$$\text{SVD MSE} := \frac{1}{n}\sum_{i=1}^{n}\big\|\mathrm{Sort}\big(\mathrm{Diag}(\mathbf{U}_{i,K_{svd}}^\top\mathbf{M}_i\mathbf{V}_{i,K_{svd}})\big) - \mathrm{SVD}(\mathbf{M}_i)\big\|_F^2,$$
where $\mathrm{Sort}(\cdot)$ sorts the input elements in descending order, $\mathrm{Diag}(\cdot)$ returns the collection of the entries on the diagonal of the input matrix, and $\mathrm{SVD}(\cdot)$ gives the singular values.
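A small sketch of the SVD MSE metric defined above; torch.linalg.svdvals is used as the reference for the true singular values, and the function name is ours.

```python
import torch

def svd_mse(M, U, V):
    """Compare the sorted diagonal of U^T M V with the true singular values of M."""
    approx = torch.sort(torch.diagonal(U.T @ M @ V), descending=True).values
    exact = torch.linalg.svdvals(M)[: approx.shape[0]]
    return torch.norm(approx - exact) ** 2
```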
For convenience, we add the mean of the sum of $\mathrm{SVD}(\mathbf{M}_i)$ to the loss in Table 3 to make the loss non-negative. Although increasing $K_{svd}$ can also improve the performance in the non-learning case,
5We replace the GD part by SGD for the generality of the experiment.
learning can strengthen the role of $K_{svd}$. Without training, the test loss for $K_{svd} = 60$ is one eighth of the loss for $K_{svd} = 1$; in sharp contrast, the reduction can reach one thousandth after training. Due to the introduced regularizer in the training loss, we obtain a small SVD MSE rather than merely a small difference between the sums of singular values.
During the training of D-SVD, we observe that our results are robust to the settings of $K$ and $L$. In general, the forward process can converge even with a very inexact matrix inverse (Li et al., 2019). Moreover, our introduced learnable step sizes $\{\tilde t_{U_k}, \tilde t_{V_k}\}$ further provide the freedom to adjust the update directions for $\mathbf{U}_k$ and $\mathbf{V}_k$. Thus, for efficiency, relatively small $K$ and $L$ are enough for each layer's LD-Minv.
A.1.3 PLUG-AND-PLAY: DIFFERENTIABLE NON-BLIND DECONVOLUTION
Non-blind deconvolution (NbD) aims to restore the latent image z from corrupted observation y with known blur kernel b. In this experiment, we consider a well-known sparse coding formulation:
$$y = b \otimes z + n = BW^\top x + n,$$
where $\otimes$ is the convolution operator, $x$ and $n$ are the sparse code and the unknown noise, respectively, $B$ is the matrix form of the kernel $b$, and $W^\top$ is the inverse of the wavelet transform $W$ (i.e., $x = Wz$ and $z = W^\top x$). We utilize the following optimization problem to perform non-blind deconvolution:
$$\min_{x,z}\ g(x,z) := \frac{1}{2}\|y - Bz\|_2^2 + \lambda\|x\|_1, \quad \text{s.t. } z = W^\top x,$$
where $\lambda > 0$ is the balance parameter. A usual way to solve this problem is linearized ADMM (L-ADMM) (Lin et al., 2011), which reads:
$$a_k = z_k - \alpha_k\big(\lambda_k + \beta(z_k - W^\top x_k)\big), \qquad z_{k+1} = \big(\alpha_k BB^\top + I\big)^{-1}\big(a_k + \alpha_k B^\top y\big),$$
$$b_k = x_k - \gamma_k W^\top\big(\lambda_k + \beta(z_{k+1} - W^\top x_k)\big), \qquad x_{k+1} = \mathrm{sgn}(b_k)\odot\max\big(|b_k| - \gamma_k\lambda, 0\big),$$
$$\lambda_{k+1} = \lambda_k + \beta\big(z_{k+1} - W^\top x_{k+1}\big), \qquad (13)$$
where $\alpha_k > 0$, $\gamma_k > 0$ and $\beta > 0$ are penalty parameters, $\odot$ is the Hadamard product, and $\lambda_k$ is the Lagrange multiplier.
Differentiable NbD. Inspired by Eq. (13), we can easily obtain a learning-based non-blind deconvolution method:
$$a_k = z_k - \alpha_k\,\tilde I_k\big(\lambda_k + \beta(z_k - W^\top x_k)\big), \qquad T_k = \text{LD-Minv}\big(\alpha_k BB^\top + I,\ \mathbf{C}_k\big),$$
$$z_{k+1} = T_k\big(a_k + \alpha_k B^\top y\big), \qquad b_k = x_k - \gamma_k\,\tilde W_k\big(\lambda_k + \beta(z_{k+1} - W^\top x_k)\big),$$
$$x_{k+1} = \mathrm{ReLU}\big(b_k - \gamma_k\lambda\big) - \mathrm{ReLU}\big(-b_k - \gamma_k\lambda\big), \qquad \lambda_{k+1} = \lambda_k + \beta\big(z_{k+1} - W^\top x_{k+1}\big), \qquad (14)$$
where $\mathcal{S}_{NbD} := \{\tilde I_k, \tilde W_k, \alpha_k, \gamma_k, \beta\}_{k=1}^{K_{NbD}}$ is the collection of all learnable parameters, $\tilde I_k$ and $\tilde W_k$ are introduced learnable matrices (in the implementation, we choose them as convolution layers with compatible sizes), $\mathbf{C}_k$ collects the learnable parameters of each layer's LD-Minv, and $\mathrm{ReLU}(a) = \max\{a, 0\}$ is the Rectified Linear Unit (ReLU) function. Note that
$$\mathrm{sgn}(b)\odot\max(|b| - \gamma, 0) = \mathrm{ReLU}(b - \gamma) - \mathrm{ReLU}(-b - \gamma),$$
so the soft-thresholding step of Eq. (13) is expressed with ReLUs in Eq. (14).
Algorithm 4 Joint Training for D-NbD and LD-Minv
Input: Training data $\{y_i\}_{i=1}^n$, numbers of iterations $T_{NbD}$ and $T_{inv}$, step sizes $\eta_{NbD}$ and $\eta_{inv}$.
1: Set $x_{i,0} = y_i$, $z_{i,0} = W^\top x_{i,0}$ and $\lambda_0 = 0$.
2: for $t = 1, \cdots, T_{NbD}$ do
3: Fix the learnable parameters $\mathcal{S}^{(t)}_{NbD}$ and train each layer's LD-Minv by Algorithm 1.
4: Update $\mathcal{S}^{(t+1)}_{NbD} = \mathcal{S}^{(t)}_{NbD} - \eta_{NbD}\cdot D_{[\mathcal{S}_{NbD}=\mathcal{S}^{(t)}_{NbD}]}\mathcal{Q}_n$, where $\mathcal{Q}_n$ is given in Eq. (15).
5: end for
Output: $\{z_{i,k}\}_{i=1,k=1}^{n,K_{NbD}}$.
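The soft-thresholding step above can be written as a one-liner with two ReLUs per the identity noted in Eq. (14); a minimal PyTorch sketch (function name ours):

```python
import torch
import torch.nn.functional as F

def soft_threshold(b, gamma):
    """sgn(b) * max(|b| - gamma, 0) expressed with two ReLUs, as used in Eq. (14)."""
    return F.relu(b - gamma) - F.relu(-b - gamma)
```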
Eq. (14) can be implemented by a DNN. Given the training images $\{y_i\}_{i=1}^n$, we adopt the following loss to train our D-NbD:
$$\mathcal{Q}_n(\mathcal{S}_{NbD}) := \frac{1}{nK_{NbD}}\sum_{i=1}^{n}\sum_{k=1}^{K_{NbD}} g\big(x_{i,k}, W^\top x_{i,k}\big), \qquad (15)$$
where $x_{i,k}$ is the $k$-th layer's output of D-NbD. The training process for D-NbD is given in Algorithm 4.
Settings. The training and test images come from Schmidt & Roth (2014), in which 300 (randomly selected) are used for training and the rest for testing. We add different levels of Gaussian noise to generate the corrupted observations $\{y_i\}_{i=1}^n$. We set the patch size to 160 × 160 and the batch size to 15. Finally, Adam is adopted and run for 1000 epochs with the learning rate ranging from 1e-4 to 1e-6 (it is reduced by half every 140 iterations). The observations $\{y_i\}$ are corrupted by blur kernels with sizes ranging from 17 × 17 to 57 × 57 and Gaussian noise levels varying from 1% to 3%.
Results. From Table 4, we find that fL-ADMM is much more efficient than aL-ADMM. fL-ADMM does not need to recompute the inverse at every iteration since $\alpha_k$, $\gamma_k$ are fixed, which, however, is also why fL-ADMM performs poorly: it cannot find universal $\alpha_k$, $\gamma_k$ that work for all test images. aL-ADMM performs better with adaptive step sizes, but it needs an exact Minv in each iteration. Compared to them, D-NbD outperforms L-ADMM by a large margin in terms of speed and performance. Note that, for one iteration, the computational complexity of fL-ADMM is much lower than the other three since it only involves MatMul; however, it consumes more than 600 iterations to obtain a relatively good solution. In fact, just comparing the Minv time in one iteration for aL-ADMM and D-NbD, our method's time is only about half of that of the exact Minv in PyTorch. Fortunately, the learnability of D-NbD helps us obtain a better or equally good solution using one order of magnitude fewer iterations. Hence, the total time is only about one-tenth of that of aL-ADMM. Moreover, the learnability of LD-Minv further introduces more flexibility, which allows D-NbD(LD-Minv) to achieve better results within fewer layers. Thus, D-NbD(LD-Minv) is more efficient due to less computation.
A.2 DISCUSSION
Beyond D-NbD, LD-Minv and D-SVD enable many other potential applications, including image recovery and denoising, numerical algorithm acceleration, blind deconvolution, sparse coding, subspace clustering, etc. These compelling applications are beyond this paper's scope, and we leave them as future work. Moreover, thanks to differentiability, D-SVD can help us design richer singular-value-related training losses for DNNs, which is impossible in the previous non-differentiable situation.
B NOTATION
Positive definiteness of a matrix $A$ is denoted by $A \succ 0$. The transpose of a matrix $A \in \mathbb{R}^{m\times n}$ is $A^\top$. We denote by $A^{-1}$ and $A^\dagger$ the matrix inverse and the Moore–Penrose inverse, respectively. The Euclidean inner product between two matrices $A, B \in \mathbb{R}^{m\times n}$ is defined as $\langle A, B\rangle := \mathrm{Tr}(A^\top B)$, where $\mathrm{Tr}(\cdot)$ is the trace of a matrix. $\kappa(A) := \|A^\dagger\|\|A\|$ is the generalized condition number of a matrix, where $\|\cdot\|$ is the spectral norm. We denote by $\sigma(A)$ the singular value vector (not necessarily ordered) of the matrix $A$; similarly, we denote by $\sigma_+(A)$ the vector of positive singular values.
Let $\|\cdot\|_F$ be the Frobenius norm. Given a differentiable function $F$, the first and second derivatives of $F$ at $Y$ are denoted by the linear operator $D_{[X=Y]}F(\cdot): \mathbb{R}^m \to \mathbb{R}^n := \big(\partial F_i(Y)/\partial X_j\big)(\cdot)$ and the quadratic operator $D^2_{[X=Y]}F(\cdot,\cdot): \mathbb{R}^m\times\mathbb{R}^m \to \mathbb{R}^n := \big(\partial^2 F_i(Y)/(\partial X_j\partial X_k)\big)(\cdot,\cdot)$, respectively, where $X_i$ is the $i$-th entry of $X$. We also denote by $F^{-1}$ the inverse of the function $F$ and define the composition $f\circ g(x) := f(g(x))$. $\odot$ is the Hadamard product and $[K] = \{1, 2, \cdots, K\}$.
C ADDITIONAL THEORETICAL RESULTS
In this section, we provide the additional theoretical results of D-Minv and D-SVD. We start by a warm-up case for D-Minv.
C.1 RESULTS FOR LD-MINV
C.1.1 WARM-UP: ONE PARAMETER FOR ONE LAYER
As a warm-up case, we consider learning only one coefficient for each layer, as a special case of learnable D-Minv:
$$X_{k+1} = X_k\Big(\sum_{l=0}^{L-1} E_k^l + C_kE_k^L\Big), \qquad E_k = I - AX_k, \qquad (16)$$
where $C_k \in \mathbb{R}$ are the learnable coefficients. By setting $X_0 = \frac{1}{c_0}A^\top$, where $c_0 > \|A\|^2$, and letting $C_k = 0$ for all $k$, by Lemma 1 we know that $\|AA^\dagger - AX_k\| = e_0^{L^k}$. One important question is: if we let the coefficient of $E_k^L$ be learnable, will D-Minv obtain a faster convergence rate through training?
In general, previous works show that learning-based methods share the same convergence order as their fixed versions, only with better statistical constants. Interestingly, beyond the constant, we find that D-Minv can improve the order of convergence by learning.
Lemma 2 (Learnability Matters). Assume there is only one training data matrix $A$. Provided that $X_0 = \frac{1}{c_0}A^\top$, where $c_0 > \|A\|^2$, we have
$$e_{k+1} = (1 - C_k)\cdot e_k^L + C_k\cdot e_k^{L+1}, \quad \text{where } e_k = \|AA^\dagger - AX_k\|,\ k \in [K],\ L \ge 4.$$
Moreover, the gradient of $\mathcal{L}$ w.r.t. $C_k$ is negative, i.e., $D_{[C_k]}\mathcal{L} < 0$ for all $C_k \in [0, 1 + \epsilon]$, where $\epsilon > 0$ depends on $e_{k-1}$. Additionally, $D_{[C_k]}\mathcal{L} \approx 0$ for $k \in [K-1]$ when $C_k \in [1 - \epsilon, 1 + \epsilon]$.
Starting from 0, thanks to the negativity of the gradient, the coefficients $C_k$ for $k \in [K-1]$ will converge to the range $[1 - \epsilon, 1 + \epsilon]$ when a proper step size is adopted for GD. In this case, i.e., when $C_k$ is around 1 for all $k$, the sequence $\{X_k\}$ converges to the Moore–Penrose inverse with order $\tilde L$, where $L < \tilde L \le L + 1$. In other words, by learning from one data matrix, D-Minv finds a way to improve the order of convergence for any input. Remarkably, $C_k = 1$ for all $k$ is not a global or even a local minimum of the training loss $\mathcal{L}$. For the learnable coefficient $C_K$ at the last layer, satisfying its first-order condition usually implies obtaining the exact solution, i.e., $e_{K+1} = 0$. In summary, learnable D-Minv not only improves the convergence speed but also makes exact fitting possible.
C.1.2 DISCUSSION FOR ASSUMPTION 1
In general, the boundedness assumption on the singular values is weak and easy to satisfy. For example, the singular values of matrices with i.i.d. zero-mean, unit-variance entries asymptotically obey the quarter-circle law (Dozier & Silverstein, 2007; Shen, 2001), i.e., the limiting density is $\frac{1}{\pi}\sqrt{4-\sigma^2}$ for $\sigma \in [0, 2]$, which implies that the singular values have an upper bound. On the other hand, a well-known fact from random matrix theory is that a zero-mean, unit-variance random matrix is non-singular with high probability. Even for a square matrix $A$, with probability at least $1 - \delta$, we still have $\sigma_{\min}(A) \ge \delta d^{-1/2}$ (Rudelson & Vershynin, 2008), where $\sigma_{\min}(A)$ is the smallest singular value of the matrix $A$.
C.1.3 LIPSCHITZ SMOOTHNESS BEFORE GENERALIZATION
We would also like to remark that our generalization result can easily be extended to the stochastic case. The proof is the same as the Lemma 4.3 in Ji & Telgarsky (2019) or Theorem 3.3 in Cao & Gu (2019).
In general, smoothness plays an important role in generalization analysis: the covering number of a smooth function class is usually small. We start by showing the local smoothness of learnable D-Minv.
Proposition 2 (Local Lipschitz Smoothness). Define the set of coefficient collections
$$\mathcal{C}^* := \Big\{\mathbf{C} \ \big|\ \forall\, C_{\{k,l\}} \in \mathbf{C},\ |C_{\{k,l\}} - 1| \le \tfrac{1}{8}\Big\}, \quad (k,l) \in [K]\times[L],\ L \ge 4.$$
Let $P_A = AA^\dagger$. Given any coefficient collection $\mathbf{C} \in \mathcal{C}^*$, denote by $\mathcal{G}_k(\cdot)$ the map from $P_AE_k$ to $P_AE_{k+1}$ in Eq. (3), i.e., $\mathcal{G}_k(E) := P_A - P_A(I - E)\big(\sum_{l=0}^{L-1} C_{\{k,l\}}E^l\big)$, and let $\mathcal{H}_k(\cdot) := \mathcal{R}\circ\mathcal{G}_{K-1}\circ\cdots\circ\mathcal{G}_k(\cdot)$, where $\mathcal{R}(E) := \sum_{\hat k=k}^{K}\|W_{\hat k}E_{\hat k}A\|_F^2$. Suppose $\|A\|_2 \le 1$ and $\|W_k\| = O(1)$ for all $k$; then
$$\big|\mathcal{H}_k(\hat E) - \mathcal{H}_k(\tilde E)\big| \le 2\big\|P_A\hat E - P_A\tilde E\big\|_*, \quad \forall k \in [K],$$
where $\hat E$ and $\tilde E$ have the same size and the same norm upper bound as $E_k$, $d$ is the dimension of the positive singular value vector $\sigma_+(A)$, and $\|\cdot\|_*$ is the matrix nuclear norm.
In general, the composition of two Lipschitz-continuous functions leads to a worse Lipschitz constant. However, our LD-Minv shares a consistent constant across all layers. This is important since layer-wise local smoothness usually implies a small covering number for the whole LD-Minv (Bartlett et al., 2017; Wei & Ma, 2019), which results in a tight generalization bound.
C.2 RESULTS FOR D-SVD
We characterize the generalization performance of D-SVD trained by GD in this theorem. First of all, we present several definitions. Let
$$\mathcal{M}_n(\tilde{\mathbf{t}},\mathbf{C}) := \frac{1}{nK_{svd}}\sum_{i=1}^{n}\sum_{k=1}^{K_{svd}} \big\langle\mathbf{M}_i, \mathbf{U}_{i,k}\mathbf{V}_{i,k}^\top\big\rangle,$$
where $(\tilde{\mathbf{t}},\mathbf{C})$ is the collection of the parameters of D-SVD and of the LD-Minvs in each layer. Denote by $f(\mathbf{U}\mid\mathbf{V},\mathbf{M},\tilde t_k,\mathbf{C}_k)$ the operations in Eq. (12) that map $\mathbf{U}_k$ to $\mathbf{U}_{k+1}$, i.e.,
$$\mathbf{U}_{k+1} := f(\mathbf{U}_k \mid \mathbf{V}_k, \mathbf{M}, \tilde t_k, \mathbf{C}_k), \qquad (17)$$
where $(\tilde t_k,\mathbf{C}_k) \in \mathbb{R}\times\mathbb{R}^{LK_{inv}}$ are the learnable parameters of D-SVD and LD-Minv at the $k$-th layer of D-SVD. We define the coefficient set
$$\mathcal{C}_k := \Big[-\frac{\epsilon_u}{2}, +\frac{\epsilon_u}{2}\Big] \times \Big\{\mathbf{C} : \big\|\mathbf{C} - \mathbf{C}^{(0)}\big\|_F = \tilde O\Big(\frac{1}{\sqrt{K_{inv}}}\Big)\Big\},$$
$$\text{where } \epsilon_u := \min_{i}\frac{1}{\|\mathbf{P}_i\|}, \quad \text{in which } \mathbf{P}_i = \mathbf{M}_i\mathbf{V}_k\mathbf{U}_k^\top - \mathbf{U}_k\mathbf{V}_k^\top\mathbf{M}_i^\top,\ \forall i \in [n].$$
Proposition 3 (Covering Number of a Single Layer). Denote by $\mathcal{F}_k$ the class of functions in Eq. (17), i.e., $\mathcal{F}_k := \{f(\mathbf{U}_k\mid\mathbf{V}_k,\mathbf{M},\tilde t_k,\mathbf{C}_k) : (\tilde t_k,\mathbf{C}_k) \in \mathcal{C}_k\}$. Then we have
$$\ln\mathcal{N}(\mathcal{F}_k, \epsilon, \|\cdot\|) = O\Big(LK_{inv}\ln\Big(\frac{1}{\epsilon\sqrt{K_{inv}}}\Big) + \ln\Big(\frac{\|\mathbf{M}\|}{\epsilon}\Big)\Big).$$
Theorem 3 (Generalization Bound for D-SVD). Denote by $\mathcal{P}_M$ the distribution of $\mathbf{M}_i$ and let $d := \max\{m, \hat n\}$. Suppose that $\|\mathbf{M}_i\| \le 1$, $\forall i \in [n]$, and $K_{inv} = \Omega\big(\max\{nd, K_{svd}^2\}\big)$; then with probability at least $1 - \delta$, the iterates $\tilde{\mathbf{t}}^{(t)}$ of Algorithm 3 satisfy
$$\mathbb{E}_{M\sim\mathcal{P}_M}\big[\mathcal{M}_n(\tilde{\mathbf{t}}^{(t)})\big] \le \mathcal{M}_n(\tilde{\mathbf{t}}^{(t)}) + \tilde O\Big(\sqrt{\frac{K_{svd}^3L}{n}}\Big) + O\Big(\sqrt{\frac{\ln(1/\delta)}{n}}\Big),$$
for $t = 0, 1, \cdots, T_{svd}$, where $\mathbb{E}_{M\sim\mathcal{P}_M}[\mathcal{M}_n]$ is the expected value of $\mathcal{M}_n$ under the probability measure $\mathcal{P}_M$ and $\tilde O(\cdot)$ hides log factors.
D OMITTED PROOF FOR FIXED D-MINV IN SECTION 2
D.1 PROOF OF LEMMA 1
We first verify that $e_0 < 1$; suppose $\mathrm{rank}(A) = r$. Then
$$e_0 = \|AA^\dagger - AX_0\| = \Big\|UU^\top - \frac{1}{c}U\Sigma^2U^\top\Big\| = \Big\|I_r - \frac{1}{c}\Sigma^2\Big\| < 1,$$
where $A = U\Sigma V^\top$, $I_r \in \mathbb{R}^{r\times r}$ is the identity matrix, and the last inequality comes from the fact $c > \frac{1}{2}\|A\|^2$. From Eq. (2), we have
$$AA^\dagger - AX_{k+1} = AA^\dagger(I - AX_{k+1}) = AA^\dagger\Big(I - AX_k\Big(\sum_{l=0}^{L-1}E_k^l\Big)\Big)$$
$$= AA^\dagger\Big(I - (I - E_k)\Big(\sum_{l=0}^{L-1}E_k^l\Big)\Big) = AA^\dagger\Big(I - \sum_{l=0}^{L-1}E_k^l + \sum_{l=0}^{L-1}E_k^{l+1}\Big) = AA^\dagger\big(E_k^L\big) = AA^\dagger(I - AX_k)^L = \big(AA^\dagger - AX_k\big)^L,$$
i.e., $\|AA^\dagger - AX_k\| = \|AA^\dagger - AX_{k-1}\|^L = e_0^{L^k}$. We finish the proof.
D.2 PROOF OF PROPOSITION 1
Note that for any matrix $X$, a matrix polynomial of $X$ shares the same column and row spaces with $X$. In general, the iteration (2) does not change the column and row spaces from the beginning. We finish the proof by induction. The conclusion is obvious for $X_0$ since $X_0 = \frac{1}{c_0}A^\top$. Assume the proposition holds for $X_k$; then
$$A^\dagger AX_{k+1} = A^\dagger AX_k\Big(\sum_{l=0}^{L-1}E_k^l\Big) = X_k\Big(\sum_{l=0}^{L-1}E_k^l\Big) = X_{k+1},$$
and
$$X_{k+1}AA^\dagger = X_k\Big(\sum_{l=0}^{L-1}E_k^l\Big)AA^\dagger = X_k\Big(\sum_{l=0}^{L-1}(I - AX_k)^l\Big)AA^\dagger = X_kAA^\dagger\Big(\sum_{l=0}^{L-1}(I - AX_k)^l\Big) = X_{k+1}.$$
Hence, we can conclude that $A^\dagger AX_k = X_k$ and $X_kAA^\dagger = X_k$, $\forall k$. We finish the proof.
E OMITTED PROOF FOR LEARNABLE D-MINV IN SECTIONS 4 AND C
In the theoretical part, we consider a more general training regression loss:
$$\mathcal{L}_n(\mathbf{C}) := \frac{1}{2nK}\sum_{i=1}^{n}\sum_{k=1}^{K}\big\|V_{\{k,i\}}\big(A_iX_{\{k,i\}}A_i - A_i\big)\big\|_F^2, \qquad (18)$$
where $\mathbf{C} = \{C_{\{k,l\}},\ (k,l+1) \in [K]\times[L]\}$ is the collection of all learnable parameters, and $V_{\{k,i\}}$ is a diagonal matrix whose diagonal entries are i.i.d. and obey a zero-mean bounded distribution (w.l.o.g. we let the bound be $\pm\bar b_v$). Note that the $V_{\{k,i\}}$'s are fixed during training and testing. Introducing $V_{\{k,i\}}$ has two advantages. One is re-weighting the error term. As we can see from fixed D-Minv, the error term decays exponentially; in that case, the smallest positive singular value dominates the gradient flow. However, for LD-Minv, we hope that the gradient flow comes from the losses of all positive singular values during training. The random re-weighting strategy is a good choice for numerical computation when the order of the singular values is unknown. Another advantage is that the introduced $V_{\{k,i\}}$'s, with very high probability (see Claim 1 in Sec. E.2), make the landscape of the training loss locally strongly convex, which is an excellent benefit from the optimization perspective.
Notably, by setting the diagonal entries of $V_{\{k,i\}}$ as independent symmetric Bernoulli random variables, the training loss in Eq. (18) degenerates into the loss given in Eq. (4). Hence, all the theoretical results proven in this section for Eq. (18) also hold for Eq. (4).
It is worth mentioning that the weight matrices $\{V_{\{k,i\}}\}$ here have nothing to do with the learned singular vectors $\{\mathbf{V}_{i,k}\}$ of D-SVD. Please distinguish them.
E.1 GENERAL ANALYSIS
Before starting the proof, we first make some general observations. It is easy to see that Proposition 1 also holds for learnable D-Minv. Thus, we can observe that
$$X_kE_k^l = X_kAA^\dagger E_k^l, \quad \forall(k,l).$$
Hence, we obtain an equivalent form of Eq. (3):
$$X_{k+1} = X_k\Big(\sum_{l=0}^{L-1}C_{\{k,l\}}\tilde E_k^l\Big), \qquad \tilde E_k = AA^\dagger - AX_k, \qquad L \ge 4, \qquad (19)$$
Due to the invariance of the row and column spaces, we can focus only on the singular value vectors and rewrite Eq. (19) as
$$x_{k+1} = x_k\odot\Big(\sum_{l=0}^{L-1}C_{\{k,l\}}e_k^l\Big), \qquad e_k = \mathbf{1} - a\odot x_k, \qquad L \ge 4, \qquad (20)$$
where $\mathbf{1} \in \mathbb{R}^d$ is the all-one vector ($d$ is the dimension of the positive singular value vector $\sigma_+(A)$), and
$$a = \sigma_+(A), \quad x_k = \sigma_+(X_k), \quad e_k = \sigma_+\big(\tilde E_k\big), \quad e_k^l = \underbrace{e_k\odot\cdots\odot e_k}_{l}; \qquad (21)$$
recall that $\sigma(A)$ denotes the singular value vector (not necessarily ordered).
Remark 4. We only utilize Eq. (19), the equivalent form of Eq. (3), in this section for the convenience of theoretical analysis. For implementation, we still use Eq. (3) and do NOT calculate the pseudo inverse A† throughout learning.
Now we continue the derivation:
$$e_{k+1} = \mathbf{1} - a\odot x_{k+1} = \mathbf{1} - a\odot x_k\odot\Big(\sum_{l=0}^{L-1}C_{\{k,l\}}e_k^l\Big)$$
$$= \mathbf{1} - (\mathbf{1} - e_k)\odot\Big(\sum_{l=0}^{L-1}C_{\{k,l\}}e_k^l\Big) = \mathbf{1} - \Big(\sum_{l=0}^{L-1}C_{\{k,l\}}e_k^l - \sum_{l=0}^{L-1}C_{\{k,l\}}e_k^{l+1}\Big)$$
$$= e_k^L + (1 - C_{k,0})\mathbf{1} + \Big(\sum_{l=1}^{L-1}\big(C_{\{k,l-1\}} - C_{\{k,l\}}\big)e_k^l\Big) + \big(C_{\{k,L-1\}} - 1\big)e_k^L = e_k^L + \hat E_kc_k, \qquad (22)$$
where
$$e_k^0 = \mathbf{1}, \qquad \hat E_k \in \mathbb{R}^{d\times(L+1)} := \big[e_k^0;\ e_k^1;\ \cdots;\ e_k^L\big], \qquad (23)$$
and
$$c_k \in \mathbb{R}^{L+1} := \big[1 - C_{k,0},\ C_{\{k,0\}} - C_{\{k,1\}},\ \cdots,\ C_{\{k,L-2\}} - C_{\{k,L-1\}},\ C_{\{k,L-1\}} - 1\big]^\top. \qquad (24)$$
We define
$$\ell(\mathbf{C},A) := \frac{1}{2}\sum_{k=1}^{K}\|V_k(AX_kA - A)\|_F^2 = \frac{1}{2}\sum_{k=1}^{K}\big\|v_k\odot\big(a^2\odot x_k - a\big)\big\|_2^2 = \frac{1}{2}\sum_{k=1}^{K}\|v_k\odot a\odot e_k\|_2^2, \qquad (25)$$
where $\mathbf{C} = \{C_{\{k,l\}},\ (k,l+1) \in [K]\times[L]\}$ is the collection of all learnable parameters for D-Minv, $V_k$ is a diagonal matrix whose diagonal entries $v_k = \mathrm{diag}(V_k)$ are i.i.d. and obey a zero-mean bounded distribution (w.l.o.g. we let the bound be $\pm\bar b_v$), and $\|\cdot\|_2$ is the vector $\ell_2$ norm; recall the definitions of $a$, $e_k$ and $x_k$ from Eq. (21). Note that $v_k$, $\forall k$, is fixed during training. From Lemma 1, we know that $\|AA^\dagger - AX_k\|$ decays extremely fast. We can run fixed D-Minv for several iterations before training learnable D-Minv; in this case, $X_0$ is the output of a $\tilde K$-layer fixed D-Minv. Hence, without loss of generality (w.l.o.g.), we suppose the following assumption holds in this section.
Assumption 2 (Well-Bounded $E_0$). Assume $\big\|\tilde E_0\big\| = \|e_0\|_\infty \le \frac{1}{2}$.
We first provide several propositions for learnable D-Minv.
Proposition 4 (Upper Bound for Perturbation). Suppose that $|C_{\{k,l\}} - 1| = \delta < \frac{1}{8}$, $\forall(k, l+1) \in [K]\times[L]$; then we have
$$\big\|\tilde E_k\big\| = \|e_k\|_\infty \le e_0^{L^k} + \frac{17}{8}\delta, \quad \forall k \in [K].$$
Proof. We first show that $\|e_k\|_\infty < \frac{1}{2}$ for all $k$ by induction. By Eq. (22), for $k = 0$ we know that
$$\|e_1\|_\infty \le \|e_0\|_\infty^L + \big\|\hat E_0c_0\big\|_\infty \le \|e_0\|_\infty^L + \max_{1\le i\le d}\sum_{j=1}^{L+1}\big|[\hat E_0]_{i,j}\big|\,\|c_0\|_\infty \le \|e_0\|_\infty^L + 2\delta < \frac{1}{2}.$$
Now, assume $\|e_k\|_\infty < \frac{1}{2}$ holds. Then
$$\|e_{k+1}\|_\infty \le \|e_k\|_\infty^L + \big\|\hat E_kc_k\big\|_\infty \le \|e_k\|_\infty^L + \max_{1\le i\le d}\sum_{j=1}^{L+1}\big|[\hat E_k]_{i,j}\big|\,\|c_k\|_\infty \le \|e_k\|_\infty^L + 2\delta < \frac{1}{2}.$$
Since $\delta < \frac{1}{8}$, we have $2\delta < \frac{1}{4}$. By Lemma 3, we can conclude that
$$\|e_k\|_\infty \le e_0^{L^k} + (1 + \epsilon)2\delta, \quad \text{where } \frac{1}{16} > \epsilon \to 0 \text{ as } k \to \infty.$$
We finish the proof.
Proposition 5 (Upper Bound on the First-Order Derivative). Suppose that $|C_{\{k,l\}} - 1| = \delta < \frac{1}{8}$, $\forall(k, l+1) \in [K]\times[L]$, and $\|a\|_\infty \le 1$; then we have
$$\Big|\frac{\partial\ell(\mathbf{C},A)}{\partial C_{\{k,l\}}}\Big| \le 4\Big(\frac{1}{3}\Big)^l\bar b_v^2 d.$$
Proof. Note that
$$\frac{\partial\ell(\mathbf{C},A)}{\partial C_{\{k,l\}}} = \sum_{\hat k=k}^{K}\Big\langle\frac{\partial\ell(\mathbf{C},A)}{\partial e_{\hat k}},\ \frac{\partial e_{\hat k}}{\partial e_{\hat k-1}}\circ\cdots\circ\frac{\partial e_{k+1}}{\partial C_{\{k,l\}}}\Big\rangle.$$
By Eq. (22), we also have
$$\frac{\partial e_{k+1}}{\partial e_k} = \mathrm{Diag}\big(Le_k^{L-1} + \hat E'_kc_k\big),$$
where $\mathrm{Diag}(e)$ is the diagonal matrix with diagonal entries $e$, and $\hat E'_k \in \mathbb{R}^{d\times(L+1)} := \big[\mathbf{0};\ e_k^0;\ \cdots;\ Le_k^{L-1}\big]$; here $\mathbf{0} \in \mathbb{R}^d$ is the all-zero vector. Hence, it holds that
$$\Big\|\frac{\partial e_{k+1}}{\partial e_k}\Big\| = \big\|Le_k^{L-1} + \hat E'_kc_k\big\|_\infty \le Le_k^{L-1} + \frac{e_k}{(1 - e_k)^2}\cdot\delta, \qquad (26)$$
where $e_k = \|e_k\|_\infty$. By Proposition 4, we know that $e_k \le e_0^{L^k} + \frac{17}{8}\delta < \frac{1}{3}$. Thus, we have
$$\Big\|\frac{\partial e_{k+1}}{\partial e_k}\Big\| \le \frac{1}{4}. \qquad (27)$$
It is easy to see that
$$\frac{\partial\ell(\mathbf{C},A)}{\partial e_k} = \sum_{\hat k=k}^{K}\Big\langle\frac{\partial\ell(\mathbf{C},A)}{\partial e_{\hat k}},\ \frac{\partial e_{\hat k}}{\partial e_{\hat k-1}}\circ\cdots\circ\frac{\partial e_{k+1}}{\partial e_k}\Big\rangle.$$
Hence, we have
$$\Big\|\frac{\partial\ell(\mathbf{C},A)}{\partial e_k}\Big\|_2 \le \sum_{\hat k=k}^{K}\big\|v_{\hat k}\odot a\odot(v_{\hat k}\odot a\odot e_{\hat k})\big\|_2\cdot 4^{k-\hat k} < 2\bar b_v^2\sqrt d.$$
According to Eq. (22), we notice $\big\|\frac{\partial e_{k+1}}{\partial C_{\{k,l\}}}\big\|_2 = \|e_k^{l+1} - e_k^l\|_2 \le 2\|e_k^l\|_2$. Combining all of the above, we conclude that
$$\Big|\frac{\partial\ell(\mathbf{C},A)}{\partial C_{\{k,l\}}}\Big| \le \sum_{\hat k=k}^{K}4^{k-\hat k}\big(4\bar b_v^2\sqrt d\,\|e_k^l\|_2\big) \le 4\Big(\frac{1}{3}\Big)^l\bar b_v^2 d.$$
We finish the proof.
Proposition 6 (Second-Order Approximation). Define a set of coefficient collections around 1:
$$\mathcal{C}^* := \Big\{\mathbf{C} \ \big|\ \forall\, C_{\{k,l\}} \in \mathbf{C},\ |C_{\{k,l\}} - 1| \le \tfrac{1}{8}\Big\}, \quad (k,l) \in [K]\times[L],\ L \ge 4.$$
For any $\mathbf{C}_1, \mathbf{C}_2 \in \mathcal{C}^*$, we have
$$\ell(\mathbf{C}_1,A) = \ell(\mathbf{C}_2,A) + \Big\langle\frac{\partial\ell(\mathbf{C}_2,A)}{\partial\mathbf{C}}, \mathbf{C}_1 - \mathbf{C}_2\Big\rangle + O\big(\bar b_v^2dKL\big)\|\mathbf{C}_1 - \mathbf{C}_2\|_2^2.$$
Proof. Note that for any $\mathbf{C} \in \mathcal{C}^*$, we have
$$\frac{\partial^2\ell(\mathbf{C},A)}{\partial C_{\{k,l\}}\partial C_{\{k',l'\}}} = \Big\langle\frac{\partial\ell(\mathbf{C},A)}{\partial e_K}, \frac{\partial^2 e_K}{\partial C_{\{k,l\}}\partial C_{\{k',l'\}}}\Big\rangle + \Big\langle\frac{\partial e_K}{\partial C_{\{k,l\}}}, \frac{\partial^2\ell(\mathbf{C},A)}{\partial e_K\partial e_K}\frac{\partial e_K}{\partial C_{\{k',l'\}}}\Big\rangle$$
$$= \Big\langle\frac{\partial\ell(\mathbf{C},A)}{\partial e_K}, \frac{\partial e_K}{\partial e_{K-1}}\circ\frac{\partial^2 e_{K-1}}{\partial C_{\{k,l\}}\partial C_{\{k',l'\}}}\Big\rangle + \Big\langle\frac{\partial\ell(\mathbf{C},A)}{\partial e_K}, \frac{\partial^2 e_K}{\partial e_{K-1}\partial e_{K-1}}\Big(\frac{\partial e_{K-1}}{\partial C_{\{k,l\}}}, \frac{\partial e_{K-1}}{\partial C_{\{k',l'\}}}\Big)\Big\rangle + \Big\langle\frac{\partial e_K}{\partial C_{\{k,l\}}}, \frac{\partial^2\ell(\mathbf{C},A)}{\partial e_K\partial e_K}\frac{\partial e_K}{\partial C_{\{k',l'\}}}\Big\rangle$$
$$= \sum_{\hat k=\max\{k,k'\}}^{K}\Big\langle\frac{\partial\ell(\mathbf{C},A)}{\partial e_K}, \frac{\partial e_K}{\partial e_{K-1}}\circ\cdots\circ\frac{\partial^2 e_{\hat k+1}}{\partial e_{\hat k}\partial e_{\hat k}}\Big(\frac{\partial e_{\hat k}}{\partial C_{\{k,l\}}}, \frac{\partial e_{\hat k}}{\partial C_{\{k',l'\}}}\Big)\Big\rangle + \Delta(k,k',l,l'),$$
where we let $e_{K+1} := \ell(\mathbf{C},A)$ with a slight abuse of notation, and
$$\Delta(k,k',l,l') = \begin{cases} 0 & \text{if } (k,l) = (k',l'),\\[2pt] \Big\langle\frac{\partial\ell(\mathbf{C},A)}{\partial e_K}, \frac{\partial e_K}{\partial e_{K-1}}\circ\cdots\circ\frac{\partial(e_k^{l+1} - e_k^l)}{\partial C_{\{k',l'\}}}\Big\rangle & \text{otherwise.}\end{cases}$$
Hence we can obtain an upper bound for the entry $\frac{\partial^2\ell(\mathbf{C},A)}{\partial C_{\{k,l\}}\partial C_{\{k',l'\}}}$. From the proof of Proposition 5, we know that
$$\Big\|\frac{\partial e_{k+1}}{\partial e_k}\Big\| \le \frac{1}{4}, \quad \Big\|\frac{\partial e_{\hat k}}{\partial C_{\{k,l\}}}\Big\|_1 \le d\Big(\frac{1}{4}\Big)^{\hat k-k+l}, \quad \text{and} \quad \Big\|\frac{\partial e_{\hat k}}{\partial C_{\{k,l\}}}\Big\|_2 \le 2\sqrt d\Big(\frac{1}{4}\Big)^{\hat k-k+l}.$$
Note that: $\frac{\partial^2 e_{k+1}}{\partial e_k\partial e_k}$ = TDiag
| 1. What is the focus of the paper regarding matrix inversion, and what are the proposed approaches?
2. What are the strengths and weaknesses of the paper's ideas and contributions?
3. How does the reviewer assess the clarity and coherence of the paper's presentation?
4. What are the concerns regarding the paper's experiments and their relevance to the research question?
5. Are there any questions regarding the paper's objectives and their relation to the proposed methods? | Review | Review
This paper considers the learnable D-Minv (LD-Minv) procedure as an iterative formula to allow for a differentiable means of approximating matrix inversion. The results also extend to problems with orthogonality constraints by taking a manifold optimization approach.
The paper presents some nice ideas in terms of establishing a problem with learnable parameters for matrix inversion based on the higher-order iterative method. However, it is generally hard to follow, as the authors are unable to clearly state what exactly the contribution of the paper is. For example, they propose LD-Minv as a differentiable method "to perform matrix inverse efficiently." But, in eq.(1) they recall the well-known fixed-point problem (based on the Neumann series) that does exactly that, namely it (approximately) performs matrix inversion. Instead, they consider a parameterized version of the iterative formula (called LD-Minv), which they view as being implemented by a DNN, and for solving which they rely on a properly initialized gradient descent (Algorithm 1). However, while it is claimed that GD on LD-Minv will lead to a set of learned parameters that will somehow help with matrix inversion, there is generally confusion about approximately inverting a single fixed matrix, versus achieving small error over the learnable parameters w.r.t. a set of training data matrices A_i, i = 1,...,n. Understanding this key distinction is critical to seeing why we might even consider LD-Minv (as opposed to D-Minv) in the first place, as currently the paper is not so clear on this point. Note that the reader is told the guiding question of the work is "Does there exist an efficient and differentiable way to perform Minv and SVD?", and so it should then explain how the differentiable problem learned on training data helps us achieve this goal. As for the experiments, which are presented only in the appendix and are done predominantly for synthetic data, we again come to the question of whether the goal is the quickly invert a single matrix (as is seemingly claimed in the abstract), or to find a set of learned parameters that achieve small error on the training set.
Overall, this work could use much work in terms of both clarifying the objectives of the work, as well as explaining why such an approach is even considered in the first place, given the fact that, as already noted by the authors, "fixed D-Minv already converges extremely fast". |
ICLR | Title
Fast and Differentiable Matrix Inverse and Its Extension to SVD
Abstract
Matrix inverse (Minv) and singular value decomposition (SVD) are among the most widely used matrix operations in massive data analysis, machine learning, and statistics. Although well-studied, they still encounter difficulties in practical use due to inefficiency and non-differentiability. In this paper, we aim at solving the efficiency and differentiability issues through learning-based methods. First of all, to perform matrix inversion, we provide a differentiable yet efficient way, named LD-Minv, which is a learnable deep neural network (DNN) with each layer being an L-th order matrix polynomial. We show that, with proper initialization, the difference between LD-Minv's output and the exact pseudo-inverse is in the order $O(\exp\{-L^K\})$, where K is the depth of the LD-Minv. Moreover, by learning from data, LD-Minv further reduces the difference between the output and the exact pseudo-inverse. We prove that gradient descent finds an $\epsilon$-error minimum within $O(nKL\log(1/\epsilon))$ steps for LD-Minv, where n is the data size. At last, we provide the generalization bound for LD-Minv in both under-parameterized and over-parameterized settings. As an application of LD-Minv, we provide a learning-based optimization method to solve the problem with orthogonality constraints and utilize it to differentiate SVD (D-SVD). We also provide a theoretical generalization guarantee for D-SVD. Finally, we demonstrate the superiority of our methods on synthetic and real data in the supplementary materials.
N/A
1 INTRODUCTION
Matrix inverse (including matrix pseudo-inverse) and singular value decomposition are fundamental linear algebra operations, ubiquitous in machine learning, statistics, signal processing, and other fields. In general, solving scientific computing or optimization problems often need to perform these two operators, such as matrix (pseudo) inverse for least squares regression, singular value decomposition (SVD) for dimensionality reduction (PCA), the low-rank related problems (Liu et al., 2010; Zhang et al., 2018; Liu et al., 2013), the graph-based issues (Wu et al., 2020a; Von Luxburg, 2007), and even for training deep neural networks (DNNs) with structured layers (Ionescu et al., 2015). Nevertheless, Minv and SVD appear less and less often in the modern machine learning tasks. One reason is inefficiency. Computing the SVD and the Minv can be extremely time-consuming for large-scale problems; however, efficiency is a significant concern in the current big data and deep learning era. Besides, non-differentiability is considered another reason that blocks the use of SVD and Minv. Usually, most prevalent methods for training DNNs are first-order and are based on the backpropagation; however, Minv and SVD are not necessarily continuous functions of the matrix entries (Stewart, 1969). Therefore, derivatives are not always existent. Although Minv and SVD may be backprop-able due to some specific implementation, they are unstable and are essentially nondifferentiable. Thus the gradients cannot pass through them when backpropagating. Considering the above problems, one natural question emerges.
Does there exist an efficient and differentiable way to perform Minv and SVD?
Over the last decade, many sketches-based methods have been developed, e.g., Nelson & Nguyen (2013); Meng & Mahoney (2013); Cohen et al. (2015); Indyk et al. (2019). The main idea of the sketches-based techniques is to use random projections, which is efficiently computable, to reduce the problem size before performing SVD and Minv. However, they do not solve the problem of non-differentiability as smaller-sized SVD and Minv still needs to be computed.
Recently, “differentiable learning-based” methods (D-LbM) have attracted considerable attention (Chen et al., 2018; Indyk et al., 2019; Liu et al., 2019; Wu et al., 2020b; Xie et al., 2019). These methods usually unroll the classical optimization or numerical iterative algorithms and introduce the learnable parameters to obtain a learnable DNN. In general, the iterative algorithms inspired DNNs consist of differentiable operators such as matrix polynomials. Benefiting from training on the data, D-LbMs can execute in much fewer iterations with similar per-iteration cost as original algorithms but obtain much better performance. Many empirical results, e.g., Gregor & LeCun (2010); Yang et al. (2016); Peng et al. (2018), show that a well-trained DNN provided by D-LbM—compared with the original optimization algorithm—can obtain an almost equally good solution using one or even two order-of-magnitude fewer iterations. Based on these observations, we aim to find a learning-based iterative way to differentiate Minv and SVD in this paper.
However, all these D-LbMs suffer from two common problems. One is the convergence guarantee. The forward process of D-LbMs may diverge, even if the original unrolled iterative algorithm has a well-behaved convergence guarantee. In fact, most D-LbM methods have little or no convergence guarantees. To the best of our knowledge, there is no work to reveal the behavior of D-LbM during training. How does the training loss decrease? How big is the gap between the output of D-LbM and that of the original algorithm? All these questions are still open. Another problem is, both for D-LbM and the original algorithm, a limited theory exists when the input data obey a common distribution (e.g., data drawn from a low-dimensional manifold). Essentially, by learning from data, D-LbMs can obtain a problem-dependent parameter bias. It can help D-LbMs get the better performance on a specific data distribution within much fewer computation cost, instead of simply fixing the parameter ahead of time like traditional iterative methods. But there is no mathematical result to describe this phenomenon strictly. Moreover, it is unknown whether the trained D-LbMs generalize well on data from the same distribution.
Remarkably, in this paper we provide a learnable, differentiable and efficient way to perform Minv, named Learnable Differentiable Matrix Inverse (LD-Minv), and solve the above two problems for our proposed LD-Minv. First of all, LD-Minv is a DNN with each layer being an L-th order matrix polynomial, and the coefficient of each power is learnable. We show that LD-Minv converges to the Minv or pseudo-inverse with order L if the coefficients of the polynomial are set properly. Namely, the difference between the output of the DNN and the exact pseudo-inverse is in the order $O(\exp\{-L^K\})$, where K is the depth of LD-Minv. Secondly, by learning from data, LD-Minv can further improve the approximation precision on some specific data distribution. Namely, the distance between the output and the exact pseudo-inverse can be arbitrarily small. Specifically, the training loss converges to zero exponentially fast, i.e., gradient descent (GD) finds an $\epsilon$-error minimum in $\ell_2$ regression using at most $O(nKL\log(1/\epsilon))$ iterations, where n is the data size. Finally, we provide the generalization bound for LD-Minv in both under-parameterized and over-parameterized settings.
With LD-Minv at hand, as a direct application, we propose a learning-based optimization method to solve convex problems with non-convex orthogonality constraints, and we then use it to differentiate SVD. We also provide a generalization guarantee for our D-SVD. Unlike the previous work on differentiable SVD (Indyk et al., 2019), which is based on the power method and needs an assumption on the gap between singular values to ensure convergence, our method converges to the solution without any gap assumption. In summary, our main contributions include:
• We propose a differentiable method, LD-Minv, to perform matrix inverse efficiently. LD-Minv, a DNN whose every layer is a learnable L-th order matrix polynomial, can also be useful for accelerating and differentiating general matrix decompositions. We show that LD-Minv converges to the matrix pseudo-inverse with order L.
• By learning, LD-Minv can further improve the approximation performance on an underlying data distribution. We prove that GD finds an ε-error minimum of the ℓ2 regression loss using at most O(nKL log(1/ε)) iterations under mild conditions. Moreover, we also provide a generalization bound for LD-Minv: the empirical Rademacher complexity of the loss function class is bounded by Õ(min{d³L/K^{1/2}, √(d³K/n)}), where Õ(·) hides log factors, and K and d are the depth and the width of LD-Minv, respectively.
• As a direct application, we further provide a general learning-based framework for solving problems with orthogonality constraints. This D-LbM allows us to differentiate SVD. Finally, we also provide a generalization guarantee for our D-SVD.
2 DIFFERENTIABLE MATRIX INVERSE
We introduce the proposed D-Minv in this section. We first present an intuitive idea showing how to approximate the matrix inverse iteratively. Then we generalize the iterative method to obtain our fixed and learnable D-Minv, respectively.
2.1 FIXED DIFFERENTIABLE MATRIX INVERSE
Given a matrix A ∈ R^{d×d}, we want to approximate its inverse A^{-1} by a matrix X ∈ R^{d×d}, i.e., AX ≈ I, where I ∈ R^{d×d} is the identity matrix. Ideally, X is a fixed point of the following equation:

X = X(AX)^{-1} = X ( ∑_{l=0}^{∞} (I − AX)^l ), (1)

where the last equality follows from the Neumann series when AX ≈ I, and (I − AX)^l is the l-th power of the matrix I − AX.
Inspired by the above fixed-point relation, we can compute the matrix X iteratively. Hence, we consider the following higher-order iterative method, which applies an L-th order matrix polynomial in each iteration, where L ≥ 4:

X_{k+1} = X_k ( ∑_{l=0}^{L−1} E_k^l ), E_k = I − AX_k. (2)
Note that DNNs can easily implement this method: one layer corresponds to one iterative step. Since there are no parameters to learn in Eq. (2), and it only involves matrix multiplications (and is thus differentiable), we name it fixed D-Minv1. One may worry that the computational cost of each iteration in Eq. (2) is high. However, as shown in Lemma 1, a higher-order polynomial usually implies a faster convergence rate: to reach the same precision with different L, the total computational cost grows only as O(L/ln(L)), a very slow growth rate w.r.t. L. Although simple, fixed D-Minv converges to the inverse A^{-1} or A† extremely fast.
Lemma 1 (Approximation Speed). The sequence {X_k ∈ R^{d_1×d_2}} generated by Eq. (2) converges to the Moore–Penrose inverse A† with order L provided that X_0 = (1/c_0) A^⊤ with c_0 > (1/2)‖A‖², i.e.,
‖AA† − AX_k‖ = e_0^{L^k}, in which e_0 = ‖AA† − AX_0‖ < 1,
where ‖·‖ is the spectral norm.
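To make the recursion concrete, the following is a minimal PyTorch sketch of the fixed D-Minv iteration in Eq. (2); the matrix sizes, the depth K, and the choice c_0 = ‖A‖² (which satisfies c_0 > ‖A‖²/2) are illustrative assumptions, not values prescribed by the paper.

```python
import torch

def fixed_dminv(A, K=6, L=4):
    """K iterations of the L-th order update X_{k+1} = X_k * sum_{l=0}^{L-1} (I - A X_k)^l."""
    d1, _ = A.shape
    c0 = torch.linalg.matrix_norm(A, ord=2) ** 2   # any c0 > ||A||^2 / 2 works
    X = A.t() / c0                                  # X_0 = A^T / c0
    I = torch.eye(d1, dtype=A.dtype)
    for _ in range(K):
        E = I - A @ X                               # residual E_k
        S, P = I.clone(), I.clone()
        for _ in range(1, L):                       # S = sum_{l=0}^{L-1} E^l
            P = P @ E
            S = S + P
        X = X @ S
    return X

A = torch.randn(80, 120, dtype=torch.float64) / 120 ** 0.5
X = fixed_dminv(A)
print(torch.norm(A @ X @ A - A).item())             # ~ 0
print(torch.norm(X - torch.linalg.pinv(A)).item())  # ~ 0
```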
We can see that fixed D-Minv converges with order L, i.e., even a shallow D-Minv can approximate the matrix inverse very well. Moreover, with the X_0 given in Lemma 1, the column space and the row space of X_k are exactly correct from the beginning.
Proposition 1 (Invariant Column and Row Spaces). Under the same setting as Lemma 1, for any k ≥ 0, it holds that:
X_k A A† = X_k, A† A X_k = X_k.
By Proposition 1, it is easy to see that the zero singular values remain unchanged during the computation, which indicates that D-Minv works consistently on full-rank and rank-deficient, square and rectangular matrices. Note that this proposition also holds for the upcoming LD-Minv.
2.2 LEARNABLE DIFFERENTIABLE MATRIX INVERSE
We consider the learnable D-Minv (LD-Minv) with the following iterative formula:
X_{k+1} = X_k ( ∑_{l=0}^{L−1} C_{k,l} E_k^l ), E_k = I − AX_k, L ≥ 4, (3)
1Higher-order method Eq. (2) is already well-known in the applied mathematics community in another equivalent form, see Climent et al. (2001); Amat et al. (2003); Li et al. (2011).
where (k, l + 1) ∈ [K] × [L], the C_{k,l} ∈ R are learnable coefficients, and ‖·‖_F denotes the Frobenius norm. LD-Minv also belongs to the category of LbMs that unroll conventional numerical iterative methods. As discussed in the introduction, two problems, namely the lack of a convergence analysis for the training procedure and the lack of a generalization guarantee for the trained DNN, have blocked further theoretical exploration of these LbMs. In contrast to previous works, we obtain favorable theoretical results on both problems for our LD-Minv. First, although fixed D-Minv already converges extremely fast, LD-Minv can still perform much better on the training data under mild conditions. Specifically, ‖X_K − A†‖ converges to zero exponentially fast during training, i.e., GD finds an ε-error minimum of the ℓ2 regression loss using at most O(nKL log(1/ε)) iterations2. Second, LD-Minv has a tight generalization bound that guarantees its performance on an unseen matrix drawn from the same distribution as the training instances. We provide the rigorous theoretical results in Sec. 4.
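As a concrete illustration, here is a minimal PyTorch sketch of LD-Minv as a module with learnable coefficients C_{k,l}; the initialization scale, the batch of random matrices, and the use of only the last layer in the loss are simplifying assumptions for this example.

```python
import torch
import torch.nn as nn

class LDMinv(nn.Module):
    """Sketch of LD-Minv (Eq. (3)): K layers, each an L-th order polynomial with learnable C[k, l]."""
    def __init__(self, K=6, L=4, init_scale=1e-2):
        super().__init__()
        self.K, self.L = K, L
        # coefficients initialized near the oracle value 1, as in Algorithm 1
        self.C = nn.Parameter(1.0 + init_scale * torch.randn(K, L))

    def forward(self, A):
        # A: (batch, d1, d2); returns an approximate pseudo-inverse of shape (batch, d2, d1)
        b, d1, _ = A.shape
        c0 = torch.linalg.matrix_norm(A, ord=2, keepdim=True) ** 2
        X = A.transpose(-1, -2) / c0
        I = torch.eye(d1, dtype=A.dtype, device=A.device).expand(b, d1, d1)
        for k in range(self.K):
            E = I - A @ X
            S, P = self.C[k, 0] * I, I
            for l in range(1, self.L):
                P = P @ E
                S = S + self.C[k, l] * P
            X = X @ S
        return X

A = torch.randn(8, 30, 40) / 40 ** 0.5
net = LDMinv()
X = net(A)
loss = ((A @ X @ A - A) ** 2).mean()   # Eq. (4) additionally averages over intermediate layers
loss.backward()                        # gradients flow to the coefficients C
```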
Discussion 1. One may note that the final iterate X_K can be written as a matrix polynomial in A, and argue that approximating the Minv by a polynomial is well studied: zero training loss can be obtained when the polynomial order is large enough, and the approximation error controls the generalization error. To make the distinction clear, we offer three clarifications. First, most traditional results only describe the worst-case behavior, which must hold for any input matrix and can be loose in extreme cases. LD-Minv instead targets a more realistic situation in which the input matrices obey a common distribution, such as the affinity matrices of Facebook users or sampled CT and MRI images; how to learn from data and design a more efficient method in this setting remains open, not to mention a theoretical generalization guarantee for a specific data distribution. Second, the traditional polynomial approximation method is sensitive to the polynomial order and is prone to overfitting when the order is over-parameterized, say of order nd. Without the exact data distribution density, it is impossible to choose an order that ensures zero training loss and a small generalization error simultaneously; this would require a complex, data-driven regularization strategy. In sharp contrast, LD-Minv is robust to the order, and our theoretical results (zero training loss and a small generalization error) hold from the under-parameterized to the over-parameterized regime. Third, exact polynomial fitting only implies the existence of a zero-training-loss solution, whereas our results describe the convergence behavior of training (i.e., which solution is actually reached). There is obviously a large gap between the convergence of training and the mere existence of a solution.
Note that we also considered a much simpler matrix polynomial, X_K = ∑_{l=0}^{L} C_l A^l. Learning the coefficients {C_l}_{l=1}^{L} then becomes an easy-to-solve convex regression problem, unlike the non-convex one induced by Eq. (3). Unfortunately, this attempt failed due to stability issues: the coefficients of different powers vary significantly in magnitude, which causes poor generalization performance.
Discussion 2. Most matrix iterative methods suffer from ill-conditioning, which usually brings instability and makes both the analysis and the computation hard to carry out; e.g., for D-Minv, a large condition number may significantly slow down the convergence (see Lemma 1). A learning-based method can greatly alleviate this problem by learning from data: LD-Minv easily beats D-Minv when the condition number of the input matrix is large, and the learned coefficients remain valid on unseen data. See the experimental part for more details.
2.3 TRAINING SETTINGS
We now describe the training settings for LD-Minv. Denote by {A_i}_{i=1}^n the training set consisting of n samples. Let X_{k,i} ∈ R^{d_1×d_2} be the output of the k-th layer on the i-th training sample A_i ∈ R^{d_2×d_1}. We consider a typical regression loss:

L_n(C) := (1 / (2nK)) ∑_{i=1}^{n} ∑_{k=1}^{K} ‖A_i X_{k,i} A_i − A_i‖_F², (4)

where C = {C_{k,l}, (k, l + 1) ∈ [K] × [L]} is the collection of all learnable parameters. Note that when X = A†, the properties of the Moore–Penrose inverse give AXA = A, which further implies XAX = X in our invariant-space case; see Proposition 1. Therefore, we only use the difference term (AXA − A) in the training loss.
2Please distinguish the two “convergence” here. One describes Xk approaching A† w.r.t. k and L, and the other refers to the behavior of LD-Minv’s loss w.r.t. training iteration.
Algorithm 1 Gradient descent (GD) with proper initialization for LD-Minv
Input: Training data {A_i}_{i=1}^n, number of iterations T, step size η.
1: Generate each C_{k,l} ∈ C such that |C_{k,l} − 1| = O(K^{−α}), where α > 1/4.
2: Set X_{0,i} = (1/c_0) A_i^⊤ or X̃_i, where c_0 > (1/2)‖A_i‖² and X̃_i is the output of a fixed D-Minv.
3: for t = 1, · · · , T do
4:   Update C^{(t+1)} = C^{(t)} − η · D_{[C=C^{(t)}]} L_n.
5: end for
Output: C^{(0)}, . . . , C^{(T)}.
Thanks to the differentiability of LD-Minv, we adopt GD to minimize the training loss and obtain proper coefficients C. We present the initialization strategy and training process in Algorithm 1, where the first derivative of the differentiable function L_n(C) : R^{K×L} → R at C^{(t)} is denoted by D_{[C=C^{(t)}]} L_n(·) := ∂L_n(C^{(t)})/∂C ∈ R^{K×L}. When K is large, one may notice that our coefficients are initialized close to the oracle value 1, which may already provide a relatively good initial loss L_n(C^{(0)}). Notably, this does NOT mean that we treat learning and initialization as a perturbation of the fixed D-Minv and bound the loss by the already-good fixed D-Minv performance plus a perturbation error. On the contrary, as we show in the theoretical results, learnability indeed matters, and LD-Minv obtains better performance than the fixed version through training. The real purpose of this initialization is to exploit D-Minv's local Lipschitz continuity near C = 1, where 1 is the all-one vector. This continuity reduces the complexity of the optimization and benefits the generalization of D-Minv.
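Below is a compact, self-contained sketch of Algorithm 1 using plain gradient descent and autograd; the data sizes, step size, number of iterations, and the initialization scale K^{−α} are illustrative choices, not the exact experimental settings.

```python
import torch

def ld_minv_forward(A, C):
    """Forward pass of LD-Minv (Eq. (3)) on a batch A, returning all K intermediate outputs."""
    K, L = C.shape
    b, d1, _ = A.shape
    c0 = torch.linalg.matrix_norm(A, ord=2, keepdim=True) ** 2
    X = A.transpose(-1, -2) / c0
    I = torch.eye(d1, dtype=A.dtype).expand(b, d1, d1)
    outs = []
    for k in range(K):
        E = I - A @ X
        S, P = C[k, 0] * I, I
        for l in range(1, L):
            P = P @ E
            S = S + C[k, l] * P
        X = X @ S
        outs.append(X)
    return outs

K, L, alpha, eta = 8, 4, 0.5, 1e-3            # eta of order 1/(dL), as suggested by Theorem 1
A = torch.randn(16, 30, 40, dtype=torch.float64) / 40 ** 0.5
C = (1.0 + K ** (-alpha) * torch.randn(K, L, dtype=torch.float64)).requires_grad_(True)
for t in range(200):
    outs = ld_minv_forward(A, C)
    loss = sum(((A @ X @ A - A) ** 2).sum() for X in outs) / (2 * A.shape[0] * K)  # Eq. (4)
    loss.backward()
    with torch.no_grad():
        C -= eta * C.grad
        C.grad.zero_()
print(float(loss))
```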
3 LEARNING-BASED OPTIMIZATION WITH ORTHOGONALITY CONSTRAINTS
Consider the following general orthogonality-constrained optimization problem:

min_U f(U), s.t. U^⊤ U = I, (5)
where the objective function f(U) : R^{m×r} → R is differentiable and convex. The usual way to solve Eq. (5) is to perform manifold GD on the Stiefel manifold, which evolves along the manifold geodesics. Specifically, manifold GD first updates the variable in the manifold tangent space along the projected gradient of the objective, then maps the updated variable from the tangent space back to a feasible point on a geodesic, and repeats these two steps until convergence (Edelman et al., 1998). Usually, the mapping step is non-differentiable and inefficient. Fortunately, Wen & Yin (2013) and Ren & Lin (2013) developed a technique that solves the optimization problem with orthogonality constraints approximately and only involves matrix multiplication and matrix inverse.
We let U ∈ R^{m×r}, where r is the dimension of the Stiefel manifold. Denote by G ∈ R^{m×r} the gradient of the objective f(U) in Eq. (5) w.r.t. U at U_k; then the projection of G onto the tangent space of the Stiefel manifold at U_k is PU_k3, where P = G U_k^⊤ − U_k G^⊤ and P ∈ R^{m×m}. Instead of parameterizing the geodesic of the Stiefel manifold along direction P using the exponential map, we generate feasible points to update U_k by the following Cayley transform (Wen & Yin, 2013):
U_{k+1} = U(t) = C(t) U_k, where C(t) = ( I + (t/2) P )^{-1} ( I − (t/2) P ), (6)
where I is the identity matrix and t ∈ R is the step size used to update the current U_k. In other words, U(t) is a re-parameterized local geodesic in t on the Stiefel manifold. One can easily verify that U(t) has the following properties, given U_k^⊤ U_k = I: (1) (d/dt) U(0) = −P U_k; (2) U(t) is smooth in t; (3) U(0) = U_k; (4) U(t)^⊤ U(t) = I, ∀t ∈ R. It is evident that, if t lies in a proper range, U_{k+1} attains a lower objective value than U(0) = U_k on the Stiefel manifold. Besides, computing U_{k+1} only involves matrix multiplication and matrix inverse, and the latter can be performed by our LD-Minv in Eq. (3). Therefore, we replace the inverse (I + (t/2)P)^{-1} in Eq. (6) by LD-Minv(I + (t/2)P), which yields a learning-based optimization method for the problem with orthogonality constraints in Eq. (5).
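For concreteness, here is a small PyTorch sketch of one Cayley-transform update as in Eq. (6); an exact linear solve stands in for LD-Minv, and the matrix sizes and step size are arbitrary assumptions for the check.

```python
import torch

def cayley_step(U, G, t):
    """One feasible Stiefel-manifold update via Eq. (6); the solve plays the role of LD-Minv."""
    m = U.shape[0]
    P = G @ U.t() - U @ G.t()                 # skew-symmetric direction
    I = torch.eye(m, dtype=U.dtype)
    return torch.linalg.solve(I + 0.5 * t * P, (I - 0.5 * t * P) @ U)

torch.manual_seed(0)
m, r = 20, 5
U, _ = torch.linalg.qr(torch.randn(m, r, dtype=torch.float64))
G = torch.randn(m, r, dtype=torch.float64)    # stand-in for a gradient of f
U_new = cayley_step(U, G, t=0.1)
print(torch.norm(U_new.t() @ U_new - torch.eye(r, dtype=torch.float64)).item())  # ~ 1e-15
```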
3We choose the canonical metric on the tangent space as the equipped Riemannian metric.
3.1 APPLICATION: DIFFERENTIABLE SVD BY LD-MINV
In this part, we show how to utilize the above general learning-based optimization framework to differentiate SVD. Given an arbitrary matrix M ∈ R^{m×n̂} and its skinny SVD M = Ũ S̃ Ṽ^⊤, von Neumann's trace inequality, ⟨M, X⟩ ≤ ∑_i σ_i(M) σ_i(X), implies that {Ũ ∈ R^{m×r}, Ṽ ∈ R^{n̂×r}} is an optimal solution of the following optimization problem:
min_{U,V} f(U, V) := (1/2) ‖D_c(Λ)‖_F² − ⟨ M, U V^⊤ ⟩, s.t. U^⊤ U = V^⊤ V = I, (7)
where Λ := U^⊤ M V, Diag(·) keeps only the diagonal entries of the input matrix, and D_c(Λ) := (Id − Diag)(Λ) := Λ − Diag(Λ) for any input matrix Λ. Note that solving the problem in Eq. (7) is equivalent to performing SVD. Let {M, U_0, V_0} be the inputs of our Differentiable SVD (D-SVD). We adopt the Cayley transform in Eq. (6) to solve the problem in Eq. (7) and replace the matrix inverse in it by our proposed LD-Minv in Eq. (3). W.l.o.g., in the following we only describe the updating strategy for U, since V is updated analogously. We first compute the gradient of the objective f(U, V) w.r.t. U at {U_k, V_k}, which is G = M V_k ( D_c( V_k^⊤ M^⊤ U_k ) − I ). Then we find a geodesic curve U(t) along this gradient on the Stiefel manifold for updating U_k. Referring to Eq. (6), the curve is U(t) = ( I + (t/2) P )^{-1} ( I − (t/2) P ) U_k, where P = G U_k^⊤ − U_k G^⊤. Given the step size t, we can use
LD-Minv to perform the matrix inverse of the factor ( I + (t/2) P ). In summary, D-SVD consists of two steps: (1) find a proper step size t; (2) update {U_k, V_k} by the Cayley transform. To find a proper step size t, we consider the following problem:

t*_U = argmin_{0 ≤ t ≤ ε} f(t) := f(U(t), V_k), (8)
where ε is a given parameter that keeps the magnitude of t*_U small enough. Notice that if t is small enough that ‖(t/2) P‖ < 1, then (I + (t/2) P)^{-1} = I + ∑_{l=1}^{∞} (−(t/2) P)^l, which implies U(t) = ( I + 2 ∑_{l=1}^{∞} (−(t/2) P)^l ) U_k. Since t* in Eq. (8) is small, we can approximate f(t) by its second-order Taylor expansion at t = 0:

f(t) = f(0) + f′(0) · t + (1/2) f″(0) · t² + O(t³), (9)
where f ′(0) and f ′′(0) are the first and the second order derivatives of f(t) evaluated at 0, respectively. These two derivatives have closed form and can be computed efficiently. Consequently, we can obtain an approximated optimal solution t∗ via:
t*_U = min{ ε, t̃ }, where ε < 2/‖P‖ and t̃ = −f′(0)/f″(0), (10)
provided that:
f′(0) = ⟨ D_c( U_k^⊤ M V_k ), D_c( U_k^⊤ P M V_k ) ⟩ + ⟨ M V_k, P U_k ⟩,
f″(0) = ‖ D_c( U_k^⊤ P M V_k ) ‖_F² + ⟨ D_c( U_k^⊤ M V_k ), D_c( U_k^⊤ P² M V_k ) ⟩ − ⟨ M V_k, P² U_k ⟩. (11)
Then we can update U by U(t*_U), i.e., U_{k+1} = U(t*_U). Thanks to the cyclic property of the trace operator, V_{k+1} shares a similar update strategy with U_{k+1}.
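As a sanity check on these closed forms, the following PyTorch snippet compares the analytic f′(0) from Eq. (11) with a finite-difference estimate along the Cayley curve; an exact solve stands in for LD-Minv, and all sizes are arbitrary assumptions.

```python
import torch

torch.manual_seed(0)
m, nh, r = 12, 9, 4
M = torch.randn(m, nh, dtype=torch.float64)
U, _ = torch.linalg.qr(torch.randn(m, r, dtype=torch.float64))
V, _ = torch.linalg.qr(torch.randn(nh, r, dtype=torch.float64))

def offdiag(X):                       # D_c(Lambda) = Lambda - Diag(Lambda)
    return X - torch.diag(torch.diag(X))

def f(U_, V_):                        # objective of Eq. (7)
    Lam = U_.t() @ M @ V_
    return 0.5 * offdiag(Lam).pow(2).sum() - (M * (U_ @ V_.t())).sum()

G = M @ V @ (offdiag(V.t() @ M.t() @ U) - torch.eye(r, dtype=torch.float64))
P = G @ U.t() - U @ G.t()
Lam = U.t() @ M @ V
f1 = (offdiag(Lam) * offdiag(U.t() @ P @ M @ V)).sum() + (M @ V * (P @ U)).sum()

def U_of_t(t):                        # Cayley curve, exact inverse instead of LD-Minv
    I = torch.eye(m, dtype=torch.float64)
    return torch.linalg.solve(I + 0.5 * t * P, (I - 0.5 * t * P) @ U)

h = 1e-6
fd = (f(U_of_t(h), V) - f(U_of_t(-h), V)) / (2 * h)
print(float(f1), float(fd))           # the two values should agree closely
```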
Provided the input {M, U_k, V_k}, the complete iterative steps of our D-SVD are as follows4:

G_U = M V_k ( D_c( V_k^⊤ M^⊤ U_k ) − I ), P_U = G_U U_k^⊤ − U_k G_U^⊤,
t*_U = min{ 2/‖P_U‖, −f′(U(t), V_k) / f″(U(t), V_k) } + t̃_{U_k}, U_{k+1} = Cayley(t*_U, C_{U_k}, P_U, U_k),
G_V = M^⊤ U_{k+1} ( D_c( U_{k+1}^⊤ M V_k ) − I ), P_V = G_V V_k^⊤ − V_k G_V^⊤,
t*_V = min{ 2/‖P_V‖, −f′(U_{k+1}, V(t)) / f″(U_{k+1}, V(t)) } + t̃_{V_k}, V_{k+1} = Cayley(t*_V, C_{V_k}, P_V, V_k), (12)
4For convenience and clarity, we omit the layer superscripts in the updating rules.
Algorithm 2 Forward Propagation for D-SVD
Input: Training data {M_i, U_{i,0}, V_{i,0}}_{i=1}^n, depth K_svd of D-SVD, number of iterations T_inv, step size η_inv, and D-SVD's current parameters t̃.
1: for k = 0, . . . , K_svd do
2:   Calculate P_{U,i} and t*_{U,i} with {M_i, U_{i,k}, V_{i,k}}_{i=1}^n by Eq. (12).
3:   Train LD-Minv with parameters C_{U_k} by Algorithm 1 with data {A_i}_{i=1}^n, number of iterations T_inv and step size η_inv, where A_i = (I + t*_{U,i} P_{U,i} / 2).
4:   Repeat Steps 2 and 3 with the roles of U and V switched, using the input {U_{i,k+1}, V_{i,k}}_{i=1}^n.
5: end for
Output: {C_{U_k}, C_{V_k}}_{k=1}^{K_svd} and {U_{i,k+1}, V_{i,k}}_{i=1,k=1}^{n,K_svd}.
Algorithm 3 Joint Training for D-SVD and LD-Minv
Input: Training data {M_i}_{i=1}^n, numbers of iterations T_inv and T_svd, step sizes η_svd and η_inv.
1: Generate {U_{i,0}, V_{i,0}}_{i=1}^n randomly from the r-dimensional Stiefel manifold. Set t̃^{(0)} = 0.
2: for t = 1, · · · , T_svd do
3:   Forward propagate by Algorithm 2.
4:   Update t̃^{(t+1)} = t̃^{(t)} − η_svd · D_{[t̃=t̃^{(t)}]} M_n, where M_n := (1/(nK_svd)) ∑_{i=1}^n ∑_{k=1}^{K_svd} f(U_{i,k}, V_{i,k}).
5: end for
Output: {U_{i,k+1}, V_{i,k}}_{i=1,k=1}^{n,K_svd}.
where Cayley(t, C, P, U) := LD-Minv(I + tP/2, C) (I − tP/2) U, t̃ := {(t̃_{U_k}, t̃_{V_k}) ∈ R²}_{k=1}^{K_svd} is the collection of learnable parameters of D-SVD, and LD-Minv(·, C) is an LD-Minv module with parameters C. Note that a DNN can easily implement our D-SVD: each layer implements the steps in Eq. (12), which are the specific iterative procedures of the general learning-based optimization with orthogonality constraints described above. We adopt a different LD-Minv for each layer and provide the training process of D-SVD in Algorithms 2 and 3. By introducing LD-Minv into the Cayley transform, we bypass the non-differentiable exact Minv and solve the problem in Eq. (7) (i.e., perform SVD) in a differentiable and learning-based way.
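The following self-contained sketch unrolls a few D-SVD layers following Eq. (12); an exact solve replaces LD-Minv, the learnable offsets t̃ are set to zero, and the simple step-size rule (with a crude guard when f″(0) ≤ 0) is an assumption of this illustration rather than the paper's exact procedure.

```python
import torch

torch.manual_seed(1)
m, nh, r = 20, 15, 5
M = torch.randn(m, nh, dtype=torch.float64)
U, _ = torch.linalg.qr(torch.randn(m, r, dtype=torch.float64))
V, _ = torch.linalg.qr(torch.randn(nh, r, dtype=torch.float64))

def offdiag(X):
    return X - torch.diag(torch.diag(X))

def f(U_, V_, M_):                    # objective of Eq. (7)
    Lam = U_.t() @ M_ @ V_
    return 0.5 * offdiag(Lam).pow(2).sum() - (M_ * (U_ @ V_.t())).sum()

def half_step(U_, V_, M_):
    """Update the left factor; the right factor uses the same rule applied to M^T."""
    I_r = torch.eye(U_.shape[1], dtype=U_.dtype)
    G = M_ @ V_ @ (offdiag(V_.t() @ M_.t() @ U_) - I_r)
    P = G @ U_.t() - U_ @ G.t()
    Lam = U_.t() @ M_ @ V_
    f1 = (offdiag(Lam) * offdiag(U_.t() @ P @ M_ @ V_)).sum() + (M_ @ V_ * (P @ U_)).sum()
    f2 = offdiag(U_.t() @ P @ M_ @ V_).pow(2).sum() \
         + (offdiag(Lam) * offdiag(U_.t() @ P @ P @ M_ @ V_)).sum() \
         - (M_ @ V_ * (P @ P @ U_)).sum()
    t_try = (-f1 / f2).item() if f2.item() > 0 else float("inf")
    t = min(2.0 / torch.linalg.matrix_norm(P, ord=2).item(), t_try)
    I_m = torch.eye(P.shape[0], dtype=U_.dtype)
    return torch.linalg.solve(I_m + 0.5 * t * P, (I_m - 0.5 * t * P) @ U_)

for k in range(10):                   # K_svd unrolled layers
    U = half_step(U, V, M)
    V = half_step(V, U, M.t())
    print(k, float(f(U, V, M)))       # objective of Eq. (7) along the layers
```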
4 MAIN RESULTS
In this section, we provide the main theoretical results of D-Minv, including the linear convergence of GD during training and the generalization performance. All the detailed proofs of these theorems are provided in the supplementary material. The theoretical results for D-SVD are presented in the supplementary material due to limited space.
4.1 CONVERGENCE RATE OF GD
We consider the general LD-Minv and large training data size n in this section. We first make the following assumption on the training data.
Assumption 1 (Bounded Singular Values). Given the training matrices {A_i ∈ R^{d_1×d_2}}_{i=1}^n, we assume that the vectors of positive singular values are d-dimensional, where d ≤ min{d_1, d_2}, and that these positive singular values are bounded from below and above, i.e.,
‖x_i‖_∞ ≤ 1 and min_{j∈[d]} [x_i]_j ≥ b̄_a > 0, where x_i = σ_+(A_i) ∈ R^d, ∀i ∈ [n],
and σ_+(·) extracts the positive singular values.
In general, the boundedness assumption on the singular values is weak and easy to satisfy; see the discussion in the supplementary material (Section C.1.2).
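As a quick empirical illustration of Assumption 1 (not a statement from the paper), one can normalize a batch of random matrices by their spectral norms and inspect the resulting singular-value bounds:

```python
import torch

A = torch.randn(32, 40, 60, dtype=torch.float64) / 60 ** 0.5
A = A / torch.linalg.matrix_norm(A, ord=2, keepdim=True)   # enforce ||sigma(A_i)||_inf <= 1
s = torch.linalg.svdvals(A)                                 # shape (32, 40), all positive a.s.
print(s.max().item(), s.min().item())                       # upper bound 1 and an empirical b_a
```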
With this assumption, we can obtain the convergence rate of our Algorithm 1.
Theorem 1 (Convergence Rate). Suppose Assumption 1 holds and let d = min{d_1, d_2}. For any ε > 0, let K = Ω(d b̄_a²); then with probability at least 1 − exp(−O(K/b̄_a²)), GD in Algorithm 1 with step size η = Θ(1/(dL)) finds a collection of coefficients such that
L_n(C^{(T)}) < ε, for T = Θ( dLKn b̄_a^{−2} ln(1/ε) ),
where b̄_a is the lower bound on the positive singular values of the training data A_i.
This is a linear convergence rate because the loss drops below ε exponentially fast in T. Recall that K and L come from the definition of our LD-Minv.
Remark 1. Different from the traditional theoretical results of DNNs, the randomness in our results mainly comes from a re-weighting strategy (see Sec. E) rather than initialization. In general, our results hold for any initialization strategy as long as the condition in Algorithm 1 is met. Notably, our results do not require the depth or the width to be in the polynomial order of data size n, which is a prevalent over-parameterization assumption for the current deep learning theories.
Remark 2. To present our convergence result in the simplest way, we mainly focus on the GD method with a fixed step size. It is straightforward to extend our results to more general settings, such as stochastic GD and dynamic step sizes.
4.2 GENERALIZATION
We characterize the generalization performance of LD-Minv trained by GD in this theorem.
Theorem 2 (Generalization Bound). Denote by P_A the distribution of A_i in Assumption 1. Suppose that α ≥ 1 and K^{2α−1/2} = Ω(√n); then with probability at least 1 − δ, the iterates C^{(t)} of Algorithm 1 satisfy
E_{A∼P_A}[ L_n( C^{(t)} ) ] ≤ L_n( C^{(t)} ) + Õ( min{ b̄_w² d³ L / K^{α/2}, √(d³K/n) } ) + O( √(ln(1/δ)/n) ),
for t = 0, 1, · · · , T, where E_{A∼P_A}[L_n] is the expected value of L_n under the probability measure P_A and Õ(·) hides log factors.
We adopt two ways to upper bound the empirical Rademacher complexity, which correspond to the cases K < n and K > n, respectively. The final bound is the minimum of the two (the middle term in the above upper bound).
Remark 3. The second term in our bound distinguishes our result from most previous work (Allen-Zhu et al., 2019; Arora et al., 2019; Yehudai & Shamir, 2019; Cao & Gu, 2019; Chen et al., 2019; Xie et al., 2020) on generalization bounds for over-parameterized neural networks. Specifically, most over-parameterization works focus on establishing bounds that do not explode as the network width goes to infinity; however, their bounds are exponential in the depth and may become vacuous when the depth is large. In contrast, our result covers a wider range of depths K, including both K < n and K > n. One may wonder whether our bound explodes as K approaches infinity. The answer is no: we observe a “double descent” trend to a certain extent, in that the generalization error bound first increases with the network depth K when K < n, and then starts to decrease as K becomes larger, even for K ≫ n.
5 CONCLUSION
We provide a differentiable and learnable way, named LD-Minv, to perform the matrix inverse using L-th order matrix polynomials. We then give theoretical guarantees on the convergence of training and on the generalization ability of LD-Minv in both the under-parameterized and the over-parameterized settings. Remarkably, we are the first to provide a rigorous analysis of the training process of differentiable learning-based methods. As an application of LD-Minv, we propose a learning-based optimization method for problems with non-convex orthogonality constraints and utilize it to differentiate SVD. A generalization guarantee for D-SVD is also provided.
APPENDIX
A EXPERIMENTAL RESULTS
A.1 EFFECTIVENESS VALIDATION
We conduct experiments on synthetic data to validate the effectiveness of our proposed LD-Minv and D-SVD. We run all experiments 10 times and report the average results.
A.1.1 EXPERIMENTS RESULTS FOR LD-MINV
Settings. For LD-Minv, we use both square and rectangular matrices to test its performance; note that LD-Minv adapts to the shape of the input matrix. In general, square matrices are a harder test for the algorithm, since the minimum singular value is near 0 in this case (Vershynin, 2010), and a large condition number of the input can make most numerical matrix-computation methods unstable. For a square matrix A ∈ R^{d×d}, its elements are sampled i.i.d. from a Gaussian, namely A_{i,j} ∼ N(0, 1/√d). For a rectangular matrix A ∈ R^{m×n̂}, its elements are also sampled from A_{i,j} ∼ N(0, 1/√d), where d = max{m, n̂}. We adopt SGD to optimize the parameters; our theoretical results in Sec. 4 hold for GD but extend easily to the SGD case. The learning rate is set to 1e-4 and is halved every 300 iterations. We fix the batch size to 100 and the number of iterations to 1200. Note that, in the experiments, the exact pseudo matrix inverse (Minv) used for comparison is the PyTorch implementation.
We adopt the following test loss to measure the performance:

Test Loss := (1/n) ∑_{i=1}^{n} ‖A_i X_{K_inv,i} A_i − A_i‖_F²,

where X_{K_inv,i} is the final output of D-Minv or LD-Minv. Here n = 100 is the size of the test set. With d = 100, we first test the influence of L and K, which correspond to the order of the matrix polynomial and the number of layers of LD-Minv, respectively. Due to the high precision of LD-Minv and D-Minv, for visualization we take the base-10 logarithm of the loss in this experiment.
Results. From Fig. 1(a), we can see that the number of layers K plays a more critical role for LD-Minv than the order L of the polynomial. When setting K = 7, the log of the test error drops from −1.0 for L = 4 to −4.6 for L = 9; however, the log of the test error drops from 0.32 for K = 2 to −4.6 for K = 7 when fixing L = 9. Hence, for efficiency, we suggest using a large K and a small L for LD-Minv.
Compared with D-Minv, according to Fig. 1(b) and Fig. 1(c), LD-Minv obtains better test performance even for large K and L, although D-Minv already converges to Minv extremely fast in this case. This result demonstrates that learning helps LD-Minv obtain better performance on a problem-dependent data distribution than merely fixing the coefficients ahead of time. We notice that LD-Minv and D-Minv reach similar precision when K > 15; the reason is that the approximation error of D-Minv is already on the order of 1e−8 in this case, which leaves very limited room for improvement by learning.
Fig. 1(b) further verifies the importance of K. As shown in our theoretical results (Theorem 1 and Theorem 2), a large K indicates a smaller generalization bound and a higher probability of obtaining the linear convergence rate in Algorithm 1.
Another significant advantage of our proposed method is efficiency: D-Minv and LD-Minv can invert the input matrix with high precision at low computational cost, and the benefit is most apparent when the matrix dimension is large. As shown in Table 1, even for large K and L, the inference time of LD-Minv is less than a tenth of the time of the exact matrix inverse when the size is 1000 × 1500. It is well known that Minv and matrix multiplication (MatMul) share the same computational complexity in theory, while their speeds differ greatly in practice. The computation of our method is mostly spent on MatMul, which is easily accelerated on a GPU and is much cheaper than Minv. MatMul and Minv only share the same complexity asymptotically; the constants hidden in the O(·) notation differ significantly, especially as the matrix dimension d → ∞. Moreover, the general MatMul algorithm with complexity O(d³) is not optimal, and there exist algorithms with complexity of only about O(d^{2.3}).
To further investigate the importance of the learning procedure, we conduct experiments on matrices with different condition numbers. Given a condition number κ ≥ 1, we generate the data matrices by A_i = U_i S_i V_i^⊤, where U_i and V_i are randomly sampled from a 100-dimensional Stiefel manifold and S_i is a diagonal matrix whose diagonal entries are i.i.d. and follow the uniform distribution U(1/κ, 1) on the interval [1/κ, 1]. We let K = 4, L = 10 and the batch size be 100, and show the results in Table 2. Not surprisingly, D-Minv suffers from ill-conditioning: a large condition number deteriorates its performance. LD-Minv, however, is not sensitive to the condition number and easily beats D-Minv when the condition number is large. Note that all results are reported on unseen data.
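A small sketch of this data-generation step is given below; pinning the two extreme singular values (so that the condition number is exactly κ) is an assumption added here to make the check exact, while the paper samples all entries uniformly.

```python
import torch

def random_with_condition(d, kappa, n):
    """Sample n d-by-d matrices A_i = U_i S_i V_i^T with singular values in [1/kappa, 1]."""
    U, _ = torch.linalg.qr(torch.randn(n, d, d, dtype=torch.float64))
    V, _ = torch.linalg.qr(torch.randn(n, d, d, dtype=torch.float64))
    s = 1.0 / kappa + (1.0 - 1.0 / kappa) * torch.rand(n, d, dtype=torch.float64)
    s[:, 0], s[:, -1] = 1.0 / kappa, 1.0        # pin the extremes so cond(A_i) = kappa
    return U @ torch.diag_embed(s) @ V.transpose(-1, -2)

A = random_with_condition(d=100, kappa=1000.0, n=4)
s = torch.linalg.svdvals(A)
print(s.amax(dim=-1) / s.amin(dim=-1))          # empirical condition numbers, all ~ kappa
```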
A.1.2 EXPERIMENTAL RESULTS FOR D-SVD
Settings. For D-SVD, we use rectangular matrices in the experiments; the algorithm works for square matrices as well. For a rectangular matrix A ∈ R^{m×n̂}, the elements are sampled i.i.d. from a Gaussian, namely A_{i,j} ∼ N(0, 1/√d), where d = max{m, n̂}. We let m = 50 and n̂ = 100, and use Algorithm 3 to solve our learning problem5. We perform batch training with batch size 30. The learning rate is set to 1e−2 for D-SVD and 1e−4 for the LD-Minv modules contained in D-SVD. We do not decay the learning rate during training and let the numbers of iterations for D-SVD and LD-Minv be 200 and 100, respectively. For the LD-Minv in each layer, we let L = 4 and K = 6.
Same as Sec. 3, we adopt the following loss to train our D-SVD:
Training Loss := (1/(nK_svd)) ∑_{i=1}^{n} ∑_{k=1}^{K_svd} ( (1/2) ‖Λ_{i,k} − Diag(Λ_{i,k})‖_F² − ⟨ M_i, U_{i,k} V_{i,k}^⊤ ⟩ ),

where U_{i,k} and V_{i,k} are the k-th layer's outputs of D-SVD and Λ_{i,k} := U_{i,k}^⊤ M_i V_{i,k}. The reason we introduce the regularization term ‖U_{i,k}^⊤ M_i V_{i,k} − Diag(U_{i,k}^⊤ M_i V_{i,k})‖_F² is non-uniqueness: any pair (U, V) ∈ { (ŨR, ṼR) : M = Ũ S̃ Ṽ^⊤, R^⊤R = RR^⊤ = I } maximizes ⟨M, UV^⊤⟩. However, in order to obtain the exact singular vectors of M, the matrix U_{i,k}^⊤ M_i V_{i,k} should have zero off-diagonal entries. Thus, the regularization term is necessary for the training of D-SVD.
We use the following test loss to report the results:

Test Loss := (1/n) ∑_{i=1}^{n} ( (1/2) ‖Λ_{i,K_svd} − Diag(Λ_{i,K_svd})‖_F² − ⟨ M_i, U_{i,K_svd} V_{i,K_svd}^⊤ ⟩ ),

where U_{i,K_svd} and V_{i,K_svd} are the final layer's outputs of D-SVD, and Λ_{i,K_svd} := U_{i,K_svd}^⊤ M_i V_{i,K_svd}.
where Ui,Ksvd and Vi,Ksvd is the final layer’s output of D-SVD, and Λi,Ksvd := U > i,Ksvd MiVi,Ksvd . Note that we also utilize the following quantity to measure the test performance.
SVD MSE := (1/n) ∑_{i=1}^{n} ‖ Sort( Diag( U_{i,K_svd}^⊤ M_i V_{i,K_svd} ) ) − SVD(M_i) ‖_F²,

where Sort(·) sorts the input elements in descending order, Diag(·) returns the diagonal entries of the input matrix, and SVD(·) returns the singular values.
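A short sketch of this metric is given below; the matrix sizes and the use of the exact SVD factors (for which the metric should be numerically zero) are assumptions of the check.

```python
import torch

def svd_mse(M, U, V):
    """Compare the sorted diagonal of U^T M V against the true singular values."""
    approx = torch.diagonal(U.transpose(-1, -2) @ M @ V, dim1=-2, dim2=-1)
    approx = torch.sort(approx, dim=-1, descending=True).values
    true = torch.linalg.svdvals(M)[..., : approx.shape[-1]]
    return ((approx - true) ** 2).sum(dim=-1).mean()

M = torch.randn(4, 50, 100, dtype=torch.float64)
U_true, _, Vh = torch.linalg.svd(M, full_matrices=False)
V_true = Vh.transpose(-1, -2)
print(float(svd_mse(M, U_true[..., :10], V_true[..., :10])))   # ~ 0 for the exact factors
```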
For convenience, we add the mean of the sum of SVD(M_i) to the loss in Table 3 to make the loss non-negative. Although increasing K_svd also improves the performance in the non-learning case, learning strengthens the role of K_svd: without training, the test loss for K_svd = 60 is one eighth of the loss for K_svd = 1, whereas after training the reduction reaches one thousandth. Thanks to the regularizer introduced in the training loss, we obtain a small SVD MSE rather than merely a small difference between the sums of singular values.
5We replace the GD part by SGD for the generality of the experiment.
During the training of D-SVD, we observe that the results are robust to the settings of K and L. In general, the forward process can converge even with a very inexact matrix inverse (Li et al., 2019). Moreover, the introduced learnable step sizes {t̃_{U_k}, t̃_{V_k}} provide additional freedom to adjust the update directions for U_k and V_k. Thus, for efficiency, relatively small K and L suffice for each layer's LD-Minv.
A.1.3 PLUG-AND-PLAY: DIFFERENTIABLE NON-BLIND DECONVOLUTION
Non-blind deconvolution (NbD) aims to restore the latent image z from corrupted observation y with known blur kernel b. In this experiment, we consider a well-known sparse coding formulation:
y = b ⊗ z + n = B W^⊤ x + n,
where ⊗ is the convolution operator, x and n are the sparse code and the unknown noise, respectively, B is the matrix form of the kernel b, and W^⊤ is the inverse of the wavelet transform W (i.e., x = Wz and z = W^⊤x). We solve the non-blind deconvolution via the following optimization problem:
min_{x,z} g(x, z) := (1/2) ‖y − Bz‖_2² + λ ‖x‖_1, s.t. z = W^⊤ x,

where λ > 0 is the balance parameter. A usual way to solve this problem is linearized ADMM (L-ADMM) (Lin et al., 2011), which reads:

a_k = z_k − α_k ( λ_k + β ( z_k − W^⊤ x_k ) ),
z_{k+1} = ( α_k B B^⊤ + I )^{-1} ( a_k + α_k B^⊤ y ),
b_k = x_k − γ_k W^⊤ ( λ_k + β ( z_{k+1} − W^⊤ x_k ) ),
x_{k+1} = sgn(b_k) ⊙ max( |b_k| − γ_k λ, 0 ),
λ_{k+1} = λ_k + β ( z_{k+1} − W^⊤ x_{k+1} ), (13)

where α_k > 0, γ_k > 0 and β > 0 are penalty parameters, ⊙ is the Hadamard product, and λ_k denotes the Lagrange multiplier.
Differentiable NbD. Inspired by Eq. (13), we can easily obtain a learning-based non-blind deconvolution method:

a_k = z_k − α_k Ĩ_k ( λ_k + β ( z_k − W^⊤ x_k ) ),
T_k = LD-Minv( α_k B B^⊤ + I, C_k ), z_{k+1} = T_k ( a_k + α_k B^⊤ y ),
b_k = x_k − γ_k W̃_k ( λ_k + β ( z_{k+1} − W^⊤ x_k ) ),
x_{k+1} = ReLU( b_k − γ_k λ ) − ReLU( −b_k − γ_k λ ),
λ_{k+1} = λ_k + β ( z_{k+1} − W^⊤ x_{k+1} ), (14)

where S_NbD := {Ĩ_k, W̃_k, α_k, γ_k, β}_{k=1}^{K_NbD} is the collection of all learnable parameters, Ĩ_k and W̃_k are introduced learnable matrices (in the implementation, we choose them as convolution layers of compatible size), C_k collects the learnable parameters of each layer's LD-Minv, and ReLU(a) = max{a, 0} is the Rectified Linear Unit (ReLU). Note that
sgn(b) ⊙ max( |b| − γ, 0 ) = ReLU( b − γ ) − ReLU( −b − γ ).

Algorithm 4 Joint Training for D-NbD and LD-Minv
Input: Training data {y_i}_{i=1}^n, numbers of iterations T_NbD and T_inv, step sizes η_NbD and η_inv.
1: Set x_{i,0} = y_i, z_{i,0} = W^⊤ x_{i,0} and λ_0 = 0.
2: for t = 1, · · · , T_NbD do
3:   Fix the learnable parameters S_NbD^{(t)} and train each layer's LD-Minv by Algorithm 1.
4:   Update S_NbD^{(t+1)} = S_NbD^{(t)} − η_NbD · D_{[S_NbD=S_NbD^{(t)}]} Q_n, where Q_n is given in Eq. (15).
5: end for
Output: {z_{i,k}}_{i=1,k=1}^{n,K_NbD}.
Eq. (14) can be implemented by a DNN. Given the training images {y_i}_{i=1}^n, we adopt the following loss to train our D-NbD:

Q_n(S_NbD) := (1/(nK_NbD)) ∑_{i=1}^{n} ∑_{k=1}^{K_NbD} g( x_{i,k}, W^⊤ x_{i,k} ), (15)
where xi,k is the k-th layer’s output of D-NbD. The training process for D-NbD is given in Algorithm 4.
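To illustrate one unrolled step, the snippet below runs a single (non-learnable) iteration of the scheme on a generic matrix problem; the soft-thresholding is written with two ReLUs as in Eq. (14), an exact solve stands in for LD-Minv, and the matrices, the orthogonal stand-in for W^⊤, and all parameter values are assumptions of this sketch.

```python
import torch
import torch.nn.functional as F

def soft_threshold(b, tau):
    """sgn(b) * max(|b| - tau, 0) written with two ReLUs, as in the x-update of Eq. (14)."""
    return F.relu(b - tau) - F.relu(-b - tau)

torch.manual_seed(0)
d = 64
B = torch.randn(d, d, dtype=torch.float64) / d ** 0.5
Wt, _ = torch.linalg.qr(torch.randn(d, d, dtype=torch.float64))   # stand-in for W^T
y = torch.randn(d, dtype=torch.float64)
x = torch.zeros(d, dtype=torch.float64)
z, lam_mul = Wt @ x, torch.zeros(d, dtype=torch.float64)
alpha, gamma, beta, lam = 1.0, 0.5, 1.0, 0.1

a = z - alpha * (lam_mul + beta * (z - Wt @ x))
# exact solve here; the learnable version replaces it with LD-Minv(alpha * B B^T + I)
z = torch.linalg.solve(alpha * B @ B.t() + torch.eye(d, dtype=torch.float64), a + alpha * B.t() @ y)
b = x - gamma * Wt @ (lam_mul + beta * (z - Wt @ x))
x = soft_threshold(b, gamma * lam)
lam_mul = lam_mul + beta * (z - Wt @ x)
print(float(0.5 * (y - B @ z).pow(2).sum() + lam * x.abs().sum()))  # objective g(x, z)
```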
Settings. The training and test images come from Schmidt & Roth (2014); 300 randomly selected images are used for training and the rest for testing. We add different levels of Gaussian noise to generate the corrupted observations {y_i}_{i=1}^n. We set the patch size to 160 × 160 and the batch size to 15. Finally, Adam is run for 1000 epochs with the learning rate ranging from 1e-4 to 1e-6 (halved every 140 iterations). The observations {y_i} are corrupted by blur kernels with sizes ranging from 17 × 17 to 57 × 57 and Gaussian noise levels varying from 1% to 3%.
Results. From Table 4, we find that fL-ADMM is much more efficient than aL-ADMM: fL-ADMM does not need to perform the matrix inverse since α_k, γ_k are fixed, which, however, is also why it performs poorly, as it cannot find universal α_k, γ_k for all test images. aL-ADMM performs better with adaptive step sizes, but it needs an exact Minv in each iteration. Compared with both, D-NbD outperforms L-ADMM by a large margin in terms of speed and performance. Note that, for one iteration, the computational complexity of fL-ADMM is much lower than that of the other three methods since it only involves MatMul; however, it consumes more than 600 iterations to obtain a relatively good solution. In fact, comparing only the inversion time per iteration of aL-ADMM and D-NbD, our method's time is about half of that of the exact Minv in PyTorch, and the learnability of D-NbD helps us obtain a better or equally good solution using one order of magnitude fewer iterations; hence, the total time is only about one tenth of that of aL-ADMM. Moreover, the learnability of LD-Minv introduces additional flexibility, which allows D-NbD(LD-Minv) to achieve better results within fewer layers, making it more efficient due to less computation.
A.2 DISCUSSION
Beyond D-NbD, LD-Minv and D-SVD enable many other potential applications, including image recovery and denoising, numerical algorithm acceleration, blind deconvolution, sparse coding, subspace clustering, etc. These compelling applications are beyond the scope of this paper, and we leave them as future work. Moreover, thanks to differentiability, D-SVD allows us to design richer singular-value-related training losses for DNNs, which was impossible in the previous non-differentiable situation.
B NOTATION
A positive definite matrix A is denoted by A ≻ 0. The transpose of a matrix A ∈ R^{m×n} is A^⊤. We denote by A^{-1} and A† the matrix inverse and the Moore–Penrose inverse, respectively. The Euclidean inner product between two matrices A ∈ R^{m×n} and B ∈ R^{m×n} is defined as ⟨A, B⟩ := Tr(A^⊤B), where Tr(·) is the trace of a matrix. κ(A) := ‖A†‖‖A‖ is the generalized condition number of a matrix, where ‖·‖ is the spectral norm. We denote by σ(A) the vector of singular values (not necessarily ordered) of the matrix A, and by σ_+(A) the vector of its positive singular values.
Let ‖·‖_F be the Frobenius norm. Given a differentiable function F, its first and second derivatives at Y are denoted by the linear operator D_{[X=Y]}F(·) : R^m → R^n := (∂F_i(Y)/∂X_j)(·) and the quadratic operator D²_{[X=Y]}F(·,·) : R^m × R^m → R^n := (∂²F_i(Y)/∂X_j∂X_k)(·,·), respectively, where X_i is the i-th entry of X. We also denote by F^{-1} the inverse of the function F and let the composition map be f ∘ g(x) := f(g(x)). ⊙ is the Hadamard product and [K] = {1, 2, · · · , K}.
C ADDITIONAL THEORETICAL RESULTS
In this section, we provide additional theoretical results for D-Minv and D-SVD. We start with a warm-up case for D-Minv.
C.1 RESULTS FOR LD-MINV
C.1.1 WARM-UP: ONE PARAMETER FOR ONE LAYER
As a warm-up case, we consider learning only one coefficient per layer, as a special case of learnable D-Minv:

X_{k+1} = X_k ( ∑_{l=0}^{L−1} E_k^l + C_k E_k^L ), E_k = I − AX_k, (16)

where C_k ∈ R are the learnable coefficients. Setting X_0 = (1/c_0) A^⊤ with c_0 > ‖A‖² and C_k = 0 for all k, Lemma 1 gives ‖AA† − AX_k‖ = e_0^{L^k}. One important question is: if we make the coefficient of E_k^L learnable, will D-Minv obtain a faster convergence rate through training?
In general, previous works show that learning-based methods share the same convergence order as their fixed versions, but with better statistical constants. Interestingly, beyond the constants, we find that D-Minv can improve the order of convergence by learning.
Lemma 2 (Learnability Matters). Assume there is only one training datum A. Provided that X_0 = (1/c_0) A^⊤ with c_0 > ‖A‖², we have:
e_{k+1} = (1 − C_k) · e_k^L + C_k · e_k^{L+1}, where e_k = ‖AA† − AX_k‖, k ∈ [K], L ≥ 4.
Moreover, the gradient of L w.r.t. C_k is negative, i.e., D_{[C_k]}L < 0 for all C_k ∈ [0, 1 + ε], where ε > 0 depends on e_{k−1}. Additionally, D_{[C_k]}L ≈ 0 for k ∈ [K − 1] when C_k ∈ [1 − ε, 1 + ε].
Starting from 0, by the negativity of the gradient, the coefficients C_k for k ∈ [K − 1] converge to the range [1 − ε, 1 + ε] when a proper step size is used for GD. In this case, i.e., when C_k is around 1 for all k, the sequence {X_k} converges to the Moore–Penrose inverse with order L̃, where L < L̃ ≤ L + 1. In other words, by learning from a single training example, D-Minv finds a way to improve the order of convergence for any input. Remarkably, C_k = 1 for all k is not a global or even a local minimum of the training loss L. For the learnable coefficient C_K at the last layer, satisfying its first-order condition usually implies obtaining the exact solution, i.e., e_{K+1} = 0. In summary, learnable D-Minv not only improves the convergence speed but also makes exact fitting possible.
C.1.2 DISCUSSION FOR ASSUMPTION 1
In general, the boundedness assumption on the singular values is weak and easy to satisfy. For example, the singular values of matrices with i.i.d. zero-mean, unit-variance entries asymptotically obey the quarter-circle law (Dozier & Silverstein, 2007; Shen, 2001), i.e., the density is (1/π)√(4 − σ²) on σ ∈ [0, 2], which implies that the singular values have an upper bound. On the other hand, a well-known fact from random matrix theory is that a zero-mean, unit-variance random matrix is non-singular with high probability: even for a square matrix A, with probability at least 1 − δ we have σ_min(A) ≥ δ d^{−1/2} (Rudelson & Vershynin, 2008), where σ_min(A) is the smallest singular value of A.
C.1.3 LIPSCHITZ SMOOTHNESS BEFORE GENERALIZATION
We would also like to remark that our generalization result can easily be extended to the stochastic case. The proof is the same as the Lemma 4.3 in Ji & Telgarsky (2019) or Theorem 3.3 in Cao & Gu (2019).
In general, smoothness plays an important role in generalization analysis: the covering number of a smooth function class is usually small. We start by showing the local smoothness of learnable D-Minv.
Proposition 2 (Local Lipschitz Smoothness). Define the set of coefficient collections
C* := { C | ∀ C_{k,l} ∈ C, |C_{k,l} − 1| ≤ 1/8 }, (k, l) ∈ [K] × [L], L ≥ 4.
Let P_A = AA†. Given any coefficient collection C ∈ C*, denote by G_k(·) the map from P_A E_k to P_A E_{k+1} in Eq. (3), i.e., G_k(E) := P_A − P_A (I − E)( ∑_{l=0}^{L−1} C_{k,l} E^l ), and let H_k(·) := R ∘ G_{K−1} ∘ · · · ∘ G_k(·), where R(E) := ∑_{k̂=k}^{K} ‖W_k̂ E_k̂ A‖_F². Suppose ‖A‖² ≤ 1 and ‖W_k‖ = O(1) for all k; then we have
| H_k(Ê) − H_k(Ẽ) | ≤ 2 ‖ P_A Ê − P_A Ẽ ‖_*, ∀k ∈ [K],
where Ê and Ẽ have the same size and the same norm upper bound as E_k, d is the dimension of the positive singular value vector σ_+(A), and ‖·‖_* is the matrix nuclear norm.
In general, the composition of two Lipschitz continuous functions has a worse Lipschitz constant; however, our LD-Minv enjoys a consistent constant across all layers. This is important because layer-wise local smoothness usually implies a small covering number for the whole D-Minv (Bartlett et al., 2017; Wei & Ma, 2019), which in turn yields a tight generalization bound.
C.2 RESULTS FOR D-SVD
We characterize the generalization performance of D-SVD trained by GD in this theorem. First of all, we present several definitions. Let
M_n(t̃, C) := (1/(nK_svd)) ∑_{i=1}^{n} ∑_{k=1}^{K_svd} ⟨ M_i, U_{i,k} V_{i,k}^⊤ ⟩,
where (t̃,C) are the collection of the parameters of D-SVD and the LD-Minvs in each layer. Denote by f(U | V,M, t̃k,Ck) the operations in Eq. (12) that map Uk to Uk+1, i.e.,
Uk+1 := f(Uk | Vk,M, t̃k,Ck), (17)
where (t̃_k, C_k) ∈ R × R^{LK_inv} are the learnable parameters of D-SVD and LD-Minv at the k-th layer of D-SVD. We define the coefficient set
C_k := [ −ε_u/2, +ε_u/2 ] × { C : ‖C − C^{(0)}‖_F = Õ( 1/√K_inv ) },
where ε_u := min_i 1/‖P_i‖, in which P_i = M_i V_k U_k^⊤ − U_k V_k^⊤ M_i^⊤, ∀i ∈ [n].
Proposition 3 (Covering Number of a Single Layer). Denote by F_k the class of functions in Eq. (17), i.e., F_k := { f(U_k | V_k, M, t̃_k, C_k) : (t̃_k, C_k) ∈ C_k }. Then we have
ln N(F_k, ε, ‖·‖) = O( LK_inv ln( 1/(ε√K_inv) ) + ln( ‖M‖/ε ) ).
Theorem 3 (Generalization Bound for D-SVD). Denote by P_M the distribution of M_i and let d := max{m, n̂}. Suppose that ‖M_i‖ ≤ 1, ∀i ∈ [n], and K_inv = Ω( max{nd, K_svd²} ); then with probability at least 1 − δ, the iterates t̃^{(t)} of Algorithm 3 satisfy
E_{M∼P_M}[ M_n( t̃^{(t)} ) ] ≤ M_n( t̃^{(t)} ) + Õ( √( K_svd³ L / n ) ) + O( √( ln(1/δ)/n ) ),
for t = 0, 1, · · · , T_svd, where E_{M∼P_M}[M_n] is the expected value of M_n under the probability measure P_M and Õ(·) hides log factors.
D OMITTED PROOF FOR FIXED D-MINV IN SECTION 2
D.1 PROOF OF LEMMA 1
We first verify that e_0 < 1. Suppose rank(A) = r. Then
e_0 = ‖AA† − AX_0‖ = ‖U U^⊤ − (1/c) U Σ² U^⊤‖ = ‖I_r − (1/c) Σ²‖ < 1,
where A = UΣV^⊤, I_r ∈ R^{r×r} is the identity matrix, and the last inequality follows from the fact that c > (1/2)‖A‖². From Eq. (2), we have
AA† − AX_{k+1} = AA†(I − AX_{k+1}) = AA†( I − AX_k ( ∑_{l=0}^{L−1} E_k^l ) )
= AA†( I − (I − E_k)( ∑_{l=0}^{L−1} E_k^l ) ) = AA†( I − ∑_{l=0}^{L−1} E_k^l + ∑_{l=0}^{L−1} E_k^{l+1} )
= AA†( E_k^L ) = AA†(I − AX_k)^L = ( AA† − AX_k )^L,
i.e., ‖AA† − AX_k‖ = ‖AA† − AX_{k−1}‖^L = e_0^{L^k}. We finish the proof.
D.2 PROOF OF PROPOSITION 1
Note that for any matrix X, a matrix polynomial in X shares the same column and row spaces as X; in general, the iteration (2) does not change the column and row spaces from the beginning. We prove the claim by induction. The conclusion is obvious for X_0 since X_0 = (1/c_0) A^⊤. Assume the proposition holds for X_k. Then
A†AX_{k+1} = A†AX_k ( ∑_{l=0}^{L−1} E_k^l ) = X_k ( ∑_{l=0}^{L−1} E_k^l ) = X_{k+1},
and
X_{k+1}AA† = X_k ( ∑_{l=0}^{L−1} E_k^l ) AA† = X_k ( ∑_{l=0}^{L−1} (I − AX_k)^l ) AA†
= X_k AA† ( ∑_{l=0}^{L−1} (I − AX_k)^l ) = X_{k+1}.
Hence, we conclude that A†AX_k = X_k and X_kAA† = X_k for all k. We finish the proof.
E OMITTED PROOF FOR LEARNABLE D-MINV IN SECTIONS 4 AND C
In the theoretical part, we consider a more general training regression loss:
L_n(C) := (1/(2nK)) ∑_{i=1}^{n} ∑_{k=1}^{K} ‖ V_{k,i} ( A_i X_{k,i} A_i − A_i ) ‖_F², (18)
where C = {C_{k,l}, (k, l + 1) ∈ [K] × [L]} is the collection of all learnable parameters and V_{k,i} is a diagonal matrix whose diagonal entries are i.i.d. and follow a zero-mean bounded distribution (w.l.o.g. we let the bound be ±b̄_v). Note that the V_{k,i} are fixed during training and testing. Introducing V_{k,i} has two advantages. One is re-weighting the error term: as we can see from fixed D-Minv, the error term decays exponentially, so the smallest positive singular value would dominate the gradient flow; for LD-Minv, we instead want the gradient flow to come from the losses of all positive singular values during training, and random re-weighting is a good numerical choice when the order of the singular values is unknown. The other advantage is that the introduced V_{k,i}, with very high probability (see Claim 1 in Sec. E.2), make the landscape of the training loss locally strongly convex, which is an excellent benefit from the optimization perspective.
Notably, by setting the diagonal entries of V{k,i} as the independent symmetric Bernoulli random variables, the training loss here in Eq. (18) will degenerate into the loss given in Eq. (4). Hence, all the theoretical results in this section proven for Eq. (18) also hold for Eq. (4).
It is worth mentioning that the weight matrices {V_{k,i}} here have nothing to do with the learned singular-vector factors {V_{i,k}} of D-SVD; please distinguish them.
E.1 GENERAL ANALYSIS
Before starting the proof, we first make some general observations. It is easy to see that Proposition 1 also holds for learnable D-Minv; thus,
X_k E_k^l = X_k AA† E_k^l, ∀(k, l).
Hence, we obtain an equivalent form of Eq. (3):
X_{k+1} = X_k ( ∑_{l=0}^{L−1} C_{k,l} Ẽ_k^l ), Ẽ_k = AA† − AX_k, L ≥ 4, (19)
Due to the invariance of the row and column spaces, we may focus only on the singular value vectors and rewrite Eq. (19) as
x_{k+1} = x_k ⊙ ( ∑_{l=0}^{L−1} C_{k,l} e_k^l ), e_k = 1 − a ⊙ x_k, L ≥ 4, (20)
where 1 ∈ R^d is the all-one vector (d is the dimension of the positive singular value vector σ_+(A)), and
a = σ_+(A), x_k = σ_+(X_k), e_k = σ_+(Ẽ_k), e_k^l = e_k ⊙ · · · ⊙ e_k (l times); (21)
recall that σ(A) denotes the vector of singular values (not necessarily ordered) of A.
Remark 4. We only utilize Eq. (19), the equivalent form of Eq. (3), in this section for the convenience of theoretical analysis. For implementation, we still use Eq. (3) and do NOT calculate the pseudo inverse A† throughout learning.
Now we continue the derivation:
e_{k+1} = 1 − a ⊙ x_{k+1} = 1 − a ⊙ x_k ⊙ ( ∑_{l=0}^{L−1} C_{k,l} e_k^l )
= 1 − (1 − e_k) ⊙ ( ∑_{l=0}^{L−1} C_{k,l} e_k^l ) = 1 − ( ∑_{l=0}^{L−1} C_{k,l} e_k^l − ∑_{l=0}^{L−1} C_{k,l} e_k^{l+1} )
= e_k^L + (1 − C_{k,0}) 1 + ( ∑_{l=1}^{L−1} ( C_{k,l−1} − C_{k,l} ) e_k^l ) + ( C_{k,L−1} − 1 ) e_k^L
= e_k^L + Ê_k c_k, (22)
where e_k^0 = 1, Ê_k ∈ R^{d×(L+1)} := [ e_k^0; e_k^1; · · · ; e_k^L ], (23)
and
c_k ∈ R^{L+1} := [ 1 − C_{k,0}, C_{k,0} − C_{k,1}, · · · , C_{k,L−2} − C_{k,L−1}, C_{k,L−1} − 1 ]^⊤. (24)
We define:
ℓ(C, A) := (1/2) ∑_{k=1}^{K} ‖ V_k ( A X_k A − A ) ‖_F² = (1/2) ∑_{k=1}^{K} ‖ v_k ⊙ ( a² ⊙ x_k − a ) ‖_2² = (1/2) ∑_{k=1}^{K} ‖ v_k ⊙ a ⊙ e_k ‖_2², (25)
where C = {C_{k,l}, (k, l + 1) ∈ [K] × [L]} is the collection of all learnable parameters of D-Minv, V_k is a diagonal matrix whose diagonal entries v_k = diag(V_k) are i.i.d. and follow a zero-mean bounded distribution (w.l.o.g. we let the bound be ±b̄_v), and ‖·‖_2 is the vector ℓ2 norm; recall the definitions of a, e_k and x_k from Eq. (21). Note that v_k is fixed for all k during training. From Lemma 1, we know that ‖AA† − AX_k‖ decays extremely fast, so we may run fixed D-Minv for several iterations before training learnable D-Minv; in this case, X_0 is the output of a K̃-layer fixed D-Minv. Hence, without loss of generality (w.l.o.g.), we suppose the following assumption holds in this section.
Assumption 2 (Well-Bounded E_0). Assume ‖Ẽ_0‖ = ‖e_0‖_∞ ≤ 1/2.
We first provide several propositions about learnable D-Minv.
Proposition 4 (Upper Bound for the Perturbation). Suppose that |C_{k,l} − 1| = δ < 1/8, ∀(k, l + 1) ∈ [K] × [L]. Then
‖Ẽ_k‖ = ‖e_k‖_∞ ≤ e_0^{L^k} + (17/8) δ, ∀k ∈ [K].
Proof. We first show that ‖e_k‖_∞ < 1/2 for all k by induction. By Eq. (22), for k = 0 we have
‖e_1‖_∞ ≤ ‖e_0‖_∞^L + ‖Ê_0 c_0‖_∞ ≤ ‖e_0‖_∞^L + max_{1≤i≤d} ∑_{j=1}^{L+1} |[Ê_0]_{i,j}| ‖c_0‖_∞ ≤ ‖e_0‖_∞^L + 2δ < 1/2.
Now assume ‖e_k‖_∞ < 1/2 holds. Then
‖e_{k+1}‖_∞ ≤ ‖e_k‖_∞^L + ‖Ê_k c_k‖_∞ ≤ ‖e_k‖_∞^L + max_{1≤i≤d} ∑_{j=1}^{L+1} |[Ê_k]_{i,j}| ‖c_k‖_∞ ≤ ‖e_k‖_∞^L + 2δ < 1/2.
Since δ < 1/8, we have 2δ < 1/4. By Lemma 3, we can conclude that
‖e_k‖_∞ ≤ e_0^{L^k} + (1 + ε) 2δ, where 1/16 > ε → 0 as k → ∞.
We finish the proof.
Proposition 5 (Upper Bound on the First-Order Derivative). Suppose that |C_{k,l} − 1| = δ < 1/8, ∀(k, l + 1) ∈ [K] × [L], and ‖a‖_∞ ≤ 1. Then
| ∂ℓ(C, A) / ∂C_{k,l} | ≤ 4 (1/3)^l b̄_v² d.
Proof. Note that
∂ℓ(C, A)/∂C_{k,l} = ∑_{k̂=k}^{K} ⟨ ∂ℓ(C, A)/∂e_k̂, (∂e_k̂/∂e_{k̂−1}) ∘ · · · ∘ (∂e_{k+1}/∂C_{k,l}) ⟩.
By Eq. (22), we also have
∂e_{k+1}/∂e_k = Diag( L e_k^{L−1} + Ê′_k c_k ),
where Diag(e) is the diagonal matrix with diagonal entries e, and Ê′_k ∈ R^{d×(L+1)} := [ 0; e_k^0; · · · ; L e_k^{L−1} ], with 0 ∈ R^d the all-zero vector. Hence, it holds that
‖∂e_{k+1}/∂e_k‖ = ‖ L e_k^{L−1} + Ê′_k c_k ‖_∞ ≤ L e_k^{L−1} + ( e_k / (1 − e_k)² ) δ, (26)
where e_k = ‖e_k‖_∞. By Proposition 4, we know that e_k ≤ e_0^{L^k} + (17/8) δ < 1/3. Thus, we have
‖∂e_{k+1}/∂e_k‖ ≤ 1/4. (27)
It is easy to see that
∂ℓ(C, A)/∂e_k = ∑_{k̂=k}^{K} ⟨ ∂ℓ(C, A)/∂e_k̂, (∂e_k̂/∂e_{k̂−1}) ∘ · · · ∘ (∂e_{k+1}/∂e_k) ⟩.
Hence, we have
‖∂ℓ(C, A)/∂e_k‖_2 ≤ ∑_{k̂=k}^{K} ‖ v_k̂ ⊙ a ⊙ ( v_k̂ ⊙ a ⊙ e_k̂ ) ‖_2 · 4^{k−k̂} < 2 b̄_v² √d.
According to Eq. (22), we notice that
‖∂e_{k+1}/∂C_{k,l}‖_2 = ‖ e_k^{l+1} − e_k^l ‖_2 ≤ 2 ‖ e_k^l ‖_2.
Combining everything, we conclude that
| ∂ℓ(C, A)/∂C_{k,l} | ≤ ∑_{k̂=k}^{K} 4^{k−k̂} ( 4 b̄_v² √d ‖ e_k^l ‖_2 ) ≤ 4 (1/3)^l b̄_v² d.
We finish the proof.
Proposition 6 (Second-Order Approximation). Define the set of coefficient collections around 1:
C* := { C | ∀ C_{k,l} ∈ C, |C_{k,l} − 1| ≤ 1/8 }, (k, l) ∈ [K] × [L], L ≥ 4.
For any C_1, C_2 ∈ C*, we have:
ℓ(C_1, A) = ℓ(C_2, A) + ⟨ ∂ℓ(C_2, A)/∂C, C_1 − C_2 ⟩ + O( b̄_v² d K L ) ‖C_1 − C_2‖_2².
Proof. Note that for any C ∈ C*, we have
∂²ℓ(C, A) / (∂C_{k,l} ∂C_{k′,l′})
= ⟨ ∂ℓ(C, A)/∂e_K, ∂²e_K/(∂C_{k,l} ∂C_{k′,l′}) ⟩ + ⟨ ∂e_K/∂C_{k,l}, (∂²ℓ(C, A)/(∂e_K ∂e_K)) ∂e_K/∂C_{k′,l′} ⟩
= ⟨ ∂ℓ(C, A)/∂e_K, (∂e_K/∂e_{K−1}) ∘ ∂²e_{K−1}/(∂C_{k,l} ∂C_{k′,l′}) ⟩
+ ⟨ ∂ℓ(C, A)/∂e_K, (∂²e_K/(∂e_{K−1} ∂e_{K−1}))( ∂e_{K−1}/∂C_{k,l}, ∂e_{K−1}/∂C_{k′,l′} ) ⟩
+ ⟨ ∂e_K/∂C_{k,l}, (∂²ℓ(C, A)/(∂e_K ∂e_K)) ∂e_K/∂C_{k′,l′} ⟩
= ∑_{k̂=max{k,k′}}^{K} ⟨ ∂ℓ(C, A)/∂e_K, (∂e_K/∂e_{K−1}) ∘ · · · ∘ (∂²e_{k̂+1}/(∂e_k̂ ∂e_k̂))( ∂e_k̂/∂C_{k,l}, ∂e_k̂/∂C_{k′,l′} ) ⟩ + ∆(k, k′, l, l′),
where we let e_{K+1} := ℓ(C, A) with a slight abuse of notation, and:
∆(k, k′, l, l′) = 0 if (k, l) = (k′, l′), and
∆(k, k′, l, l′) = ⟨ ∂ℓ(C, A)/∂e_K, (∂e_K/∂e_{K−1}) ∘ · · · ∘ ∂( e_k^{l+1} − e_k^l )/∂C_{k′,l′} ⟩ otherwise.
Hence we can upper bound the entry ∂²ℓ(C, A)/(∂C_{k,l} ∂C_{k′,l′}). From the proof of Proposition 5, we know that
‖∂e_{k+1}/∂e_k‖ ≤ 1/4, ‖∂e_k̂/∂C_{k,l}‖_1 ≤ d (1/4)^{k̂−k+l}, and ‖∂e_k̂/∂C_{k,l}‖_2 ≤ 2√d (1/4)^{k̂−k+l}.
Note that ∂²e_{k+1}/(∂e_k ∂e_k) = TDiag

1. What are the main contributions and novel aspects of the paper regarding matrix inverse and SVD algorithms?
2. What are the strengths and weaknesses of the proposed approach compared to prior works?
3. How does the reviewer assess the significance and practicality of the paper's applications, particularly in non-blind deconvolution?
4. Are there any concerns or questions regarding the theoretical and empirical analyses presented in the paper?
5. How does the reviewer evaluate the clarity and quality of the writing style and organization in the paper?

Review
The paper presents a fast differentiable algorithm for matrix inverse and SVD, with provable guarantees. This makes it possible to "train" the algorithm in order to tune it to a particular data distribution. Finally, the paper analyzes the sample complexity of the proposed training method.
Pros:
The problems studied (matrix inverse and SVD) are some of the most widely used subroutines in data analysis. Differentiable algorithms for those problems could be very useful (although I am not sure what is the state of the art in this area).
The sample complexity analysis is rather rare in this line of research - most of the comparisons in the literature are empirical.
The paper demonstrates empirical improvements (on real data) of the proposed approach, in the context of non-blind deconvolution (D-NbD).
Cons:
If I understand correctly, the higher-order iterative method for matrix inverse computation is a known technique in the numerical analysis community. So the contribution here is in "neuralizing" this approach and evaluating it theoretically and empirically.
For SVD, I could not find any empirical comparison between the proposed technique and the power method. Is the dependence of the latter on the eigenvalue gap an issue in applications?
Minor comments:
The writing style could be better. For example, these two sentences alternate between the third and the second person:
"Specifically, manifold GD first updates the variable in the manifold tangent space along the objective function's projected gradient. Then, map the updated variable in the tangent space to a feasible point on the geodesic..."
ICLR | Title
Fast and Differentiable Matrix Inverse and Its Extension to SVD
Abstract
Matrix inverse (Minv) and singular value decomposition (SVD) are among the most widely used matrix operations in massive data analysis, machine learning, and statistics. Although well studied, they still encounter difficulties in practical use due to inefficiency and non-differentiability. In this paper, we aim to solve the efficiency and differentiability issues through learning-based methods. First of all, to perform matrix inverse, we provide a differentiable yet efficient way, named LD-Minv, which is a learnable deep neural network (DNN) with each layer being an L-th order matrix polynomial. We show that, with proper initialization, the difference between LD-Minv's output and the exact pseudo-inverse is of order O(exp{−L^K}), where K is the depth of LD-Minv. Moreover, by learning from data, LD-Minv further reduces the difference between the output and the exact pseudo-inverse. We prove that gradient descent finds an ε-error minimum of the ℓ2 regression loss within O(nKL log(1/ε)) steps for LD-Minv, where n is the data size. At last, we provide the generalization bound for LD-Minv in both under-parameterized and over-parameterized settings. As an application of LD-Minv, we provide a learning-based optimization method to solve problems with orthogonality constraints and utilize it to differentiate SVD (D-SVD). We also provide a theoretical generalization guarantee for D-SVD. Finally, we demonstrate the superiority of our methods on synthetic and real data in the supplementary materials.
1 INTRODUCTION
Matrix inverse (including matrix pseudo-inverse) and singular value decomposition are fundamental linear algebra operations, ubiquitous in machine learning, statistics, signal processing, and other fields. Solving scientific computing or optimization problems often requires these two operators, such as the matrix (pseudo-) inverse for least squares regression, singular value decomposition (SVD) for dimensionality reduction (PCA), low-rank-related problems (Liu et al., 2010; Zhang et al., 2018; Liu et al., 2013), graph-based problems (Wu et al., 2020a; Von Luxburg, 2007), and even the training of deep neural networks (DNNs) with structured layers (Ionescu et al., 2015). Nevertheless, Minv and SVD appear less and less often in modern machine learning tasks. One reason is inefficiency: computing the SVD and the Minv can be extremely time-consuming for large-scale problems, whereas efficiency is a major concern in the current big data and deep learning era. Non-differentiability is considered another reason that blocks the use of SVD and Minv. Most prevalent methods for training DNNs are first-order and based on backpropagation; however, Minv and SVD are not necessarily continuous functions of the matrix entries (Stewart, 1969), so derivatives do not always exist. Although Minv and SVD may be backprop-able in some specific implementations, they are unstable and essentially non-differentiable, so gradients cannot pass through them when backpropagating. Considering the above problems, one natural question emerges.
Does there exist an efficient and differentiable way to perform Minv and SVD?
Over the last decade, many sketch-based methods have been developed, e.g., Nelson & Nguyen (2013); Meng & Mahoney (2013); Cohen et al. (2015); Indyk et al. (2019). The main idea of sketch-based techniques is to use random projections, which are efficiently computable, to reduce the problem size before performing SVD and Minv. However, they do not solve the problem of non-differentiability, as a smaller-sized SVD or Minv still needs to be computed.
Recently, "differentiable learning-based" methods (D-LbM) have attracted considerable attention (Chen et al., 2018; Indyk et al., 2019; Liu et al., 2019; Wu et al., 2020b; Xie et al., 2019). These methods usually unroll classical optimization or numerical iterative algorithms and introduce learnable parameters to obtain a learnable DNN. In general, such algorithm-inspired DNNs consist of differentiable operators such as matrix polynomials. Benefiting from training on data, D-LbMs can execute in far fewer iterations with a similar per-iteration cost as the original algorithms yet obtain much better performance. Many empirical results, e.g., Gregor & LeCun (2010); Yang et al. (2016); Peng et al. (2018), show that a well-trained DNN provided by a D-LbM—compared with the original optimization algorithm—can obtain an almost equally good solution using one or even two orders of magnitude fewer iterations. Based on these observations, we aim to find a learning-based iterative way to differentiate Minv and SVD in this paper.
However, all these D-LbMs suffer from two common problems. One is the convergence guarantee. The forward process of a D-LbM may diverge, even if the original unrolled iterative algorithm has a well-behaved convergence guarantee. In fact, most D-LbM methods have little or no convergence guarantee. To the best of our knowledge, no work reveals the behavior of a D-LbM during training. How does the training loss decrease? How big is the gap between the output of the D-LbM and that of the original algorithm? All these questions are still open. Another problem is that, both for D-LbMs and the original algorithms, little theory exists when the input data obey a common distribution (e.g., data drawn from a low-dimensional manifold). Essentially, by learning from data, D-LbMs can obtain a problem-dependent parameter bias, which helps them achieve better performance on a specific data distribution at much lower computational cost, instead of simply fixing the parameters ahead of time like traditional iterative methods. But there is no mathematical result describing this phenomenon rigorously. Moreover, it is unknown whether the trained D-LbMs generalize well on data from the same distribution.
Remarkably, in this paper we provide a learnable, differentiable, and efficient way to perform Minv, named Learnable Differentiable Matrix Inverse (LD-Minv), and solve the above two problems for our proposed LD-Minv. First of all, LD-Minv is a DNN with each layer being an L-th order matrix polynomial, and the coefficient of each power is learnable. We show that LD-Minv converges to the Minv or pseudo-inverse with order L if the coefficients of the polynomial are set properly. Namely, the difference between the output of the DNN and the exact pseudo-inverse is in the order O(exp{−L^K}), where K is the depth of LD-Minv. Secondly, by learning from data, LD-Minv can further improve the precision on a specific data distribution. Namely, the distance between the output and the exact pseudo-inverse can be arbitrarily small. Specifically, the training loss converges to zero exponentially fast, i.e., gradient descent (GD) finds an ε-error minimum of the ℓ2 regression loss using at most O(nKL log(1/ε)) iterations, where n is the data size. Finally, we provide the generalization bound for LD-Minv in both under-parameterized and over-parameterized settings.
With LD-Minv at hand, as a direct application, we propose a learning-based optimization method to solve convex problems with non-convex orthogonality constraints. Then we use it to differentiate SVD. Note that we also provide a generalization guarantee for our D-SVD. Unlike the previous work on differentiable SVD (Indyk et al., 2019), which is based on the power method and needs an assumption on the gap between singular values to ensure convergence, our method can converge to the solution without any gap assumption. In summary, our main contributions include:
• We propose a differentiable method, LD-Minv, to perform matrix inverse efficiently. LD-Minv, a DNN with each layer being a learnable L-th order matrix polynomial, would be useful for accelerating and differentiating general matrix decompositions. We show that LD-Minv converges to the matrix pseudo-inverse with order L.
• By learning, LD-Minv can further improve the approximation performance on an underlying data distribution. We prove that GD finds an ε-error minimum of the ℓ2 regression loss using at most O(nKL log(1/ε)) iterations under mild conditions. Moreover, we also provide the generalization bound for LD-Minv. We reveal that the empirical Rademacher complexity of the loss function class is bounded by Õ(min{d³L/K^{1/2}, √(d³K/n)}), where Õ(·) hides log factors, and K and d are the depth and the width of LD-Minv, respectively.
• As a direct application, we further provide a learning-based general framework to solve the problems with orthogonality constraints. This D-LbM helps us to differentiate SVD. Finally, we also provide a generalization guarantee for our D-SVD.
2 DIFFERENTIABLE MATRIX INVERSE
We introduce the proposed D-Minv in this section. We first present an intuitive idea showing how to approximate the matrix inverse iteratively. Then we generalize this iterative method to obtain the fixed and the learnable D-Minv, respectively.
2.1 FIXED DIFFERENTIABLE MATRIX INVERSE
Given a matrix A ∈ Rd×d, we want to approximate its inverse A−1 by the matrix X ∈ Rd×d, i.e., AX ≈ I, where I ∈ Rd×d is the identity matrix. Ideally, X is the fixed point of the following equation:
X = X(AX)^{-1} = X ( ∑_{l=0}^{∞} (I − AX)^l ),   (1)
where the last equality follows from the Neumann series when AX ≈ I, and (I − AX)^l is the l-th order matrix polynomial.
Inspired by the above iterative process, it is obvious that we can obtain the matrix X iteratively. Hence, we consider the following higher-order iterative method, which is an L-th order matrix polynomial in each iteration, where L ≥ 4:
X_{k+1} = X_k ( ∑_{l=0}^{L−1} E_k^l ),   E_k = I − AX_k.   (2)
Note that DNNs can easily implement this method: one layer corresponds to one iterative step. Since there are no parameters to learn in Eq. (2), and it only involves matrix multiplication (thus differentiable), we name it fixed D-Minv 1. One may doubt that the computational cost of each iteration is high in Eq. (2). However, as shown in Lemma 1, a higher-order polynomial usually implies a faster convergence rate. For obtaining the same precision with different L, the total computational cost is in the order of O(L/ln(L)), which is a very slow growth rate w.r.t. L. Although simple, fixed D-Minv converges to the inverse A^{-1} or A^† extremely fast.
Lemma 1 (Approximation Speed). The sequence {X_k ∈ R^{d_1×d_2}} generated by Eq. (2) converges to the Moore–Penrose inverse A^† with order L provided that X_0 = (1/c_0) A^⊤ with c_0 > (1/2)‖A‖², i.e., we can conclude ‖AA^† − AX_k‖ = e_0^{L^k}, in which e_0 = ‖AA^† − AX_0‖ < 1, where ‖·‖ is the spectral norm.
We can see that fixed D-Minv converges with order L, i.e., a shallow D-Minv can approximate the matrix inverse very well. Moreover, provided with the X_0 given in Lemma 1, the column space and the row space of the matrix X_k are exactly correct from the beginning.
Proposition 1 (Invariant Column and Row Spaces). With the same setting as in Lemma 1, for any k ≥ 0, it holds that:
X_k A A^† = X_k,   A^† A X_k = X_k.
By Proposition 1, it is easy to see that the zero singular values remain unchanged during the computation, which indicates that D-Minv works consistently on full-rank and rank-deficient, square and rectangular matrices. Note that this proposition also holds for the upcoming LD-Minv.
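For concreteness, the following is a minimal PyTorch sketch of the fixed D-Minv iteration in Eq. (2) with the initialization from Lemma 1. The function name and the comparison against torch.linalg.pinv are illustrative assumptions, not part of the paper's implementation.

```python
import torch

def d_minv_fixed(A: torch.Tensor, L: int = 4, K: int = 8) -> torch.Tensor:
    """Approximate the Moore-Penrose inverse of a single matrix A with K steps of Eq. (2)."""
    # Initialization from Lemma 1: X_0 = A^T / c_0 with c_0 > ||A||^2 / 2.
    c0 = torch.linalg.matrix_norm(A, ord=2) ** 2   # spectral norm squared
    X = A.transpose(-2, -1) / c0
    I = torch.eye(A.shape[-2], dtype=A.dtype, device=A.device)
    for _ in range(K):
        E = I - A @ X                  # E_k = I - A X_k
        S, P = I.clone(), I.clone()    # S accumulates sum_{l=0}^{L-1} E^l, P holds E^l
        for _ in range(1, L):
            P = P @ E
            S = S + P
        X = X @ S                      # X_{k+1} = X_k * sum_l E_k^l
    return X

# quick sanity check on a random rectangular matrix
A = torch.randn(30, 50, dtype=torch.float64) / 50 ** 0.5
X = d_minv_fixed(A, L=4, K=8)
print(torch.linalg.norm(X - torch.linalg.pinv(A)))
```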
2.2 LEARNABLE DIFFERENTIABLE MATRIX INVERSE
We consider the learnable D-Minv (LD-Minv) with the following iterative formula:
X_{k+1} = X_k ( ∑_{l=0}^{L−1} C_{k,l} E_k^l ),   E_k = I − AX_k,   L ≥ 4,   (3)
1Higher-order method Eq. (2) is already well-known in the applied mathematics community in another equivalent form, see Climent et al. (2001); Amat et al. (2003); Li et al. (2011).
where (k, l+1) ∈ [K] × [L], the C_{k,l} ∈ R are learnable coefficients, and ‖·‖_F denotes the Frobenius norm used in the training loss below. LD-Minv also falls in the category of LbMs that unroll conventional numerical iterative methods. As discussed in the introduction, two problems, i.e., no convergence analysis for the training procedure and no generalization guarantee for the trained DNN, block further theoretical exploration of these LbMs. However, in contrast to previous works, we obtain favorable theoretical results on both problems for our LD-Minv. First, although fixed D-Minv already converges extremely fast, LD-Minv can still perform much better on the training data under mild conditions. Specifically,
‖X_K − A^†‖ converges to zero exponentially fast during training, i.e., GD finds an ε-error minimum of the ℓ2 regression loss using at most O(nKL log(1/ε)) iterations 2. Second, LD-Minv has a tight generalization bound to guarantee the performance on an unseen matrix when it comes from the same distribution as the training instances. We provide the rigorous theoretical results in Sec. 4.
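A minimal sketch of an LD-Minv network implementing Eq. (3) with learnable coefficients C_{k,l} is shown below, assuming PyTorch; the module and parameter names are illustrative and not taken from the paper's code.

```python
import torch
from torch import nn

class LDMinv(nn.Module):
    """Learnable D-Minv of Eq. (3): K layers, each an L-th order matrix polynomial."""
    def __init__(self, K: int = 7, L: int = 4):
        super().__init__()
        self.K, self.L = K, L
        # initialize C_{k,l} near the oracle value 1, in the spirit of Algorithm 1
        self.C = nn.Parameter(torch.ones(K, L) + 1e-2 * torch.randn(K, L))

    def forward(self, A: torch.Tensor) -> torch.Tensor:
        c0 = torch.linalg.matrix_norm(A, ord=2) ** 2
        X = A.transpose(-2, -1) / c0               # X_0 = A^T / c_0
        I = torch.eye(A.shape[-2], dtype=A.dtype, device=A.device)
        for k in range(self.K):
            E = I - A @ X
            S = self.C[k, 0] * I                   # l = 0 term
            P = I
            for l in range(1, self.L):
                P = P @ E
                S = S + self.C[k, l] * P           # C_{k,l} E_k^l
            X = X @ S                              # X_{k+1} = X_k sum_l C_{k,l} E_k^l
        return X
```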
Discussion 1. One can find that the final iterate X_K can be written as a matrix polynomial of A, and may argue that approximating the Minv by a polynomial is well studied: zero training loss can be obtained when the polynomial's order is large enough, and the approximation error controls the generalization error. For the sake of distinction, we make three clarifications. First of all, most traditional results only describe the behavior in the worst case, which holds for any input matrix and may easily fail in some extreme cases. However, LD-Minv focuses on a more realistic situation, i.e., the input matrices obey some common distribution, such as the affinity matrices of Facebook users or sampled CT and MRI images. How to learn from the data and design a more efficient method is still open, not to mention a theoretical generalization guarantee on a specific data distribution. Secondly, the traditional polynomial approximation method is sensitive to the polynomial order and very easy to over-fit when the order is over-parameterized, say of order nd. Without the precise data distribution density, it is impossible to choose the proper order to ensure zero training loss and guarantee a small generalization error simultaneously. To solve this, we may need some complex and data-driven regularization strategy. In sharp contrast, LD-Minv is robust to the order, and our theoretical results (i.e., zero training loss and small generalization error) hold well from the under-parameterized to the over-parameterized case. Thirdly, exact polynomial fitting can only imply the existence of a zero-training solution, while our results describe the convergence behavior (i.e., which solution we will choose). Obviously, there exists a big gap between the convergence of training and the existence of a solution.
Note that we have also considered a much simpler matrix polynomial: X_K = ∑_{l=0}^{L} C_l A^l. Learning the coefficients {C_l}_{l=0}^{L} becomes an easy-to-solve convex regression problem, unlike the non-convex one induced by Eq. (3). Unfortunately, this parameterization fails due to stability issues, e.g., the coefficients of different powers vary significantly, which causes poor generalization performance.
Discussion 2. Most matrix iterative methods suffer from the ill-conditioning problem, which usually brings instability and makes the analysis and calculation hard to carry out; e.g., for D-Minv, a large condition number may significantly slow the convergence speed (see Lemma 1). However, a learning-based method can greatly alleviate this ill-conditioning problem by learning from data. LD-Minv can beat D-Minv easily when the condition number of the input matrix is large, and the learned coefficients of LD-Minv are also valid on unseen data. See the experimental part for more details.
2.3 TRAINING SETTINGS
We now describe some training settings for D-Minv. Denote by {Ai}ni=1 the training set consisting of n samples. Let X{k,i} ∈ Rd1×d2 be the output of the k-th layer on the i-th training sample Ai ∈ Rd2×d1 . We consider a typical regression loss:
L_n(C) := (1 / 2nK) ∑_{i=1}^{n} ∑_{k=1}^{K} ‖A_i X_{k,i} A_i − A_i‖_F²,   (4)
where C = {C_{k,l}, (k, l+1) ∈ [K] × [L]} is the collection of all learnable parameters. Note that when X = A^†, the properties of the Moore–Penrose inverse give the equation AXA = A, which further implies the equation XAX = X in our invariant-space case; see Proposition 1. Therefore, we only use the difference term (AXA − A) in the training loss.
2Please distinguish the two “convergence” here. One describes Xk approaching A† w.r.t. k and L, and the other refers to the behavior of LD-Minv’s loss w.r.t. training iteration.
Algorithm 1 Gradient descent (GD) with proper initialization for LD-Minv
Input: Training data {A_i}_{i=1}^{n}, number of iterations T, step size η.
1: Generate each C_{k,l} ∈ C such that |C_{k,l} − 1| = O(K^{−α}), where α > 1/4.
2: Set X_{0,i} = (1/c_0) A_i^⊤ or X̃_i, where c_0 > (1/2)‖A_i‖² and X̃_i is the output of a fixed D-Minv.
3: for t = 1, · · · , T do
4:   Update C^{(t+1)} = C^{(t)} − η · D_{[C=C^{(t)}]} L_n.
5: end for
Output: C^{(0)}, . . . , C^{(T)}.
Thanks to the differentiability of LD-Minv, we adopt GD to minimize the training loss to obtain proper coefficients C. We present the initialization strategy and training process in Algorithm 1, where the first derivative of the differentiable function L_n(C) : R^{K×L} → R at C^{(t)} is denoted by D_{[C=C^{(t)}]} L_n(·) := ∂L_n(C^{(t)}) / ∂C ∈ R^{K×L}. When K is large, one may notice that our coefficients are close to the oracle value 1, which may provide a relatively good initial loss L(C^{(0)}). Notably, this does NOT mean that we treat the learning and initialization as a perturbation of the fixed D-Minv and bound the loss by the good-enough fixed D-Minv's performance plus the perturbation error. On the contrary, as we will show in the theoretical results, learnability indeed matters, and LD-Minv can obtain better performance than the fixed one by training. The real purpose of this initialization is to take advantage of D-Minv's local Lipschitz continuity near C = 1, where 1 is the all-one vector. The continuity reduces the complexity of optimization and benefits the generalization of D-Minv.
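For illustration, a minimal sketch of the training loop of Algorithm 1 on the loss of Eq. (4) is given below, assuming PyTorch and the LDMinv sketch above. It uses plain full-batch GD; as a simplification of Eq. (4), only the last layer's output is penalized, and all names are illustrative.

```python
import torch

def train_ld_minv(model, As, T: int = 500, lr: float = 1e-4):
    """As: list of training matrices A_i; minimizes a simplified version of Eq. (4)."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)  # full-batch GD with fixed step size
    for _ in range(T):
        opt.zero_grad()
        loss = 0.0
        for A in As:
            X = model(A)
            # (1/2) ||A X A - A||_F^2 on the final iterate only (Eq. (4) also sums over layers)
            loss = loss + 0.5 * torch.sum((A @ X @ A - A) ** 2)
        (loss / len(As)).backward()
        opt.step()
    return model
```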
3 LEARNING-BASED OPTIMIZATION WITH ORTHOGONALITY CONSTRAINTS
Consider the following general orthogonality-constrained optimization problem:
min_U f(U),   s.t. U^⊤U = I,   (5)
where the objective function f(U) : R^{m×r} → R is differentiable and convex. The usual way to solve Eq. (5) is to perform manifold GD on the Stiefel manifold, which evolves along the manifold geodesics. Specifically, manifold GD first updates the variable in the manifold tangent space along the objective function's projected gradient; it then maps the updated variable in the tangent space to a feasible point on the geodesic, and repeats these two steps until convergence (Edelman et al., 1998). Usually, the mapping step is non-differentiable and inefficient. Fortunately, the works (Wen & Yin, 2013; Ren & Lin, 2013) develop a technique to approximately solve optimization problems with orthogonality constraints, which only involves matrix multiplication and inversion.
We let U ∈ R^{m×r}, where r is the Stiefel manifold's dimension. Denote by G ∈ R^{m×r} the gradient of the objective function f(U) in Eq. (5) w.r.t. U at U_k; then the projection of G onto the tangent space of the Stiefel manifold at U_k is PU_k 3, where P = GU_k^⊤ − U_kG^⊤ and P ∈ R^{m×m}. Instead of parameterizing the geodesic of the Stiefel manifold along direction P using the exponential map, we generate the feasible points to update U_k by the following Cayley transform (Wen & Yin, 2013):
U_{k+1} = U(t) = C(t) U_k,   where C(t) = ( I + (t/2) P )^{-1} ( I − (t/2) P ),   (6)
where I is the identity matrix and t ∈ R is the step size used for updating the current U_k. In other words, U(t) is a re-parameterized local geodesic w.r.t. t on the Stiefel manifold. One can easily verify that U(t) has the following properties, given U_k^⊤U_k = I: (1) (d/dt) U(0) = −PU_k; (2) U(t) is smooth in t; (3) U(0) = U_k; (4) U(t)^⊤ U(t) = I, ∀t ∈ R. It is evident that, if t is in a proper range, U_{k+1} can lead to a lower objective value than U(0) = U_k on the Stiefel manifold. Besides, computing U_{k+1} only involves matrix multiplication and matrix inverse, which can easily be performed by our LD-Minv in Eq. (3). Therefore, we let C(t) = LD-Minv(I + tP/2) in Eq. (6). Then we obtain a learning-based optimization method for the problem with orthogonality constraints in Eq. (5).
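A minimal sketch of the Cayley-transform update in Eq. (6) is shown below, assuming PyTorch; the exact inverse can be swapped for an (LD-)Minv approximation as described above, and the function names are illustrative.

```python
import torch

def cayley_step(U, G, t, minv=torch.linalg.inv):
    """One feasible update U(t) = (I + t/2 P)^{-1} (I - t/2 P) U with P = G U^T - U G^T."""
    P = G @ U.transpose(-2, -1) - U @ G.transpose(-2, -1)   # skew-symmetric direction
    I = torch.eye(P.shape[-1], dtype=U.dtype, device=U.device)
    return minv(I + 0.5 * t * P) @ (I - 0.5 * t * P) @ U
```

Since P is skew-symmetric, (I + (t/2)P) is always invertible, and the returned matrix stays on the Stiefel manifold up to the accuracy of the chosen inverse.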
3We choose the canonical metric on the tangent space as the equipped Riemannian metric.
3.1 APPLICATION: DIFFERENTIABLE SVD BY LD-MINV
In this part, we show how to utilize the previous general learning-based optimization framework to differentiate SVD. Given an arbitrary matrix M ∈ R^{m×n̂} and its skinny SVD M = Ũ S̃ Ṽ^⊤, Von Neumann's trace inequality, ⟨M, X⟩ ≤ ∑_i σ_i(M) σ_i(X), implies that {Ũ ∈ R^{m×r}, Ṽ ∈ R^{n̂×r}} is an optimal solution of the following optimization problem:
min_{U,V} f(U, V) := (1/2) ‖D_c(Λ)‖_F² − ⟨M, UV^⊤⟩,   s.t. U^⊤U = V^⊤V = I,   (7)
where Λ := U^⊤MV, Diag(·) returns the collection of the diagonal entries of the input matrix, and D_c(Λ) := (Id − Diag)(Λ) := Λ − Diag(Λ) for any input matrix Λ. Note that solving the problem in Eq. (7) is equivalent to performing SVD. Let {M, U_0, V_0} be the inputs of our Differentiable SVD (D-SVD). We adopt the Cayley transform in Eq. (6) to solve the problem in Eq. (7) and replace the matrix inverse in it by our proposed LD-Minv in Eq. (3). W.l.o.g., in the following we only focus on the updating strategy of U due to the symmetry between U and V. We first compute the gradient of the objective function f(U, V) w.r.t. U at {U_k, V_k}, which we denote by G = MV_k ( D_c(V_k^⊤ M^⊤ U_k) − I ). Then we find a geodesic curve U(t) along the gradient on the Stiefel manifold for updating U_k. Referring to Eq. (6), the curve is U(t) = ( I + (t/2)P )^{-1} ( I − (t/2)P ) U_k, where P = GU_k^⊤ − U_kG^⊤. Given the step size t, we can use LD-Minv to perform the matrix inverse of the factor ( I + (t/2)P ). In summary, D-SVD consists of two steps: (1) find a proper t; (2) update {U_k, V_k} by the Cayley transform. For finding a proper step size t, we consider the following problem:
t*_U = argmin_{0 ≤ t ≤ ε} f(t) := f(U(t), V_k),   (8)
where ε is a given parameter to ensure a small enough magnitude of t*_U. Notice that if t is small enough so that ‖(t/2)P‖ < 1, then we have (I + (t/2)P)^{-1} = I + ∑_{l=1}^{∞} (−(t/2)P)^l, which implies that U(t) = ( I + 2 ∑_{l=1}^{∞} (−(t/2)P)^l ) U_k. Considering that t* in Eq. (8) is small, we can approximate
f(t) via its second-order Taylor expansion at t = 0:
f(t) = f(0) + f′(0) · t + (1/2) f″(0) · t² + o(t²),   (9)
where f ′(0) and f ′′(0) are the first and the second order derivatives of f(t) evaluated at 0, respectively. These two derivatives have closed form and can be computed efficiently. Consequently, we can obtain an approximated optimal solution t∗ via:
t*_U = min{ ε, t̃ },   where ε < 2/‖P‖ and t̃ = −f′(0)/f″(0),   (10)
provided that:
f′(0) = ⟨ D_c(U_k^⊤ M V_k), D_c(U_k^⊤ P M V_k) ⟩ + ⟨ M V_k, P U_k ⟩,
f″(0) = ‖ D_c(U_k^⊤ P M V_k) ‖_F² + ⟨ D_c(U_k^⊤ M V_k), D_c(U_k^⊤ P² M V_k) ⟩ − ⟨ M V_k, P² U_k ⟩.   (11)
Then we can update U by U(t*_U), i.e., U_{k+1} = U(t*_U). Thanks to the cyclic property of the trace operator, V_{k+1} shares a similar update strategy with U_{k+1}.
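A minimal sketch of this step-size rule, transcribing Eqs. (10)-(11), is shown below (assuming PyTorch). The helper off_diag implements D_c; the function names and the clamping detail are our illustrative assumptions.

```python
import torch

def off_diag(L):
    """D_c(Lambda) = Lambda - Diag(Lambda) for a square matrix."""
    return L - torch.diag_embed(torch.diagonal(L, dim1=-2, dim2=-1))

def step_size(M, U, V, P, eps):
    """Approximate t*_U = min(eps, -f'(0)/f''(0)) using the closed forms in Eq. (11)."""
    MV = M @ V
    A0 = off_diag(U.transpose(-2, -1) @ MV)           # D_c(U^T M V)
    A1 = off_diag(U.transpose(-2, -1) @ P @ MV)       # D_c(U^T P M V)
    A2 = off_diag(U.transpose(-2, -1) @ P @ P @ MV)   # D_c(U^T P^2 M V)
    f1 = torch.sum(A0 * A1) + torch.sum(MV * (P @ U))                       # f'(0)
    f2 = torch.sum(A1 * A1) + torch.sum(A0 * A2) - torch.sum(MV * (P @ P @ U))  # f''(0)
    t = -f1 / f2
    return torch.minimum(t, torch.as_tensor(eps, dtype=t.dtype, device=t.device))
```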
Provided the input {M, U_k, V_k}, the complete iterative steps of our D-SVD are as follows 4:
G_U = M V_k ( D_c(V_k^⊤ M^⊤ U_k) − I ),   P_U = G_U U_k^⊤ − U_k G_U^⊤,
t*_U = min{ 2/‖P_U‖, −f′(U(t), V_k)/f″(U(t), V_k) } + t̃_{U_k},   U_{k+1} = Cayley(t*_U, C_{U_k}, P_U, U_k),
G_V = M^⊤ U_{k+1} ( D_c(U_{k+1}^⊤ M V_k) − I ),   P_V = G_V V_k^⊤ − V_k G_V^⊤,
t*_V = min{ 2/‖P_V‖, −f′(U_{k+1}, V(t))/f″(U_{k+1}, V(t)) } + t̃_{V_k},   V_{k+1} = Cayley(t*_V, C_{V_k}, P_V, V_k),   (12)
4For convenience and clear writing, we omit the superscript in the updating rules.
Algorithm 2 Forward Propagation for D-SVD
Input: Training data {M_i, U_{i,0}, V_{i,0}}_{i=1}^{n}, depth K_svd of D-SVD, number of iterations T_inv, step size η_inv, and D-SVD's current parameters t̃.
1: for k = 0, . . . , K_svd do
2:   Calculate P_{U,i} and t*_{U,i} with {M_i, U_{i,k}, V_{i,k}}_{i=1}^{n} by Eq. (12).
3:   Train the LD-Minv with parameters C_{U_k} by Algorithm 1 with the data {A_i}_{i=1}^{n}, number of iterations T_inv, and step size η_inv, where A_i = (I + t*_{U,i} P_{U,i}/2).
4:   Repeat Steps 2 and 3 by switching the roles of U and V with the input {U_{i,k+1}, V_{i,k}}_{i=1}^{n}.
5: end for
Output: {C_{U_k}, C_{V_k}}_{k=1}^{K_svd} and {U_{i,k+1}, V_{i,k}}_{i=1,k=1}^{n,K_svd}.
Algorithm 3 Joint Training for D-SVD and LD-Minv
Input: Training data {M_i}_{i=1}^{n}, numbers of iterations T_inv and T_svd, step sizes η_svd and η_inv.
1: Generate {U_{i,0}, V_{i,0}}_{i=1}^{n} randomly from the r-dimensional Stiefel manifold. Set t̃^{(0)} = 0.
2: for t = 1, · · · , T_svd do
3:   Forward propagate by Algorithm 2.
4:   Update t̃^{(t+1)} = t̃^{(t)} − η_svd · D_{[t̃=t̃^{(t)}]} M_n, where M_n := (1 / nK_svd) ∑_{i=1}^{n} ∑_{k=1}^{K_svd} f(U_{i,k}, V_{i,k}).
5: end for
Output: {U_{i,k+1}, V_{i,k}}_{i=1,k=1}^{n,K_svd}.
where Cayley(t, C, P, U) := LD-Minv(I + tP/2, C)(I − tP/2)U, t̃ := {(t̃_{U_k}, t̃_{V_k}) ∈ R²}_{k=1}^{K_svd} is the collection of learnable parameters for D-SVD, and LD-Minv(·, C) is an LD-Minv module with parameters C. Note that a DNN can easily implement our D-SVD: each layer implements the steps in Eq. (12), which are the specific iterative procedures of the previous general learning-based optimization with orthogonality constraints. We adopt different LD-Minvs for different layers, and provide the training process for D-SVD in Algorithms 2 and 3. By introducing LD-Minv into the Cayley transform, we bypass the non-differentiable exact Minv and solve the problem in Eq. (7) (i.e., perform SVD) in a differentiable and learning-based way.
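As an illustration, the U-update of one D-SVD layer (the first two lines of Eq. (12)) can be sketched as below; the V-update mirrors it with the roles of U and V swapped. The sketch assumes PyTorch and the helpers off_diag, step_size, and cayley_step from the earlier sketches; t_tilde_u stands for the learnable per-layer offset t̃_{U_k}.

```python
import torch

def d_svd_u_update(M, U, V, t_tilde_u, minv=torch.linalg.inv):
    """One U-update of Eq. (12); pass an LD-Minv callable as `minv` to make it learnable."""
    r = U.shape[-1]
    G = M @ V @ (off_diag(V.transpose(-2, -1) @ M.transpose(-2, -1) @ U)
                 - torch.eye(r, dtype=U.dtype, device=U.device))
    P = G @ U.transpose(-2, -1) - U @ G.transpose(-2, -1)
    eps = 2.0 / torch.linalg.matrix_norm(P, ord=2)       # the 2/||P_U|| cap in Eq. (12)
    t = step_size(M, U, V, P, eps) + t_tilde_u           # Taylor step plus learnable offset
    return cayley_step(U, G, t, minv=minv)
```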
4 MAIN RESULTS
In this section, we provide the main theoretical results of D-Minv, including the linear convergence of GD during training and the generalization performance. All the detailed proofs of these theorems are provided in the supplementary material. The theoretical results for D-SVD are presented in the supplementary material due to limited space.
4.1 CONVERGENCE RATE OF GD
We consider the general LD-Minv and large training data size n in this section. We first make the following assumption on the training data.
Assumption 1 (Bounded Singular Values). Given the training matrices {A_i ∈ R^{d_1×d_2}}_{i=1}^{n}, we assume the training matrices' positive singular value vectors are d-dimensional, where d ≤ min{d_1, d_2}. Assume these positive singular values have lower and upper bounds, i.e.,
‖x_i‖_∞ ≤ 1 and min_{j∈[d]} [x_i]_j ≥ b̄_a > 0,   where x_i = σ_+(A_i) ∈ R^d, ∀i ∈ [n],
and σ_+(·) extracts the positive singular values.
In general, the boundedness assumption on the singular values is weak and easy to satisfy. See the discussion in the supplementary material (Section C.1.2).
With this assumption, we can obtain the convergence rate of our Algorithm 1.
Theorem 1 (Convergence Rate). Suppose Assumption 1 holds and let d = min{d_1, d_2}. For any ε > 0, let K = Ω(d b̄_a²); then with probability at least 1 − exp(−O(K/b̄_a²)), GD in Algorithm 1 with step size η = Θ(1/dL) can find a collection of coefficients such that:
L_n(C^{(T)}) < ε,   for T = Θ( dLK n b̄_a^{−2} ln(1/ε) ),
where b̄_a is the lower bound on the positive singular values of the training data A_i.
This is known as a linear convergence rate because ε drops exponentially fast in T. Recall that K and L are from the definition of our LD-Minv.
Remark 1. Different from the traditional theoretical results of DNNs, the randomness in our results mainly comes from a re-weighting strategy (see Sec. E) rather than initialization. In general, our results hold for any initialization strategy as long as the condition in Algorithm 1 is met. Notably, our results do not require the depth or the width to be in the polynomial order of data size n, which is a prevalent over-parameterization assumption for the current deep learning theories.
Remark 2. To present our convergence result in the simplest way, we mainly focus on the GD method with a fixed step size. It is easy to extend our results to more general settings, such as stochastic GD and dynamic step sizes.
4.2 GENERALIZATION
We characterize the generalization performance of LD-Minv trained by GD in this theorem.
Theorem 2 (Generalization Bound). Denote by P_A the distribution of A_i in Assumption 1. Suppose that α ≥ 1 and K^{2α−1/2} = Ω(√n); then with probability at least 1 − δ, the iterate C^{(t)} of Algorithm 1 satisfies:
E_{A∼P_A}[ L_n(C^{(t)}) ] ≤ L_n(C^{(t)}) + Õ( min{ b̄_w² d³ L / K^{α/2}, √(d³K/n) } ) + O( √(ln(1/δ)/n) ),
for t = 0, 1, · · · , T, where E_{A∼P_A}[L_n] is the expected value of L_n under the probability measure P_A and Õ(·) hides log factors.
We adopt two ways to upper bound the empirical Rademacher complexity, which correspond to the cases K < n and K > n, respectively. The final bound is the minimum of them (the middle term in the above upper bound).
Remark 3. The second term in our bound distinguishes our result from most previous work (Allen-Zhu et al., 2019; Arora et al., 2019; Yehudai & Shamir, 2019; Cao & Gu, 2019; Chen et al., 2019; Xie et al., 2020) on the generalization bounds of over-parameterized neural networks. Specifically, most over-parameterized work mainly focuses on establishing bounds that do not explode when the network width goes to infinity. However, those bounds are exponential in the depth, which may not hold when the depth is large. In contrast, our result covers a wider range of depths K: it covers both K < n and K > n. One may wonder whether our bound will explode when K approaches infinity. The answer is no. We can observe a "double descent" trend to a certain extent: the generalization error bound first increases with the network depth K when K < n, then it starts to decrease when K becomes larger, even for K ≫ n.
5 CONCLUSION
We provide a differentiable and learnable way, named LD-Minv, to perform the matrix inverse using L-th order matrix polynomials. Then, we offer theoretical guarantees for LD-Minv on the convergence during training and the generalization ability in both the under-parameterized and over-parameterized settings. Remarkably, we are the first to provide a rigorous analysis of the training process of differentiable learning-based methods. As an application of LD-Minv, we propose a learning-based optimization method to solve problems with non-convex orthogonality constraints, and then utilize it to differentiate SVD. A generalization guarantee for D-SVD is also provided.
APPENDIX
A EXPERIMENTAL RESULTS
A.1 EFFECTIVENESS VALIDATION
We conduct experiments on synthetic data to validate the effectiveness of our proposed LD-Minv and D-SVD. We run all our experiments 10 times and report the average results.
A.1.1 EXPERIMENTS RESULTS FOR LD-MINV
Settings. For LD-Minv, we adopt square and rectangular matrices to test its performance. It should be mentioned that our LD-Minv is adaptive to the shape of the input matrix. In general, the square matrix can better test the performance of the algorithm, since the minimum singular value is near 0 in this case (Vershynin, 2010), and a large condition number of the input may make most numerical matrix calculation methods unstable. For a square matrix A ∈ R^{d×d}, its elements are sampled i.i.d. from a Gaussian, namely A_{i,j} ∼ N(0, 1/√d). For a rectangular matrix A ∈ R^{m×n̂}, its elements are also sampled from A_{i,j} ∼ N(0, 1/√d), where d = max{m, n̂}. We adopt SGD to optimize the parameters. Note that our theoretical results in Sec. 4 hold for GD, but they easily extend to the SGD case. The learning rate is set to 1e-4, and it is halved every 300 iterations. We also fix the batch size as 100, while the number of iterations is set to 1200. Note that, in the experiments, the adopted exact pseudo matrix inverse (Minv) is implemented by PyTorch.
We adopt the following test loss to measure the performance:
Test Loss := (1/n) ∑_{i=1}^{n} ‖A_i X_{K_inv, i} A_i − A_i‖_F²,
where X_{K_inv, i} is the final output of D-Minv or LD-Minv. Here n = 100 is the size of the test data. With d = 100, we first test the influence of L and K, which correspond to the order of the matrix polynomial and the number of layers in our LD-Minv, respectively. Due to the high precision of LD-Minv and D-Minv, for visualization we take a base-10 logarithm of the loss in this experiment.
Results. From Fig. 1(a), we can see that the number of layers K plays a more critical role for LD-Minv than the order L of the polynomial. When setting K = 7, the log of the test error drops from −1.0 for L = 4 to −4.6 for L = 9. However, the log of the test error drops from 0.32 for K = 2 to −4.6 for K = 7 when fixing L = 9. Hence, for efficiency, we suggest utilizing a large K and a small L for LD-Minv.
Compared with D-Minv, according to Fig. 1(b) and Fig. 1(c), LD-Minv can obtain a better test performance even for large K and L, although D-Minv converges to the Minv extremely fast in this case. Our result demonstrates that learning can help LD-Minv obtain better performance on a problem-dependent data distribution rather than merely fixing the coefficients ahead of time. We notice that LD-Minv and D-Minv obtain a similar precision when K > 15; the reason is that the approximation error for D-Minv is on the order of 1e−8 in this case, which leaves very limited room for improvement by learning.
Fig. 1(b) further verifies the importance of K. As shown in our theoretical results (Theorems 1 and 2), a large K indicates a smaller generalization bound and a higher probability of obtaining the linear convergence rate for Algorithm 1.
Another significant advantage of our proposed method is efficiency. D-Minv and LD-Minv can invert the input matrix with high precision and with a low computational cost. The benefits of D-Minv and LD-Minv are apparent when the matrix dimension is large. As shown in Table 1, even for large K and L, the inference time of LD-Minv is less than a tenth of the time of the exact matrix inverse when the size is 1000 × 1500. It is well known that Minv and matrix multiplication (MatMul) share the same computational complexity theoretically, while their speeds vary a lot numerically. The computation of our method is mostly spent on MatMul, which can easily be accelerated by GPU and is much cheaper than Minv. MatMul and Minv only share the same complexity asymptotically: the constants in the order O(·) vary significantly, especially when the matrix dimension d → ∞. Moreover, the general MatMul algorithm with a complexity of O(d³) is not optimal, and there exist algorithms with a complexity of only O(d^{2.3}).
To further investigate the importance of the learning procedure, we conduct the experiments on the matrices with different condition numbers. Given the condition number κ ≥ 1, we generate the data matrix by Ai = UiSiVi, where Ui and Vi are randomly sampled from a 100-dimensional
Stiefel manifold. Here Si is a diagonal matrix whose diagonal entries are i.i.d. and obey a uniform distribution U(1/κ, 1) with distribution boundaries being [1/κ, 1]. We let K = 4, L = 10 and the batch size be 100. We show the results in Table 2. Not surprisingly, D-Minv suffers from ill-condition problems, e.g., the large condition number deteriorates the performance of D-Minv. However, LD-Minv is not sensitive to the condition number, and it can beat D-Minv easily when the condition number is large. Note that all the results are reported on the unseen data.
A.1.2 EXPERIMENTAL RESULTS FOR D-SVD
Settings. For D-SVD, we adopt the rectangle matrix to run the experiments. Similarly, our algorithms also work for the square matrix. For the rectangle matrix A ∈ Rm×n̂, its elements are also sampled from i.i.d. Gaussian, namely Ai,j ∼ N (0, 1/ √ d), where d = max{m, n̂}. We let m = 50 and n̂ = 100, and utilize Algorithm 3 to solve our learning problem5. Similarly, we also perform the batch training with batch size 30. The learning rate is set to 1e − 2 for D-SVD and 1e − 4 for LD-Minv contained in D-SVD. We do not decay the learning rate during training, and let the number of iterations for D-SVD and LD-Minv be 200 and 100, respectively. For LD-Minv in each layer, we let L = 4 and K = 6.
Same as Sec. 3, we adopt the following loss to train our D-SVD:
Training Loss := (1 / nK_svd) ∑_{i=1}^{n} ∑_{k=1}^{K_svd} ( (1/2) ‖Λ_{i,k} − Diag(Λ_{i,k})‖_F² − ⟨ M_i, U_{i,k} V_{i,k}^⊤ ⟩ ),
where U_{i,k} and V_{i,k} are the k-th layer's outputs of D-SVD, and Λ_{i,k} := U_{i,k}^⊤ M_i V_{i,k}. The reason that we introduce the regularization term ‖U_{i,k}^⊤ M_i V_{i,k} − Diag(U_{i,k}^⊤ M_i V_{i,k})‖_F² is non-uniqueness. In general, any pair (U, V) ∈ { (ŨR, ṼR) : M = Ũ S̃ Ṽ^⊤, R^⊤R = RR^⊤ = I } can maximize the term ⟨M, UV^⊤⟩. However, in order to obtain the exact singular vectors of M, the matrix U_{i,k}^⊤ M_i V_{i,k} should have zero off-diagonal entries. Thus, the regularization term is necessary for the training of D-SVD.
We use the following test loss to report the results:
Test Loss := (1/n) ∑_{i=1}^{n} ( (1/2) ‖Λ_{i,K_svd} − Diag(Λ_{i,K_svd})‖_F² − ⟨ M_i, U_{i,K_svd} V_{i,K_svd}^⊤ ⟩ ),
where U_{i,K_svd} and V_{i,K_svd} are the final layer's outputs of D-SVD, and Λ_{i,K_svd} := U_{i,K_svd}^⊤ M_i V_{i,K_svd}. Note that we also utilize the following quantity to measure the test performance.
SVD MSE := (1/n) ∑_{i=1}^{n} ‖ Sort( Diag(U_{i,K_svd}^⊤ M_i V_{i,K_svd}) ) − SVD(M_i) ‖_F²,
where Sort(·) sorts the input elements in descending order, Diag(·) returns the collection of the diagonal entries of the input matrix, and SVD(·) gives the singular values.
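For reference, a minimal sketch of this metric in PyTorch is given below; the function name is illustrative, and we assume the top-r exact singular values are compared when r < min(m, n̂).

```python
import torch

def svd_mse(M, U, V):
    """SVD MSE between sorted diag(U^T M V) and the exact singular values of M."""
    approx = torch.sort(torch.diagonal(U.transpose(-2, -1) @ M @ V),
                        descending=True).values
    exact = torch.linalg.svdvals(M)[: approx.shape[-1]]   # svdvals is already descending
    return torch.sum((approx - exact) ** 2)
```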
For convenience, we add the mean of the sum of SVD(M_i) to the loss in Table 3 to make the loss non-negative. Although increasing K_svd can also improve the performance in the non-learning case,
5We replace the GD part by SGD for the generality of the experiment.
learning can strengthen the role of Ksvd. Without training, the test loss for Ksvd = 60 is one eighth of the loss for Ksvd = 1; in sharp contrast, the reduction can reach one thousandth after training. Due to the introduced regularizer for the training loss, we can obtain a small SVD MSE rather than the small difference between the sum of singular values.
During the training of D-SVD, we observe that our results are robust to the setting of K and L. In general, the forward process can converge even with a very inexact matrix inverse (Li et al., 2019). Moreover, our introduced learnable step sizes {t̃_{U_k}, t̃_{V_k}} further provide the freedom to adjust the update directions for U_k and V_k. Thus, for efficiency, relatively small K and L are enough for each layer's LD-Minv.
A.1.3 PLUG-AND-PLAY: DIFFERENTIABLE NON-BLIND DECONVOLUTION
Non-blind deconvolution (NbD) aims to restore the latent image z from corrupted observation y with known blur kernel b. In this experiment, we consider a well-known sparse coding formulation:
y = b ⊗ z + n = B W^⊤ x + n,
where ⊗ is the convolution operator, x and n are the sparse code and unknown noises, respectively. B is the matrix form of kernel b, and W> is the inverse of the wavelet transform W (i.e., x = Wz and z = W>x). We utilize the following optimization to perform the non-blind deconvolution:
min_{x,z} g(x, z) := (1/2) ‖y − Bz‖_2² + λ‖x‖_1,   s.t. z = W^⊤x,
where λ > 0 is the balance parameter. A usual way to solve this problem is linearized ADMM (L-ADMM) (Lin et al., 2011), which reads:
a_k = z_k − α_k ( λ_k + β ( z_k − W^⊤ x_k ) ),
z_{k+1} = ( α_k B B^⊤ + I )^{-1} ( a_k + α_k B^⊤ y ),
b_k = x_k − γ_k W^⊤ ( λ_k + β ( z_{k+1} − W^⊤ x_k ) ),
x_{k+1} = sgn(b_k) ⊙ max( |b_k| − γ_k λ, 0 ),
λ_{k+1} = λ_k + β ( z_{k+1} − W^⊤ x_{k+1} ),   (13)
where α_k > 0, γ_k > 0 and β > 0 are penalty parameters, ⊙ is the Hadamard product, and λ_k is the Lagrange multiplier.
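One L-ADMM iteration of Eq. (13) can be sketched as below, assuming PyTorch, that B and W are given as explicit matrices, and that W is a square orthogonal wavelet matrix (so applying W^⊤ is well defined in both directions); `reg` stands for the balance parameter λ, and all names are illustrative.

```python
import torch

def soft_threshold(b, tau):
    # sgn(b) ⊙ max(|b| - tau, 0), written with two ReLUs as in Eq. (14)
    return torch.relu(b - tau) - torch.relu(-b - tau)

def ladmm_step(x, z, lam, y, B, W, alpha, gamma, beta, reg):
    """One step of Eq. (13); the exact inverse is the part D-NbD replaces by LD-Minv."""
    a = z - alpha * (lam + beta * (z - W.T @ x))
    I = torch.eye(B.shape[0], dtype=B.dtype, device=B.device)
    z_next = torch.linalg.inv(alpha * B @ B.T + I) @ (a + alpha * B.T @ y)
    b = x - gamma * W.T @ (lam + beta * (z_next - W.T @ x))
    x_next = soft_threshold(b, gamma * reg)
    lam_next = lam + beta * (z_next - W.T @ x_next)
    return x_next, z_next, lam_next
```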
Differentiable NbD. Inspired by Eq. (13), we can easily obtain the learning-based non-blind deconvolution method:
a_k = z_k − α_k Ĩ_k ( λ_k + β ( z_k − W^⊤ x_k ) ),
T_k = LD-Minv( α_k B B^⊤ + I, C_k ),   z_{k+1} = T_k ( a_k + α_k B^⊤ y ),
b_k = x_k − γ_k W̃_k ( λ_k + β ( z_{k+1} − W^⊤ x_k ) ),
x_{k+1} = ReLU( b_k − γ_k λ ) − ReLU( −b_k − γ_k λ ),
λ_{k+1} = λ_k + β ( z_{k+1} − W^⊤ x_{k+1} ),   (14)
where S_NbD := {Ĩ_k, W̃_k, α_k, γ_k, β}_{k=1}^{K_NbD} is the collection of all learnable parameters, Ĩ_k and W̃_k are introduced learnable matrices (in the implementation, we choose them as convolution layers
Algorithm 4 Joint Training for D-NbD and LD-Minv
Input: Training data {y_i}_{i=1}^{n}, numbers of iterations T_NbD and T_inv, step sizes η_NbD and η_inv.
1: Set x_{i,0} = y_i, z_{i,0} = W^⊤ x_{i,0} and λ_0 = 0.
2: for t = 1, · · · , T_NbD do
3:   Fix the learnable parameters S_NbD^{(t)} and train each layer's LD-Minv by Algorithm 1.
4:   Update S_NbD^{(t+1)} = S_NbD^{(t)} − η_NbD · D_{[S_NbD=S_NbD^{(t)}]} Q_n, where Q_n is presented in Eq. (15).
5: end for
Output: {z_{i,k}}_{i=1,k=1}^{n,K_NbD}.
with compatible sizes), C_k is the learnable parameter set for each layer's LD-Minv, and ReLU(a) = max{a, 0} is the Rectified Linear Unit (ReLU) function. Note that
sgn(b) ⊙ max(|b| − γ, 0) = ReLU(b − γ) − ReLU(−b − γ).
Eq. (14) can be implemented by a DNN. Given the training images {yi}ni=1, we adopt the following loss to train our D-NbD:
Q_n(S_NbD) := (1 / nK_NbD) ∑_{i=1}^{n} ∑_{k=1}^{K_NbD} g(x_{i,k}, W^⊤ x_{i,k}),   (15)
where xi,k is the k-th layer’s output of D-NbD. The training process for D-NbD is given in Algorithm 4.
Settings. The training and test images come from Schmidt & Roth (2014), in which 300 (random selected) have been used for training and the others are used for test. We add different levels of the Gaussian noise to generate our corrupted observations {yi}ni=1. We set the patch size as 160× 160, and let the batch size be 15. Finally, Adam is adopted and executed 1000 epochs for learning rate ranging from 1e-4 to 1e-6 (it is reduced by half every 140 iterations). The observations {yi} are corrupted by blur kernel with size ranging from 17× 17 to 57× 57 and the levels of Gaussian noise varying from 1% to 3%.
Results. From Table 4, we find that fL-ADMM is much more efficient than aL-ADMM. fL-ADMM does not need to perform the matrix inverse since α_k, γ_k are fixed, which, however, is also why fL-ADMM performs poorly: it cannot find universal α_k, γ_k for all test images. aL-ADMM can perform better with adaptive step sizes, but it needs an exact Minv in each iteration. Compared to them, D-NbD outperforms L-ADMM by a large margin in terms of speed and performance. Note that, for one iteration, the computational complexity of fL-ADMM is much lower than the other three since it only involves MatMul. However, it consumes more than 600 iterations to obtain a relatively good solution. In fact, just comparing the Minv time in one iteration of aL-ADMM and D-NbD, our method's time is only about half of that of the exact Minv in PyTorch. Fortunately, the learnability of D-NbD helps us obtain a better or equally good solution using one order of magnitude fewer iterations. Hence, the total time can be only one-tenth of that of aL-ADMM. Moreover, the learnability of LD-Minv further introduces more flexibility, which allows D-NbD(LD-Minv) to achieve better results within fewer layers. Thus, D-NbD(LD-Minv) is more efficient due to less calculation.
A.2 DISCUSSION
Beyond D-NbD, LD-Minv and D-SVD enable rich potential applications, including image recovery and denoising, numerical algorithm acceleration, blind deconvolution, sparse coding, subspace clustering, etc. These compelling applications are beyond the scope of this paper, and we leave them as future work. Moreover, due to its differentiability, D-SVD can help us design richer singular-value-related training losses for DNNs, which is impossible in the previous non-differentiable situation.
B NOTATION
A positive definite matrix A is denoted by A ≻ 0. The transpose of a matrix A ∈ R^{m×n} is A^⊤. We denote by A^{-1} and A^† the matrix inverse and the Moore–Penrose inverse, respectively. The Euclidean inner product between two matrices A ∈ R^{m×n} and B ∈ R^{m×n} is defined as ⟨A, B⟩ := Tr(A^⊤B), where Tr(·) is the trace of a matrix. κ(A) := ‖A^†‖‖A‖ is the generalized condition number of a matrix, where ‖·‖ is the spectral norm. We denote by σ(A) the vector of singular values (not necessarily ordered) of the matrix A. Similarly, we denote by σ_+(A) the vector of positive singular values.
Let ‖·‖_F be the Frobenius norm. Given a differentiable function F, its first and second derivatives at Y are denoted by the linear operator D_{[X=Y]}F(·) : R^m → R^n := ( ∂F_i(Y)/∂X_j )(·) and the quadratic operator D²_{[X=Y]}F(·,·) : R^m × R^m → R^n := ( ∂²F_i(Y)/∂X_j∂X_k )(·,·), respectively, where X_i is the i-th entry of X. We also denote by F^{-1} the inverse of the function F and let the composition map be f ∘ g(x) := f(g(x)). ⊙ is the Hadamard product and [K] = {1, 2, · · · , K}.
C ADDITIONAL THEORETICAL RESULTS
In this section, we provide the additional theoretical results of D-Minv and D-SVD. We start by a warm-up case for D-Minv.
C.1 RESULTS FOR LD-MINV
C.1.1 WARM-UP: ONE PARAMETER FOR ONE LAYER
As a warm-up case, we consider only to learn one coefficient for each layer, as a special case of learnable D-Minv:
X_{k+1} = X_k ( ∑_{l=0}^{L−1} E_k^l + C_k E_k^L ),   E_k = I − AX_k,   (16)
where C_k ∈ R are the learnable coefficients. By setting X_0 = (1/c_0) A^⊤, where c_0 > ‖A‖², and letting C_k = 0 for all k, by Lemma 1 we know that ‖AA^† − AX_k‖ = e_0^{L^k}. One important question is: if we let the coefficient of E_k^L be learnable, will D-Minv obtain a faster convergence rate by training?
In general, previous works show that learning-based methods share the same convergence order as their fixed versions but with a better statistical constant. Interestingly, beyond the constant, we found that D-Minv can improve the order of convergence by learning.
Lemma 2 (Learnability Matters). Assume there is only one training datum A. Provided that X_0 = (1/c_0) A^⊤, where c_0 > ‖A‖², we have:
e_{k+1} = (1 − C_k) · e_k^L + C_k · e_k^{L+1},   where e_k = ‖AA^† − AX_k‖, and k ∈ [K], L ≥ 4.
Moreover, the gradient of L w.r.t. C_k is negative, i.e., D_{[C_k]}L < 0 for all C_k ∈ [0, 1 + ε], where ε > 0 depends on e_{k−1}. Additionally, D_{[C_k]}L ≈ 0 for k ∈ [K − 1] when C_k ∈ [1 − ε, 1 + ε].
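For intuition, here is a short sketch of where the recursion in Lemma 2 comes from, written for a single singular value (so e_k is treated as a scalar); it only applies the telescoping identity (1−e)∑_{l=0}^{L−1} e^l = 1 − e^L to the iteration in Eq. (16):

```latex
\begin{aligned}
e_{k+1} &= 1 - (1 - e_k)\Big(\sum_{l=0}^{L-1} e_k^{\,l} + C_k\, e_k^{\,L}\Big)
         = 1 - \big(1 - e_k^{\,L}\big) - C_k\, e_k^{\,L} + C_k\, e_k^{\,L+1} \\
        &= (1 - C_k)\, e_k^{\,L} + C_k\, e_k^{\,L+1}.
\end{aligned}
```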
Starting from 0, by the negativity of the gradient, the coefficients C_k for k ∈ [K − 1] will converge to the range [1 − ε, 1 + ε] when adopting a proper step size for GD. In this case, i.e., when the C_k are around 1 for all k, the sequence {X_k} converges to the Moore–Penrose inverse with order L̃, where L < L̃ ≤ L + 1. In other words, by learning from one datum, D-Minv finds a way to improve the order of convergence for any input. Remarkably, C_k = 1 for all k is not a global or even a local minimum of the training loss L. For the learnable coefficient C_K at the last layer, satisfying its first-order condition usually implies obtaining the exact solution, i.e., e_{K+1} = 0. In summary, learnable D-Minv not only improves the convergence speed but also makes exact fitting possible.
C.1.2 DISCUSSION FOR ASSUMPTION 1
In general, the boundedness assumption on the singular values is weak and easy to satisfy. For example, the singular values of matrices with i.i.d. zero-mean and unit-variance entries asymptotically obey the Quadrant Law (Dozier & Silverstein, 2007; Shen, 2001), i.e., the density is (1/π)√(4 − σ²) for σ ∈ [0, 2], which implies the singular values have an upper bound. On the other hand, a well-known fact from random matrix theory is that a zero-mean and unit-variance random matrix is non-singular with high probability. Even for a square matrix A, with probability at least 1 − δ, we still have σ_min(A) ≥ δ d^{−1/2} (Rudelson & Vershynin, 2008), where σ_min(A) is the smallest singular value of the matrix A.
C.1.3 LIPSCHITZ SMOOTHNESS BEFORE GENERALIZATION
We would also like to remark that our generalization result can easily be extended to the stochastic case. The proof is the same as the Lemma 4.3 in Ji & Telgarsky (2019) or Theorem 3.3 in Cao & Gu (2019).
In general, smoothness plays an important role in generalization analysis. The covering number of a smooth function class is usually small. We start by showing the local smoothness of learnable D-Minv.
Proposition 2 (Local Lipschitz smoothness). Define a set of coefficient collections:
C* := { C | ∀ C_{k,l} ∈ C, |C_{k,l} − 1| ≤ 1/8 },   (k, l) ∈ [K] × [L], L ≥ 4.
Let P_A = AA^†. Given any coefficient collection C ∈ C*, we denote by G_k(·) the map from P_A E_k to P_A E_{k+1} in (3), i.e., G_k(E) := P_A − P_A (I − E)( ∑_{l=0}^{L−1} C_{k,l} E^l ), and let H_k(·) := R ∘ G_{K−1} ∘ · · · ∘ G_k(·), where R(E) := ∑_{k̂=k}^{K} ‖W_{k̂} E_{k̂} A‖_F². Suppose ‖A‖² ≤ 1 and ‖W_k‖ = O(1) for all k; then we have:
| H_k(Ê) − H_k(Ẽ) | ≤ 2 ‖ P_A Ê − P_A Ẽ ‖_*,   ∀k ∈ [K],
where Ê and Ẽ have the same size and the same norm upper bound as E_k, d is the dimension of the positive singular value vector σ_+(A), and ‖·‖_* is the matrix nuclear norm.
In general, the composition of two Lipschitz continuous functions leads to a worse Lipschitz constant. However, our LD-Minv shares a consistent constant across all layers. This is important since layer-wise local smoothness usually implies a small covering number of the whole D-Minv (Bartlett et al., 2017; Wei & Ma, 2019), which results in a tight generalization bound.
C.2 RESULTS FOR D-SVD
We characterize the generalization performance of D-SVD trained by GD in this theorem. First of all, we present several definitions. Let
M_n(t̃, C) := (1 / nK_svd) ∑_{i=1}^{n} ∑_{k=1}^{K_svd} ⟨ M_i, U_{i,k} V_{i,k}^⊤ ⟩,
where (t̃,C) are the collection of the parameters of D-SVD and the LD-Minvs in each layer. Denote by f(U | V,M, t̃k,Ck) the operations in Eq. (12) that map Uk to Uk+1, i.e.,
Uk+1 := f(Uk | Vk,M, t̃k,Ck), (17)
where (t̃_k, C_k) ∈ R × R^{LK_inv} are the learnable parameters of D-SVD and LD-Minv at the k-th layer of D-SVD. We define the coefficient set:
C_k := [ −ε_u/2, +ε_u/2 ] × { C : ‖C − C^{(0)}‖_F = Õ( 1/√K_inv ) },
where ε_u := min_i 1/‖P_i‖, in which P_i = M_i V_k U_k^⊤ − U_k V_k^⊤ M_i^⊤, ∀i ∈ [n].
Proposition 3 (Covering Number of a Single Layer). Denote by F_k the class of functions in Eq. (17), i.e., F_k := { f(U_k | V_k, M, t̃_k, C_k) : (t̃_k, C_k) ∈ C_k }. Then we have:
ln N(F_k, ε, ‖·‖) = O( LK_inv ln( 1/(ε√K_inv) ) + ln( ‖M‖/ε ) ).
Theorem 3 (Generalization Bound for D-SVD). Denote by P_M the distribution of M_i and let d := max{m, n̂}. Suppose that ‖M_i‖ ≤ 1, ∀i ∈ [n], and K_inv = Ω( max{nd, K_svd²} ); then with probability at least 1 − δ, the iterate t̃^{(t)} of Algorithm 3 satisfies:
E_{M∼P_M}[ M_n(t̃^{(t)}) ] ≤ M_n(t̃^{(t)}) + Õ( √(K_svd³ L / n) ) + O( √(ln(1/δ)/n) ),
for t = 0, 1, · · · , T_svd, where E_{M∼P_M}[M_n] is the expected value of M_n under the probability measure P_M and Õ(·) hides log factors.
D OMITTED PROOF FOR FIXED D-MINV IN SECTION 2
D.1 PROOF OF LEMMA 1
We first verify that e_0 < 1. Suppose rank(A) = r. Then
e_0 = ‖AA^† − AX_0‖ = ‖UU^⊤ − (1/c_0) U Σ² U^⊤‖ = ‖I_r − (1/c_0) Σ²‖ < 1,
where A = UΣV^⊤, I_r ∈ R^{r×r} is the identity matrix, and the last inequality comes from the fact that c_0 > (1/2)‖A‖². From Eq. (2), we have
AA^† − AX_{k+1} = AA^†(I − AX_{k+1}) = AA^†( I − AX_k ( ∑_{l=0}^{L−1} E_k^l ) )
= AA^†( I − (I − E_k)( ∑_{l=0}^{L−1} E_k^l ) ) = AA^†( I − ∑_{l=0}^{L−1} E_k^l + ∑_{l=0}^{L−1} E_k^{l+1} ) = AA^†( E_k^L ) = AA^†(I − AX_k)^L = ( AA^† − AX_k )^L,
i.e., ‖AA^† − AX_k‖ = ‖AA^† − AX_{k−1}‖^L = e_0^{L^k}. We finish the proof.
D.2 PROOF OF PROPOSITION 1
Note that for any matrix X, a matrix polynomial of X shares the same column and row spaces as X. In general, the iterative equation (2) does not change the column and row spaces from the beginning. We finish the proof by induction. The conclusion is obvious for X_0 since X_0 = (1/c_0) A^⊤. Assume the proposition holds for X_k; then:
A^† A X_{k+1} = A^† A X_k ( ∑_{l=0}^{L−1} E_k^l ) = X_k ( ∑_{l=0}^{L−1} E_k^l ) = X_{k+1},
and
X_{k+1} A A^† = X_k ( ∑_{l=0}^{L−1} E_k^l ) A A^† = X_k ( ∑_{l=0}^{L−1} (I − AX_k)^l ) A A^† = X_k A A^† ( ∑_{l=0}^{L−1} (I − AX_k)^l ) = X_{k+1}.
Hence, we can conclude that A^† A X_k = X_k and X_k A A^† = X_k for all k. We finish the proof.
E OMITTED PROOF FOR LEARNABLE D-MINV IN SECTIONS 4 AND C
In the theoretical part, we consider a more general training regression loss:
L_n(C) := (1 / 2nK) ∑_{i=1}^{n} ∑_{k=1}^{K} ‖ V_{k,i} ( A_i X_{k,i} A_i − A_i ) ‖_F²,   (18)
where C = {C_{k,l}, (k, l+1) ∈ [K] × [L]} is the collection of all learnable parameters and V_{k,i} is a diagonal matrix whose diagonal entries are i.i.d. and obey a zero-mean bounded distribution (w.l.o.g. we let the bound be ±b̄_v). Note that the V_{k,i}'s are fixed during training and testing. Introducing V_{k,i} has two advantages. One is re-weighting the error term. As we can see from the fixed D-Minv, the error term decays exponentially; in that way, the smallest positive singular value dominates the gradient flow. However, for LD-Minv, we hope that the gradient flow comes from all positive singular values' losses during training. The random re-weighting strategy is a good choice for numerical calculation when the order of the singular values is unknown. Another advantage is that the introduced V_{k,i}'s, with very high probability (see Claim 1 in Sec. E.2), make the landscape of the training loss locally strongly convex, which is an excellent benefit from the optimization perspective.
Notably, by setting the diagonal entries of V{k,i} as the independent symmetric Bernoulli random variables, the training loss here in Eq. (18) will degenerate into the loss given in Eq. (4). Hence, all the theoretical results in this section proven for Eq. (18) also hold for Eq. (4).
It is worth mentioning that the weight matrices {V{k,i}} here has nothing to do with the learned singular vectors {Vi,k} for D-SVD. Please distinguish them.
E.1 GENERAL ANALYSIS
Before starting the proof, we first make some general observations. It is easy to see that Proposition 1 also holds for learnable D-Minv. Thus, we observe that:
X_k E_k^l = X_k A A^† E_k^l,   ∀(k, l).
Hence, we obtain an equivalent form of Eq. (3):
X_{k+1} = X_k ( ∑_{l=0}^{L−1} C_{k,l} Ẽ_k^l ),   Ẽ_k = AA^† − AX_k,   L ≥ 4,   (19)
Due to the invariance of the row and column spaces, we can focus only on the singular value vectors and rewrite Eq. (19) as:
x_{k+1} = x_k ⊙ ( ∑_{l=0}^{L−1} C_{k,l} e_k^l ),   e_k = 1 − a ⊙ x_k,   L ≥ 4,   (20)
where 1 ∈ R^d is the all-one vector (d is the dimension of the positive singular value vector σ_+(A)), and:
a = σ_+(A),   x_k = σ_+(X_k),   e_k = σ_+(Ẽ_k),   e_k^l = e_k ⊙ · · · ⊙ e_k (l times);   (21)
recall that σ(A) denotes the vector of singular values (not necessarily ordered) of A.
Remark 4. We only utilize Eq. (19), the equivalent form of Eq. (3), in this section for the convenience of theoretical analysis. For implementation, we still use Eq. (3) and do NOT calculate the pseudo inverse A† throughout learning.
Now we continue the derivation:
e_{k+1} = 1 − a ⊙ x_{k+1} = 1 − a ⊙ x_k ⊙ ( ∑_{l=0}^{L−1} C_{k,l} e_k^l )
= 1 − (1 − e_k) ⊙ ( ∑_{l=0}^{L−1} C_{k,l} e_k^l ) = 1 − ( ∑_{l=0}^{L−1} C_{k,l} e_k^l − ∑_{l=0}^{L−1} C_{k,l} e_k^{l+1} )
= e_k^L + (1 − C_{k,0}) 1 + ( ∑_{l=1}^{L−1} ( C_{k,l−1} − C_{k,l} ) e_k^l ) + ( C_{k,L−1} − 1 ) e_k^L
= e_k^L + Ê_k c_k,   (22)
where e_k^0 = 1, Ê_k ∈ R^{d×(L+1)} := [ e_k^0; e_k^1; · · · ; e_k^L ],   (23)
and
c_k ∈ R^{L+1} := [ 1 − C_{k,0}, C_{k,0} − C_{k,1}, · · · , C_{k,L−2} − C_{k,L−1}, C_{k,L−1} − 1 ]^⊤.   (24)
We define:
ℓ(C, A) := (1/2) ∑_{k=1}^{K} ‖ V_k ( A X_k A − A ) ‖_F² = (1/2) ∑_{k=1}^{K} ‖ v_k ⊙ ( a² ⊙ x_k − a ) ‖_2² = (1/2) ∑_{k=1}^{K} ‖ v_k ⊙ a ⊙ e_k ‖_2²,   (25)
where C = {C_{k,l}, (k, l+1) ∈ [K] × [L]} is the collection of all learnable parameters for D-Minv, V_k is a diagonal matrix whose diagonal entries v_k = diag(V_k) are i.i.d. and obey a zero-mean bounded distribution (w.l.o.g. we let the bound be ±b̄_v), and ‖·‖_2 is the vector ℓ_2 norm; recall the definitions of a, e_k and x_k from Eq. (21). Note that v_k, ∀k, is fixed during training. From Lemma 1, we know that ‖AA^† − AX_k‖ decays extremely fast. We can run fixed D-Minv several times before training learnable D-Minv; in this case, X_0 is the output of a K̃-layer fixed D-Minv. Hence, without loss of generality (w.l.o.g.), we suppose the following assumption holds in this section.
Assumption 2 (Well-Bounded E_0). Assume ‖Ẽ_0‖ = ‖e_0‖_∞ ≤ 1/2.
We first provide several propositions of learnable D-Minv.
Proposition 4 (Upper Bound for Perturbation). Suppose that |C_{k,l} − 1| = δ < 1/8, ∀(k, l+1) ∈ [K] × [L]; then we have ‖Ẽ_k‖ = ‖e_k‖_∞ ≤ e_0^{L^k} + (17/8)δ, ∀k ∈ [K].
Proof. We first show that ‖e_k‖_∞ < 1/2 for all k by induction. By Eq. (22), for k = 0, we know that:
‖e_1‖_∞ ≤ ‖e_0‖_∞^L + ‖Ê_0 c_0‖_∞ ≤ ‖e_0‖_∞^L + max_{1≤i≤d} ∑_{j=1}^{L+1} |[Ê_0]_{i,j}| ‖c_0‖_∞ ≤ ‖e_0‖_∞^L + 2δ < 1/2.
Now, assume ‖e_k‖_∞ < 1/2 holds. Then,
‖e_{k+1}‖_∞ ≤ ‖e_k‖_∞^L + ‖Ê_k c_k‖_∞ ≤ ‖e_k‖_∞^L + max_{1≤i≤d} ∑_{j=1}^{L+1} |[Ê_k]_{i,j}| ‖c_k‖_∞ ≤ ‖e_k‖_∞^L + 2δ < 1/2.
Since δ < 1/8, we have 2δ < 1/4. By Lemma 3, we can conclude that:
‖e_k‖_∞ ≤ e_0^{L^k} + (1 + ε) 2δ,   where 1/16 > ε → 0 as k → ∞.
We finish the proof.
Proposition 5 (Upper Bound on the First-Order Derivative). Suppose that |C_{k,l} − 1| = δ < 1/8, ∀(k, l+1) ∈ [K] × [L], and ‖a‖_∞ ≤ 1; then we have:
| ∂ℓ(C, A) / ∂C_{k,l} | ≤ 4 (1/3)^l b̄_v² d.
Proof. Note that:
∂ℓ(C, A) / ∂C_{k,l} = ∑_{k̂=k}^{K} ⟨ ∂ℓ(C, A)/∂e_{k̂}, ∂e_{k̂}/∂e_{k̂−1} ∘ · · · ∘ ∂e_{k+1}/∂C_{k,l} ⟩.
By Eq. (22), we also have:
∂e_{k+1}/∂e_k = Diag( L e_k^{L−1} + Ê′_k c_k ),
where Diag(e) is a diagonal matrix with diagonal entries e, and Ê′_k ∈ R^{d×(L+1)} := [ 0; e_k^0; · · · ; L e_k^{L−1} ];
here 0 ∈ R^d is the all-zero vector. Hence, it holds that:
‖ ∂e_{k+1}/∂e_k ‖ = ‖ L e_k^{L−1} + Ê′_k c_k ‖_∞ ≤ L e_k^{L−1} + ( e_k / (1 − e_k)² ) · δ,   (26)
where e_k = ‖e_k‖_∞. By Proposition 4, we know that e_k ≤ e_0^{L^k} + (17/8)δ < 1/3. Thus, we have:
‖ ∂e_{k+1}/∂e_k ‖ ≤ 1/4.   (27)
It is easy to see:
∂ℓ(C, A)/∂e_k = ∑_{k̂=k}^{K} ⟨ ∂ℓ(C, A)/∂e_{k̂}, ∂e_{k̂}/∂e_{k̂−1} ∘ · · · ∘ ∂e_{k+1}/∂e_k ⟩.
Hence, we have
‖ ∂ℓ(C, A)/∂e_k ‖_2 ≤ ∑_{k̂=k}^{K} ‖ v_{k̂} ⊙ a ⊙ ( v_{k̂} ⊙ a ⊙ e_{k̂} ) ‖_2 · 4^{k−k̂} < 2 b̄_v² √d.
According to Eq. (22), we notice:
‖ ∂e_{k+1}/∂C_{k,l} ‖_2 = ‖ e_k^{l+1} − e_k^l ‖_2 ≤ 2 ‖ e_k^l ‖_2.
Combining all of the above, we conclude that:
| ∂ℓ(C, A)/∂C_{k,l} | ≤ ∑_{k̂=k}^{K} 4^{k−k̂} ( 4 b̄_v² √d ‖ e_k^l ‖_2 ) ≤ 4 (1/3)^l b̄_v² d.
We finish the proof.
Proposition 6 (Second-Order Approximation). Define a set of coefficient collections around 1:
C* := { C | ∀ C_{k,l} ∈ C, |C_{k,l} − 1| ≤ 1/8 },   (k, l) ∈ [K] × [L], L ≥ 4.
For any C_1, C_2 ∈ C*, we have:
ℓ(C_1, A) = ℓ(C_2, A) + ⟨ ∂ℓ(C_2, A)/∂C, C_1 − C_2 ⟩ + O( b̄_v² d K L ) ‖C_1 − C_2‖_2².
Proof. Note that for any C ∈ C*, we have:
∂²ℓ(C, A) / (∂C_{k,l} ∂C_{k′,l′})
= ⟨ ∂ℓ(C, A)/∂e_K, ∂²e_K/(∂C_{k,l} ∂C_{k′,l′}) ⟩ + ⟨ ∂e_K/∂C_{k,l}, ∂²ℓ(C, A)/(∂e_K ∂e_K) ∂e_K/∂C_{k′,l′} ⟩
= ⟨ ∂ℓ(C, A)/∂e_K, ∂e_K/∂e_{K−1} ∘ ∂²e_{K−1}/(∂C_{k,l} ∂C_{k′,l′}) ⟩ + ⟨ ∂ℓ(C, A)/∂e_K, ∂²e_K/(∂e_{K−1} ∂e_{K−1}) ( ∂e_{K−1}/∂C_{k,l}, ∂e_{K−1}/∂C_{k′,l′} ) ⟩ + ⟨ ∂e_K/∂C_{k,l}, ∂²ℓ(C, A)/(∂e_K ∂e_K) ∂e_K/∂C_{k′,l′} ⟩
= ∑_{k̂=max{k,k′}}^{K} ⟨ ∂ℓ(C, A)/∂e_K, ∂e_K/∂e_{K−1} ∘ · · · ∘ ∂²e_{k̂+1}/(∂e_{k̂} ∂e_{k̂}) ( ∂e_{k̂}/∂C_{k,l}, ∂e_{k̂}/∂C_{k′,l′} ) ⟩ + Δ(k, k′, l, l′),
where we let e_{K+1} := ℓ(C, A) with a little abuse of notation, and:
Δ(k, k′, l, l′) = 0 if (k, l) = (k′, l′), and Δ(k, k′, l, l′) = ⟨ ∂ℓ(C, A)/∂e_K, ∂e_K/∂e_{K−1} ∘ · · · ∘ ∂(e_k^{l+1} − e_k^l)/∂C_{k′,l′} ⟩ otherwise.
Hence we can upper bound the entry ∂²ℓ(C, A)/(∂C_{k,l} ∂C_{k′,l′}). From the proof of Proposition 5, we know that:
‖ ∂e_{k+1}/∂e_k ‖ ≤ 1/4,   ‖ ∂e_{k̂}/∂C_{k,l} ‖_1 ≤ d (1/4)^{k̂−k+l},   and   ‖ ∂e_{k̂}/∂C_{k,l} ‖_2 ≤ 2√d (1/4)^{k̂−k+l}.
Note that: ∂²e_{k+1}/(∂e_k ∂e_k) = TDiag
Review
Summary
The authors propose an iterative method for matrix inversion. Given a large dataset of matrices that should be inverted, the authors then propose to learn some parameters of the iterative algorithm, in order to have faster matrix inversion. Since this iterative method is polynomial, it allows to differentiate the matrix inversion. The authors then use their method to differentiate Riemannian gradient descent on the Stiefel manifold with the Cayley transform, which is finally used to derive a differentiable singular value decomposition.
Major comments
The article is cluttered, and is full of small but annoying inconsistencies (see minor comments). I found this article much harder to understand than it should be.
The main selling point of the authors is that matrix inversion is supposedly not differentiable, but this is not true: the differential of X^{−1} is −X^{−1} dX X^{−1}, which is not especially more costly to compute than the inverse itself (it only requires two more matrix multiplications). This operation is implemented efficiently e.g. in Pytorch and Tensorflow.
The authors claim that their algorithms are efficient, but they do not provide numerical experiments comparing their method to the very wide and well-established literature about numerical matrix inversion.
The authors do not provide code for their novel algorithm, so I reimplemented their method for matrix inversion. On my computer, using pytorch, with matrices of size 200 x 200, with L = 4, one iteration of D-Minv (k=1) takes about as much time as doing torch.inverse(A), so even with the best training possible I do not see how this can be competitive with usual linear algebra algorithms.
The addition of the training parameters in D-Minv to make it learnable seems ad-hoc. In figure 1, it seems that there is barely any gain in learning these coefficients. (In contrast e.g. with the method of Gregor & Lecun for unfolded sparse coding, where we usually see a ~ tenfold acceleration)
Minor comments
It would be worthwhile to provide pseudo code implementation of the D-Minv algorithm
Is there a particular reason to assume L >=4?
The article starts by assuming that the matrix $A$ is square, and then in Lemma 1 the matrices suddenly become non-square, and the notion of matrix inverse becomes matrix pseudo-inverse.
In Lemma 1, the authors show that $\|AA^{\dagger} - AX\|$ goes to $0$. This result, by itself, does not show that $X$ converges to $A^{\dagger}$, as the subsequent result in Prop. 1 is needed to show it. Further, what we are really interested in is $\|A^{\dagger} - X\|$, for which the authors do not provide a bound.
In the beginning of Sec. 2.2, the authors mention that $\|\cdot\|_F$ is the Frobenius norm, but it is not used anywhere near this sentence.
The statement about the exponentially fast convergence of $\|X_K - A^{\dagger}\|$ to $0$ is not precise.
The authors do not provide a practical example where it would be useful to learn how to invert some matrices.
After discussion 1., the authors mention the non-convexity of the problem, but this is before the authors even introduce the corresponding optimization problem.
The cost function (4) sums over $k$; why not only optimize w.r.t. the output of the algorithm?
The appendix is a collection of diverse results, figures, and notation, and is very hard to read. |
ICLR | Title
Fast and Differentiable Matrix Inverse and Its Extension to SVD
Abstract
Matrix inverse (Minv) and singular value decomposition (SVD) are among the most widely used matrix operations in massive data analysis, machine learning, and statistics. Although well-studied, they still encounter difficulties in practical use due to inefficiency and non-differentiability. In this paper, we aim at solving efficiency and differentiability issues through learning-based methods. First of all, to perform matrix inverse, we provide a differentiable yet efficient way, named LD-Minv, which is a learnable deep neural network (DNN) with each layer being an L-th order matrix polynomial. We show that, with proper initialization, the difference between LD-Minv’s output and the exact pseudo-inverse is in the order $O(\exp\{-L^K\})$, where K is the depth of the LD-Minv. Moreover, by learning from data, LD-Minv further reduces the difference between the output and the exact pseudo-inverse. We prove that gradient descent finds an $\epsilon$-error minimum within $O(nKL\log(1/\epsilon))$ steps for LD-Minv, where n is the data size. At last, we provide the generalization bound for LD-Minv in both under-parameterized and over-parameterized settings. As an application of LD-Minv, we provide a learning-based optimization method to solve the problem with orthogonality constraints and utilize it to differentiate SVD (D-SVD). We also provide a theoretical generalization guarantee for D-SVD. Finally, we demonstrate the superiority of our methods on the synthetic and real data in the supplementary materials.
1 INTRODUCTION
Matrix inverse (including matrix pseudo-inverse) and singular value decomposition are fundamental linear algebra operations, ubiquitous in machine learning, statistics, signal processing, and other fields. In general, solving scientific computing or optimization problems often need to perform these two operators, such as matrix (pseudo) inverse for least squares regression, singular value decomposition (SVD) for dimensionality reduction (PCA), the low-rank related problems (Liu et al., 2010; Zhang et al., 2018; Liu et al., 2013), the graph-based issues (Wu et al., 2020a; Von Luxburg, 2007), and even for training deep neural networks (DNNs) with structured layers (Ionescu et al., 2015). Nevertheless, Minv and SVD appear less and less often in the modern machine learning tasks. One reason is inefficiency. Computing the SVD and the Minv can be extremely time-consuming for large-scale problems; however, efficiency is a significant concern in the current big data and deep learning era. Besides, non-differentiability is considered another reason that blocks the use of SVD and Minv. Usually, most prevalent methods for training DNNs are first-order and are based on the backpropagation; however, Minv and SVD are not necessarily continuous functions of the matrix entries (Stewart, 1969). Therefore, derivatives are not always existent. Although Minv and SVD may be backprop-able due to some specific implementation, they are unstable and are essentially nondifferentiable. Thus the gradients cannot pass through them when backpropagating. Considering the above problems, one natural question emerges.
Does there exist an efficient and differentiable way to perform Minv and SVD?
Over the last decade, many sketches-based methods have been developed, e.g., Nelson & Nguyen (2013); Meng & Mahoney (2013); Cohen et al. (2015); Indyk et al. (2019). The main idea of the sketches-based techniques is to use random projections, which is efficiently computable, to reduce the problem size before performing SVD and Minv. However, they do not solve the problem of non-differentiability as smaller-sized SVD and Minv still needs to be computed.
Recently, “differentiable learning-based” methods (D-LbM) have attracted considerable attention (Chen et al., 2018; Indyk et al., 2019; Liu et al., 2019; Wu et al., 2020b; Xie et al., 2019). These methods usually unroll the classical optimization or numerical iterative algorithms and introduce the learnable parameters to obtain a learnable DNN. In general, the iterative algorithms inspired DNNs consist of differentiable operators such as matrix polynomials. Benefiting from training on the data, D-LbMs can execute in much fewer iterations with similar per-iteration cost as original algorithms but obtain much better performance. Many empirical results, e.g., Gregor & LeCun (2010); Yang et al. (2016); Peng et al. (2018), show that a well-trained DNN provided by D-LbM—compared with the original optimization algorithm—can obtain an almost equally good solution using one or even two order-of-magnitude fewer iterations. Based on these observations, we aim to find a learning-based iterative way to differentiate Minv and SVD in this paper.
However, all these D-LbMs suffer from two common problems. One is the convergence guarantee. The forward process of D-LbMs may diverge, even if the original unrolled iterative algorithm has a well-behaved convergence guarantee. In fact, most D-LbM methods have little or no convergence guarantees. To the best of our knowledge, there is no work to reveal the behavior of D-LbM during training. How does the training loss decrease? How big is the gap between the output of D-LbM and that of the original algorithm? All these questions are still open. Another problem is, both for D-LbM and the original algorithm, a limited theory exists when the input data obey a common distribution (e.g., data drawn from a low-dimensional manifold). Essentially, by learning from data, D-LbMs can obtain a problem-dependent parameter bias. It can help D-LbMs get the better performance on a specific data distribution within much fewer computation cost, instead of simply fixing the parameter ahead of time like traditional iterative methods. But there is no mathematical result to describe this phenomenon strictly. Moreover, it is unknown whether the trained D-LbMs generalize well on data from the same distribution.
Remarkably, in this paper we provide a learnable, differentiable and efficient way to perform Minv, named Learnable Differentiable Matrix Inverse (LD-Minv), and solve the above two problems by our proposed LD-Minv. First of all, LD-Minv is a DNN with each layer being an L-th order matrix polynomial, and the coefficient of each power is learnable. We show that LD-Minv converges to the Minv or pseudo-inverse with order L if the coefficients of the polynomial are set properly. Namely, the difference between the output of the DNN and the exact pseudo-inverse is in the order $O(\exp\{-L^K\})$, where K is the depth of LD-Minv. Secondly, by learning from data, LD-Minv can further improve the precision on some specific data distribution. Namely, the distance between the output and the exact pseudo-inverse can be arbitrarily small. Specifically, the training loss converges to zero exponentially fast, i.e., gradient descent (GD) finds an $\epsilon$-error minimum in $\ell_2$ regression using at most $O(nKL\log(1/\epsilon))$ iterations, where n is the data size. Finally, we provide the generalization bound for LD-Minv in both under-parameterized and over-parameterized settings.
With LD-Minv at hand, as a direct application, we propose a learning-based optimization to solve any convex problems with non-convex orthogonality constraints. Then we use it to differentiate SVD. Note that we also provide a generalization guarantee for our D-SVD. Unlike the previous work on differentiable SVD (Indyk et al., 2019), which is based on the power method and needs an assumption on the gap between singular values to ensure convergence, our method can converge to the solution without any gap assumption. In summary, our main contributions include:
• We propose a differentiable method, LD-Minv, to perform matrix inverse efficiently. LD-Minv, which takes the form of a DNN with each layer being a learnable L-th order matrix polynomial, would be useful for accelerating and differentiating general matrix decompositions. We show that LD-Minv can converge to the matrix pseudo-inverse with order L.
• By learning, LD-Minv can further improve the approximation performance on some underlying data distribution. We prove that GD finds an $\epsilon$-error minimum in $\ell_2$ regression using at most $O(nKL\log(1/\epsilon))$ iterations under mild conditions. Moreover, we also provide the generalization bound for LD-Minv. We reveal that the empirical Rademacher complexity of the loss function class is bounded by $\tilde{O}\!\left(\min\left\{ d^3 L / K^{1/2},\ \sqrt{d^3 K / n} \right\}\right)$, where $\tilde{O}(\cdot)$ hides log factors, and K and d are the depth and the width of LD-Minv, respectively.
• As a direct application, we further provide a learning-based general framework to solve the problems with orthogonality constraints. This D-LbM helps us to differentiate SVD. Finally, we also provide a generalization guarantee for our D-SVD.
2 DIFFERENTIABLE MATRIX INVERSE
We introduce the proposed D-Minv in this section. We first present an intuitive idea to show how to approximate the matrix inverse iteratively. Then we generalize the iterative method to obtain our fixed and learnable D-Minv, separately
2.1 FIXED DIFFERENTIABLE MATRIX INVERSE
Given a matrix A ∈ Rd×d, we want to approximate its inverse A−1 by the matrix X ∈ Rd×d, i.e., AX ≈ I, where I ∈ Rd×d is the identity matrix. Ideally, X is the fixed point of the following equation:
$$X = X (AX)^{-1} = X \left( \sum_{l=0}^{\infty} (I - AX)^{l} \right), \qquad (1)$$
where the last equality follows from the Neumann series when $AX \approx I$ and $(I - AX)^{l}$ is the $l$-th order matrix polynomial.
Inspired by the above iterative process, it is obvious that we can obtain the matrix X iteratively. Hence, we consider the following higher-order iterative method, which is an L-th order matrix polynomial in each iteration, where L ≥ 4:
$$X_{k+1} = X_k \left( \sum_{l=0}^{L-1} E_k^{l} \right), \qquad E_k = I - A X_k. \qquad (2)$$
Note that DNNs can easily implement this method: one layer corresponds to one iterative step. Since there are no parameters to learn in Eq. (2), and it only involves matrix multiplication (thus differentiable), we name it fixed D-Minv¹. One may doubt that the computational cost of each iteration is high in Eq. (2). However, as shown in Lemma 1, a higher-order polynomial usually implies a faster convergence rate. For obtaining the same precision with different L, the total computational cost is in the order of $O(L/\ln(L))$, which is a very slow growth rate w.r.t. L. Although simple, fixed D-Minv converges to the inverse $A^{-1}$ or $A^{\dagger}$ extremely fast. Lemma 1 (Approximation Speed). The sequence $\{X_k \in \mathbb{R}^{d_1 \times d_2}\}$ generated by Eq. (2) converges to the Moore–Penrose inverse $A^{\dagger}$ with order L provided that $X_0 = \frac{1}{c_0} A^{\top}$, $c_0 > \frac{1}{2}\|A\|^2$, i.e., we can conclude: $\|AA^{\dagger} - AX_k\| = e_0^{L^k}$, in which $e_0 = \|AA^{\dagger} - AX_0\| < 1$, where $\|\cdot\|$ is the spectral norm.
We can see that fixed D-Minv converges with order L, i.e., a shallow D-Minv can approximate the matrix inverse very well. Moreover, provided with the X0 given in Lemma 1, the column space and the row space of the matrix Xk are exactly correct from the beginning. Proposition 1 (Invariant Column and Row Spaces). With the same setting in Lemma 1, for any k ≥ 0, it holds that:
$$X_k A A^{\dagger} = X_k, \qquad A^{\dagger} A X_k = X_k.$$
By Proposition 1, it is easy to see that the zero singular values remain unchanged during computing, which indicates that D-Minv works consistently on the full and rank-deficient square and rectangle matrices. Note that this proposition also holds for the upcoming LD-Minv.
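For concreteness, a minimal sketch of the fixed D-Minv iteration of Eq. (2) with the initialization of Lemma 1; the function name, sizes, and the particular choice of c0 are our own illustrative assumptions, not an official implementation:

import torch

def fixed_d_minv(A: torch.Tensor, K: int = 7, L: int = 4) -> torch.Tensor:
    # Approximate the Moore-Penrose inverse of A with K iterations of Eq. (2).
    c0 = 1.01 * torch.linalg.matrix_norm(A, ord=2) ** 2   # c0 > ||A||^2 / 2, as in Lemma 1
    X = A.T / c0                                          # X0 = A^T / c0
    I = torch.eye(A.shape[0], dtype=A.dtype)
    for _ in range(K):
        E = I - A @ X
        S = I.clone()                                     # Horner evaluation of sum_{l=0}^{L-1} E^l
        for _ in range(L - 1):
            S = I + E @ S
        X = X @ S                                         # Eq. (2): X_{k+1} = X_k * S
    return X

A = torch.randn(100, 150, dtype=torch.float64) / 10.0
X = fixed_d_minv(A)
print(torch.linalg.matrix_norm(A @ X @ A - A))            # close to machine precision

Only matrix multiplications appear, so the whole computation is differentiable and maps directly onto a feed-forward network with one layer per iteration.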
2.2 LEARNABLE DIFFERENTIABLE MATRIX INVERSE
We consider the learnable D-Minv (LD-Minv) with the following iterative formula:
$$X_{k+1} = X_k \left( \sum_{l=0}^{L-1} C_{\{k,l\}} E_k^{l} \right), \qquad E_k = I - A X_k, \quad L \ge 4, \qquad (3)$$
1Higher-order method Eq. (2) is already well-known in the applied mathematics community in another equivalent form, see Climent et al. (2001); Amat et al. (2003); Li et al. (2011).
where (k, l+ 1) ∈ [K]× [L], C{k,l} ∈ R are learnable coefficients, and ‖·‖F is the Frobenius norm. LD-Minv is also in the category of LbMs that unroll the conventional numerical iteration methods. As discussed in the introduction, two problems, i.e., no convergence analysis for the training procedure and no generalization guarantee for the trained DNN, block the further theoretical exploration of these LbMs. However, in contrast to the previous works, we obtain favorable theoretical results on both of the two problems for our LD-Minv. First, although fixed D-Minv already converges extremely fast, LD-Minv can still perform much better on the training data under mild conditions. Specifically,
$\|X_K - A^{\dagger}\|$ converges to zero exponentially fast during training, i.e., GD finds an $\epsilon$-error minimum in $\ell_2$ regression using at most $O(nKL\log(1/\epsilon))$ iterations². Second, LD-Minv has a tight generalization bound to guarantee the performance on the unseen matrix when it comes from the same distribution as the training instances. We provide the rigorous theoretical results in Sec. 4.
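As a sketch, Eq. (3) can be unrolled into a small PyTorch module with one learnable coefficient per (layer, power) pair; the class name, initialization scale, and default K, L below are our own illustrative choices, not the authors' released code:

import torch
import torch.nn as nn

class LDMinv(nn.Module):
    # Learnable D-Minv (Eq. (3)): one coefficient C_{k,l} per layer k and power l.
    def __init__(self, K: int = 6, L: int = 4, init_scale: float = 1e-2):
        super().__init__()
        # Initialize near the all-ones "oracle" coefficients, in the spirit of Algorithm 1.
        self.C = nn.Parameter(1.0 + init_scale * torch.randn(K, L))

    def forward(self, A: torch.Tensor) -> torch.Tensor:
        c0 = 1.01 * torch.linalg.matrix_norm(A, ord=2) ** 2
        X = A.T / c0                        # X0 from Lemma 1
        I = torch.eye(A.shape[0], dtype=A.dtype)
        K, L = self.C.shape
        for k in range(K):
            E = I - A @ X
            S, P = self.C[k, 0] * I, I
            for l in range(1, L):
                P = P @ E                   # P = E^l
                S = S + self.C[k, l] * P    # S = sum_l C_{k,l} E^l
            X = X @ S                       # Eq. (3): X_{k+1} = X_k S
        return X

Since every layer is a matrix polynomial, gradients with respect to the coefficients C flow through the unrolled network by ordinary backpropagation.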
Discussion 1. One can find that the final iterate XK can be written as matrix polynomial of A, and may argue that approximating the Minv by the polynomial is well-studied. Zero training can be obtained when the polynomial’s order is large enough, and the approximation error controls the generalization error. For the sake of distinction, we make three clarifications. First of all, most traditional results only describe the behavior in the worst case, which holds for any input matrix and may easily fail in some extreme cases. However, LD-Minv focuses on a more realistic situation, i.e., the input matrices obey some common distribution, such as the affinity matrices of Facebook users or the sampled CT and MRI images. How to learn from the data and design a more efficient method is still open, not to mention the theoretical generalization guarantee on specific data distribution. Secondly, the traditional polynomial approximation method is susceptible to the polynomial order and is very easy to over-fitting when the order is over-parameterized, say order nd. Without precise data distribution density, it is impossible to choose the proper order to ensure zero training and guarantee the small generalization error, simultaneously. To solve this, we may need some complex and data-driven regularization strategy. In sharp contrast, LD-Minv is robust to the order, and our theoretical results (i.e., zero training and small generalization error) hold well from underparameterized to over-parameterized cases. Thirdly, the exact polynomial fitting can only imply the existence of zero training solution, while our results describe the convergence behavior (i.e., which solution we will choose) . Obviously, there exists a big gap between the convergence of training and the existence of a solution.
Note that we have also considered a much simpler matrix polynomial: $X_K = \sum_{l=0}^{L} C_l A^{l}$. Learning the coefficients $\{C_l\}_{l=1}^{L}$ becomes an easy-to-solve convex regression problem, unlike the non-convex one in Eq. (3). Unfortunately, we failed due to some stability issues, e.g., the coefficients of different powers vary significantly, which causes poor generalization performance.
Discussion 2. Most matrix iterative methods suffer from the ill-condition problem, which usually brings instability and makes the analysis and calculation hard to carry on, e.g., for D-Minv, large condition number may significantly slow the convergence speed (see Lemma 1). However, learningbased method can greatly alleviate this ill-condition problem by learning from data. LD-Minv can beat D-Minv easily when the condition number of the input matrix is large and the learned coefficients for LD-Minv are also valid on the unseen data. See the experimental part for more details.
2.3 TRAINING SETTINGS
We now describe some training settings for D-Minv. Denote by {Ai}ni=1 the training set consisting of n samples. Let X{k,i} ∈ Rd1×d2 be the output of the k-th layer on the i-th training sample Ai ∈ Rd2×d1 . We consider a typical regression loss:
$$\mathcal{L}_n(\mathbf{C}) := \frac{1}{2nK} \sum_{i=1}^{n} \sum_{k=1}^{K} \left\| A_i X_{\{k,i\}} A_i - A_i \right\|_F^2, \qquad (4)$$
where $\mathbf{C} = \{C_{\{k,l\}},\ (k, l+1) \in [K] \times [L]\}$ is the collection of all learnable parameters. Note that when $X = A^{\dagger}$, the properties of the Moore–Penrose inverse indicate the equation $AXA = A$, which further indicates the equation $XAX = X$ in our invariant space case; see Proposition 1. Therefore, we only use the difference term $(AXA - A)$ in the training loss.
2Please distinguish the two “convergence” here. One describes Xk approaching A† w.r.t. k and L, and the other refers to the behavior of LD-Minv’s loss w.r.t. training iteration.
Algorithm 1 Gradient descent (GD) with proper initialization for LD-Minv
Input: Training data $\{A_i\}_{i=1}^{n}$, number of iterations $T$, step size $\eta$.
1: Generate each $C_{\{k,l\}} \in \mathbf{C}$ such that $|C_{\{k,l\}} - 1| = O(K^{-\alpha})$, where $\alpha > \frac{1}{4}$.
2: Set $X_{\{0,i\}} = \frac{1}{c_0} A_i^{\top}$ or $\tilde{X}_i$, where $c_0 > \frac{1}{2}\|A_i\|^2$ and $\tilde{X}_i$ is the output of a fixed D-Minv.
3: for $t = 1, \cdots, T$ do
4:    Update $\mathbf{C}^{(t+1)} = \mathbf{C}^{(t)} - \eta \cdot \mathrm{D}_{[\mathbf{C}=\mathbf{C}^{(t)}]} \mathcal{L}_n$.
5: end for
Output: $\mathbf{C}^{(0)}, \ldots, \mathbf{C}^{(T)}$.
Thanks to the differentiability of LD-Minv, we adopt GD to minimize the training loss to obtain the proper coefficients $\mathbf{C}$. We present the initialization strategy and training process in Algorithm 1, where the first derivative of the differentiable function $\mathcal{L}_n(\mathbf{C}) : \mathbb{R}^{K \times L} \to \mathbb{R}$ at $\mathbf{C}^{(t)}$ is denoted by $\mathrm{D}_{[\mathbf{C}=\mathbf{C}^{(t)}]}\mathcal{L}_n(\cdot) := \left( \partial \mathcal{L}_n(\mathbf{C}^{(t)}) / \partial \mathbf{C} \right) \in \mathbb{R}^{K \times L}$. When K is large, one may notice that our
coefficient is close to the oracle value 1, which may provide a relatively good initial loss L ( C(0) ) . Notably, this does NOT mean that we will treat the learning and initialization as a perturbation of the fixed D-Minv and bound the loss by the good-enough fixed D-Minv’s performance plus the perturbation error. On the contrary, as we will show in theoretical results part, learnability indeed matters, and LD-Minv can obtain better performance than the fixed one by training. The real purpose of this initialization is to take advantage of D-Minv’s local Lipschitz continuity near C = 1, where 1 is the all one vector. The continuity reduces the complexity of optimization and will benefit the generalization of D-Minv.
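Putting Eq. (4) and Algorithm 1 together, a minimal end-to-end training sketch; the data distribution, sizes, step size, and iteration counts are illustrative assumptions, and plain fixed-step GD over the whole training set is used, as in Algorithm 1:

import torch

def ld_minv_layers(A, C):
    # Unrolled LD-Minv (Eq. (3)); returns the per-layer iterates X_1, ..., X_K.
    c0 = 1.01 * torch.linalg.matrix_norm(A, ord=2) ** 2
    X = A.T / c0
    I = torch.eye(A.shape[0], dtype=A.dtype)
    outs = []
    for k in range(C.shape[0]):
        E = I - A @ X
        S, P = C[k, 0] * I, I
        for l in range(1, C.shape[1]):
            P = P @ E
            S = S + C[k, l] * P
        X = X @ S
        outs.append(X)
    return outs

torch.manual_seed(0)
n, d, K, L = 20, 50, 6, 4
As = [torch.randn(d, d, dtype=torch.float64) / d ** 0.5 for _ in range(n)]
C = (1.0 + 1e-2 * torch.randn(K, L, dtype=torch.float64)).requires_grad_(True)
opt = torch.optim.SGD([C], lr=1e-4)       # fixed-step GD on the full batch
for step in range(200):
    loss = 0.0
    for A in As:
        for X in ld_minv_layers(A, C):
            loss = loss + 0.5 * torch.linalg.matrix_norm(A @ X @ A - A) ** 2
    loss = loss / (n * K)                 # the loss of Eq. (4)
    opt.zero_grad()
    loss.backward()
    opt.step()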
3 LEARNING-BASED OPTIMIZATION WITH ORTHOGONALITY CONSTRAINTS
Consider the following general orthogonality-constrained optimization problem:
$$\min_{U}\ f(U), \qquad \text{s.t.}\ U^{\top} U = I, \qquad (5)$$
where the objective function f(U) : Rm×r → R is differentiable and convex. The usual way to solve Eq. (5) is to perform the manifold GD on the Stiefel manifold, which evolves along the manifold geodesics. Specifically, manifold GD first updates the variable in the manifold tangent space along the objective function’s projected gradient. Then, map the updated variable in the tangent space to a feasible point on the geodesic, and repeat these two steps until convergence (Edelman et al., 1998). Usually, the mapping step is non-differentiable and inefficient. Fortunately, the work (Wen & Yin, 2013; Ren & Lin, 2013) develops a technique to solve the optimization problem with orthogonal constrains approximately, which only involves matrix multiplication and inverse.
We let $U \in \mathbb{R}^{m \times r}$, where $r$ is the Stiefel manifold’s dimension. Denote by $G \in \mathbb{R}^{m \times r}$ the gradient of the objective function $f(U)$ in Eq. (5) w.r.t. $U$ at $U_k$; then the projection of $G$ onto the tangent space of the Stiefel manifold at $U_k$ is $P U_k$³, where $P = G U_k^{\top} - U_k G^{\top}$ and $P \in \mathbb{R}^{m \times m}$. Instead of parameterizing the geodesic of the Stiefel manifold along direction $P$ using the exponential function, we generate the feasible points to update $U_k$ by the following Cayley transform (Wen & Yin, 2013):
$$U_{k+1} = U(t) = C(t)\, U_k, \quad \text{where } C(t) = \left( I + \frac{t}{2} P \right)^{-1} \left( I - \frac{t}{2} P \right), \qquad (6)$$
where $I$ is the identity matrix and $t \in \mathbb{R}$ is the step size used for updating the current $U_k$. In other words, $U(t)$ is a re-parameterized local geodesic w.r.t. $t$ on the Stiefel manifold. One can easily verify that $U(t)$ has the following properties, given $U_k^{\top} U_k = I$: (1) $\frac{\mathrm{d}}{\mathrm{d}t} U(0) = -P U_k$, (2) $U(t)$ is smooth in $t$, (3) $U(0) = U_k$, (4) $U(t)^{\top} U(t) = I$, $\forall t \in \mathbb{R}$. It is evident that, if $t$ is in a proper range, $U_{k+1}$ can lead to a lower objective function value than $U(0) = U_k$ on the Stiefel manifold. Besides, computing $U_{k+1}$ only involves matrix multiplication and matrix inverse, which can be easily performed by our LD-Minv in Eq. (3). Therefore, we let $C(t) = \text{LD-Minv}(I + tP/2)$ in Eq. (6). Then we can obtain a learning-based optimization method for the problem with orthogonality constraints in Eq. (5).
3We choose the canonical metric on the tangent space as the equipped Riemannian metric.
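As an illustration, a minimal sketch of one Cayley-transform update for problem (5); here the inverse is computed with an exact linear solve, which is precisely the operation the paper proposes to replace with LD-Minv, and the toy objective, sizes, and function names are our own assumptions:

import torch

def cayley_step(U: torch.Tensor, G: torch.Tensor, t: float) -> torch.Tensor:
    # One feasible update on the Stiefel manifold via the Cayley transform (Eq. (6)).
    # U: current point with U^T U = I; G: Euclidean gradient of f at U.
    P = G @ U.T - U @ G.T                          # skew-symmetric direction
    I = torch.eye(U.shape[0], dtype=U.dtype)
    # (I + t/2 P)^{-1} (I - t/2 P) U, done here with an exact solve.
    return torch.linalg.solve(I + 0.5 * t * P, (I - 0.5 * t * P) @ U)

torch.manual_seed(0)
m, r = 8, 3
U, _ = torch.linalg.qr(torch.randn(m, r, dtype=torch.float64))
M = torch.randn(m, m, dtype=torch.float64)
M = 0.5 * (M + M.T)                                # symmetric, so the gradient of -<U, MU> is -2MU
for _ in range(50):
    U = cayley_step(U, G=-2.0 * M @ U, t=0.1)
print(torch.linalg.matrix_norm(U.T @ U - torch.eye(r, dtype=torch.float64)))  # stays near 0

Because P is skew-symmetric, the update stays exactly on the Stiefel manifold for any step size, which is the property that makes this transform attractive here.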
3.1 APPLICATION: DIFFERENTIABLE SVD BY LD-MINV
In this part, we show how to utilize the previous general learning-based optimization framework to differentiate SVD. Given an arbitrary matrix M ∈ Rm×n̂ and its skinny SVD M = ŨS̃Ṽ>, the Von Neumann’s trace inequality, 〈M,X〉 ≤ ∑ i σi(M)σi(X), implies that {Ũ ∈ Rm×r, Ṽ ∈ Rn̂×r} is an optimal solution of the following optimization problem:
$$\min_{U, V}\ f(U, V) := \left( \frac{1}{2} \left\| D_c(\Lambda) \right\|_F^2 - \left\langle M,\, U V^{\top} \right\rangle \right), \qquad \text{s.t.}\ U^{\top} U = V^{\top} V = I, \qquad (7)$$
where Λ := U>MV, Diag(·) returns the collection of the diagonal entries of the input matrix, and Dc(Λ) := (Id−Diag)(Λ) := Λ−Diag (Λ) for any input matrix Λ. Note that solving the problem in Eq. (7) is equivalent to performing SVD. Let {M,U0,V0} be the inputs of our Differentiable SVD (D-SVD). We adopt the Cayley transform in Eq. (6) to solve the problem in Eq. (7), and replace the matrix inverse in it by our proposed LD-Minv in Eq. (3). W.l.o.g, in the following, we only focus on the updating strategy of U due to the similarity between U and V. We first compute the gradient of the objective function f(U,V) w.r.t. U at {Uk,Vk}, which we shall denote by G = MVk ( Dc ( V>k M >Uk ) − I ) . Then we find a geodesic curve U(t) along the gradient on the Stiefel manifold for updating Uk. Referring to Eq. (6), the curve is U(t) = ( I + t2P )−1( I− t2P ) Uk, where P = GU>k −UkG>. Given the step size t, we can use
LD-Minv to perform the matrix inverse on the factor $\left(I + \frac{t}{2}P\right)$. In summary, D-SVD consists of two steps: (1) find a proper $t$; (2) update $\{U_k, V_k\}$ by the Cayley transform. For finding a proper step size $t$, we consider the following problem:
$$t_U^* = \operatorname*{argmin}_{0 \le t \le \varepsilon}\ f(t) := f(U(t), V_k), \qquad (8)$$
where $\varepsilon$ is a given parameter to ensure a small enough magnitude of $t_U^*$. Notice that if $t$ is small enough so that $\left\| \frac{t}{2} P \right\| < 1$, then we have $\left(I + \frac{t}{2}P\right)^{-1} = I + \sum_{l=1}^{\infty} \left( -\frac{t}{2}P \right)^{l}$, which implies that $U(t) = \left( I + 2 \sum_{l=1}^{\infty} \left( -\frac{t}{2}P \right)^{l} \right) U_k$. Considering that $t^*$ in Eq. (8) is small, we can approximate
f(t) via its second order Taylor expansion at t = 0:
$$f(t) = f(0) + f'(0) \cdot t + \frac{1}{2} f''(0) \cdot t^2 + O\!\left(t^2\right), \qquad (9)$$
where f ′(0) and f ′′(0) are the first and the second order derivatives of f(t) evaluated at 0, respectively. These two derivatives have closed form and can be computed efficiently. Consequently, we can obtain an approximated optimal solution t∗ via:
$$t_U^* = \min\left\{ \varepsilon,\ \tilde{t} \right\}, \quad \text{where } \varepsilon < 2/\|P\| \ \text{ and } \ \tilde{t} = -f'(0)/f''(0), \qquad (10)$$
provided that:
$$\begin{cases}
f'(0) = \left\langle D_c\!\left(U_k^{\top} M V_k\right), D_c\!\left(U_k^{\top} P M V_k\right) \right\rangle + \left\langle M V_k,\, P U_k \right\rangle, \\
f''(0) = \left\| D_c\!\left(U_k^{\top} P M V_k\right) \right\|_F^2 + \left\langle D_c\!\left(U_k^{\top} M V_k\right), D_c\!\left(U_k^{\top} P^2 M V_k\right) \right\rangle - \left\langle M V_k,\, P^2 U_k \right\rangle.
\end{cases} \qquad (11)$$
Then we can update U by U(t∗U ), i.e., Uk+1 = U(t ∗ U ). Thanks to the cyclic property of trace operator, Vk+1 shares a similar update strategy with Uk+1.
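For illustration, a minimal sketch that evaluates the closed-form derivatives of Eq. (11) and the step-size rule of Eq. (10) as stated; the variable and function names are ours, and the learnable offsets of Eq. (12) below are omitted:

import torch

def off_diag(L: torch.Tensor) -> torch.Tensor:
    # D_c(Lambda) := Lambda - Diag(Lambda): zero out the diagonal.
    return L - torch.diag_embed(torch.diagonal(L))

def step_size_for_U(M: torch.Tensor, U: torch.Tensor, V: torch.Tensor) -> float:
    # Approximate optimal step t*_U for updating U, following Eqs. (10)-(11).
    r = U.shape[1]
    G = M @ V @ (off_diag(V.T @ M.T @ U) - torch.eye(r, dtype=M.dtype))   # gradient w.r.t. U
    P = G @ U.T - U @ G.T
    Lam  = off_diag(U.T @ M @ V)
    Lam1 = off_diag(U.T @ P @ M @ V)
    Lam2 = off_diag(U.T @ P @ P @ M @ V)
    f1 = (Lam * Lam1).sum() + (M @ V * (P @ U)).sum()                              # f'(0)
    f2 = (Lam1 ** 2).sum() + (Lam * Lam2).sum() - (M @ V * (P @ P @ U)).sum()      # f''(0)
    eps = 1.9 / torch.linalg.matrix_norm(P, ord=2)                                 # eps < 2 / ||P||
    return float(torch.minimum(eps, -f1 / f2))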
Provided the input $\{M, U_k, V_k\}$, the complete iterative steps for our D-SVD are as follows⁴:
$$\begin{cases}
G_U = M V_k \left( D_c\!\left(V_k^{\top} M^{\top} U_k\right) - I \right), \qquad P_U = G_U U_k^{\top} - U_k G_U^{\top}, \\
t_U^* = \min\left\{ \frac{2}{\|P_U\|},\ -\frac{f'(U(t), V_k)}{f''(U(t), V_k)} \right\} + \tilde{t}_{U_k}, \qquad U_{k+1} = \mathrm{Cayley}\left(t_U^*, \mathbf{C}_{U_k}, P_U, U_k\right), \\
G_V = M^{\top} U_{k+1} \left( D_c\!\left(U_{k+1}^{\top} M V_k\right) - I \right), \qquad P_V = G_V V_k^{\top} - V_k G_V^{\top}, \\
t_V^* = \min\left\{ \frac{2}{\|P_V\|},\ -\frac{f'(U_{k+1}, V(t))}{f''(U_{k+1}, V(t))} \right\} + \tilde{t}_{V_k}, \qquad V_{k+1} = \mathrm{Cayley}\left(t_V^*, \mathbf{C}_{V_k}, P_V, V_k\right),
\end{cases} \qquad (12)$$
⁴For convenience and clear writing, we omit the superscripts in the updating rules.
Algorithm 2 Forward Propagation for D-SVD
Input: Training data $\{M_i, U_{i,0}, V_{i,0}\}_{i=1}^{n}$, depth $K_{\mathrm{svd}}$ of D-SVD, number of iterations $T_{\mathrm{inv}}$, step size $\eta_{\mathrm{inv}}$ and D-SVD’s current parameters $\tilde{t}$.
1: for $k = 0, \ldots, K_{\mathrm{svd}}$ do
2:    Calculate $P_{U,i}$ and $t_{U,i}^*$ with $\{M_i, U_{i,k}, V_{i,k}\}_{i=1}^{n}$ by Eq. (12).
3:    Train LD-Minv with parameters $\mathbf{C}_{U_k}$ by Algorithm 1 with the data $\{A_i\}_{i=1}^{n}$, number of iterations $T_{\mathrm{inv}}$ and step size $\eta_{\mathrm{inv}}$, where $A_i = (I + t_{U,i}^* P_{U,i}/2)$.
4:    Repeat Steps 2 and 3 by switching the roles of $U$ and $V$ with the input $\{U_{i,k+1}, V_{i,k}\}_{i=1}^{n}$.
5: end for
Output: $\{\mathbf{C}_{U_k}, \mathbf{C}_{V_k}\}_{k=1}^{K_{\mathrm{svd}}}$ and $\{U_{i,k+1}, V_{i,k}\}_{i=1,k=1}^{n, K_{\mathrm{svd}}}$.

Algorithm 3 Joint Training for D-SVD and LD-Minv
Input: Training data $\{M_i\}_{i=1}^{n}$, number of iterations $T_{\mathrm{inv}}$ and $T_{\mathrm{svd}}$, step sizes $\eta_{\mathrm{svd}}$ and $\eta_{\mathrm{inv}}$.
1: Generate $\{U_{i,0}, V_{i,0}\}_{i=1}^{n}$ randomly from the $r$-dimensional Stiefel manifold. Set $\tilde{t}^{(0)} = 0$.
2: for $t = 1, \cdots, T_{\mathrm{svd}}$ do
3:    Forward propagate by Algorithm 2.
4:    Update $\tilde{t}^{(t+1)} = \tilde{t}^{(t)} - \eta_{\mathrm{svd}} \cdot \mathrm{D}_{[\tilde{t}=\tilde{t}^{(t)}]} \mathcal{M}_n$, where $\mathcal{M}_n := \frac{1}{n K_{\mathrm{svd}}} \sum_{i=1}^{n} \sum_{k=1}^{K_{\mathrm{svd}}} f(U_{i,k}, V_{i,k})$.
5: end for
Output: $\{U_{i,k+1}, V_{i,k}\}_{i=1,k=1}^{n, K_{\mathrm{svd}}}$.
where Cayley (t,C,P,U) := LD-Minv (I + tP/2,C)(I− tP/2)U,
t̃ := {(t̃Uk , t̃Vk) ∈ R2} Ksvd k=1 is the collection of learnable parameters for D-SVD, and LD-Minv(·,C) is a LD-Minv module with parameters C. Note that a DNN can easily implement our D-SVD. Each layer implements the steps in Eq. (12), which are the specific iterative procedures of the previous general learning-based optimization with orthogonality constraints. We adopt different LD-Minvs for different layers, and provide the training process for D-SVD in Algorithms 2 and 3. By introducing LD-Minv into the Cayley transform, we bypass the non-differentiable exact Minv, and solve the problem in Eq. (7) (i.e., perform SVD) in a differentiable and learning-based way.
4 MAIN RESULTS
In this section, we provide the main theoretical results of D-Minv, including the linear convergence of GD during training and the generalization performance. All the detailed proofs of these theorems are provided in the supplementary material. The theoretical results for D-SVD are presented in the supplementary material due to limited space.
4.1 CONVERGENCE RATE OF GD
We consider the general LD-Minv and large training data size n in this section. We first make the following assumption on the training data.
Assumption 1 (Bounded Singular Values). Given the training matrices $\{A_i \in \mathbb{R}^{d_1 \times d_2}\}_{i=1}^{n}$, we assume the training matrices’ positive singular value vectors are $d$-dimensional, where $d \le \min\{d_1, d_2\}$. Assume these positive singular values have lower and upper bounds, i.e.,
$$\|x_i\|_{\infty} \le 1 \quad \text{and} \quad \min_{j \in [d]} [x_i]_j \ge \bar{b}_a > 0, \quad \text{where } x_i = \sigma_+(A_i) \in \mathbb{R}^{d},\ \forall i \in [n],$$
and $\sigma_+(\cdot)$ extracts the positive singular values.
In general, the boundedness assumption for the singular values is weak and easy to satisfy. See the discussion in the supplementary material (Section C.1.2).
With this assumption, we can obtain the convergence rate of our Algorithm 1.
Theorem 1 (Convergence Rate). Suppose Assumption 1 holds and let $d = \min\{d_1, d_2\}$. For any $\epsilon > 0$, we let $K = \Omega\!\left(d \bar{b}_a^2\right)$; then with probability at least $1 - \exp\!\left(-O\!\left(K/\bar{b}_a^2\right)\right)$, GD in Algorithm 1 with step size $\eta = \Theta(1/(dL))$ can find a collection of coefficients such that:
$$\mathcal{L}_n\!\left(\mathbf{C}^{(T)}\right) < \epsilon, \quad \text{for } T = \Theta\!\left( dLKn\, \bar{b}_a^{-2} \ln(1/\epsilon) \right),$$
where $\bar{b}_a$ is the lower bound for the positive singular values of the training data $A_i$.
This is known as a linear convergence rate because $\epsilon$ drops exponentially fast in $T$. Recall that $K$ and $L$ are from the definition of our LD-Minv.
Remark 1. Different from the traditional theoretical results of DNNs, the randomness in our results mainly comes from a re-weighting strategy (see Sec. E) rather than initialization. In general, our results hold for any initialization strategy as long as the condition in Algorithm 1 is met. Notably, our results do not require the depth or the width to be in the polynomial order of data size n, which is a prevalent over-parameterization assumption for the current deep learning theories.
Remark 2. To present our convergence result most simply, we choose to focus on GD method with fixed step size mainly. It shall be easy to extend our results to more general settings, such as stochastic GD and dynamic step size.
4.2 GENERALIZATION
We characterize the generalization performance of LD-Minv trained by GD in this theorem.
Theorem 2 (Generalization Bound). Denote by $\mathcal{P}_A$ the distribution of $A_i$ in Assumption 1. Suppose that $\alpha \ge 1$ and $K^{2\alpha - 1/2} = \Omega(\sqrt{n})$; then with probability at least $1 - \delta$, the iterate $\mathbf{C}^{(t)}$ of Algorithm 1 satisfies:
$$\mathbb{E}_{A \sim \mathcal{P}_A}\!\left[ \mathcal{L}_n\!\left(\mathbf{C}^{(t)}\right) \right] \le \mathcal{L}_n\!\left(\mathbf{C}^{(t)}\right) + \tilde{O}\!\left( \min\left\{ \frac{\bar{b}_w^2 d^3 L}{K^{\alpha/2}},\ \sqrt{\frac{d^3 K}{n}} \right\} \right) + O\!\left( \sqrt{\frac{\ln(1/\delta)}{n}} \right),$$
for $t = 0, 1, \cdots, T$, where $\mathbb{E}_{A \sim \mathcal{P}_A}[\mathcal{L}_n]$ is the expected value of $\mathcal{L}_n$ under the probability measure $\mathcal{P}_A$ and $\tilde{O}(\cdot)$ hides log factors.
We adopt two ways to upper bound the empirical Rademacher complexity, which corresponds to the cases K < n and K > n, respectively. The final bound is the minimum of them (middle term in the above upper bound).
Remark 3. The second term in our bound distinguishes our result from most previous work (Allen-Zhu et al., 2019; Arora et al., 2019; Yehudai & Shamir, 2019; Cao & Gu, 2019; Chen et al., 2019; Xie et al., 2020) on the generalization bounds of over-parameterized neural networks. Specifically, most over-parameterized work mainly focuses on establishing a bound that does not explode when the network width goes to infinity. However, their bounds are exponential in the depth, which may not hold when the depth is large. In contrast, our result covers a wider range of depth $K$: it covers the case of both $K < n$ and $K > n$. One may wonder whether our bound will explode when $K$ approaches infinity. The answer is no. We can observe a “double descent” trend to a certain extent: the generalization error bound first increases with the network depth $K$ when $K < n$, then it starts to decrease when $K$ becomes larger, even for $K \gg n$.
5 CONCLUSION
We provide a differentiable and learnable way, named LD-Minv, to perform the matrix inverse by using a L-th order matrix polynomials. Then, we offer some theoretical guarantees about the convergence during training and generalization ability in both under-parameterization and overparameterization setting for LD-Minv. Remarkably, we are the first to provide the strict analysis for the training process of the differentiable learning-based methods. As an application of LD-Minv, we propose a learning-based optimization method to solve the problems with non-convex orthogonality constraints, and then utilize it to differentiate SVD. A generalization guarantee for D-SVD is also provided.
APPENDIX
A EXPERIMENTAL RESULTS
A.1 EFFECTIVENESS VALIDATION
We conduct experiments on synthetic data to validate the effectiveness of our proposed LD-Minv and D-SVD. We run all our experiments 10 times and report the average results.
A.1.1 EXPERIMENTS RESULTS FOR LD-MINV
Settings. For LD-Minv, we adopt the square and rectangle matrix to test its performance. It should be mentioned that our LD-Minv is adaptive to the shape of the input matrix. In general, the square matrix can better test the performance of the algorithm, since the minimum singular value is near 0 in this case (Vershynin, 2010). And a large condition number of the input may make most numerical matrix calculation methods unstable. For the square matrix A ∈ Rd×d, its elements are sampled from i.i.d. Gaussian, namelyAi,j ∼ N (0, 1/ √ d). For the rectangle matrix A ∈ Rm×n̂, its elements
are also sampled from Ai,j ∼ N (0, 1/ √ d), where d = max{m, n̂}. We adopt the SGD to optimize the parameters. Note that our theoretical results in Sec. 4 hold for GD, but they can easily extend to the SGD case. The learning rate is set to 1e-4, and it is reduced by half every 300 iterations. We also fix the batch size as 100, while the number of iteration is set to 1200. Note that, in the experiments, the adopted exact Pseudo matrix inverse (Minv) is implemented by Pytorch.
We adopt the following test loss to measure the performace:
$$\text{Test Loss} := \frac{1}{n} \sum_{i=1}^{n} \left\| A_i X_{\{K_{\mathrm{inv}}, i\}} A_i - A_i \right\|_F^2,$$
where $X_{\{K_{\mathrm{inv}}, i\}}$ is the final output of D-Minv or LD-Minv. Here $n = 100$ is the size of the test data. With $d = 100$, we first test the influence of $L$ and $K$, which correspond to the order of the matrix polynomial and the number of layers in our LD-Minv, respectively. Due to the high precision of LD-Minv and D-Minv, for visualization, we take a base-10 logarithm of the loss in this experiment.
Results. From Fig. 1(a), we can see that the number of layer K plays a more critical role for LDMinv than the order L of polynomial. When settingK = 7, the log of the test error drops from−1.0 for L = 4 to −4.6 for L = 9. However, the log of the test error drops from 0.32 for K = 2 to −4.6 for K = 7, when fixing L = 9. Hence, for efficiency, we suggest utilizing large K and small L for LD-Minv.
Compared with D-Minv, according to Fig. 1(b) and Fig. 1(c), LD-Minv can obtain a better test performance even for largeK andL, although D-Minv converges to Minv extremely fast in this case. Our result demonstrates that learning can help LD-Minv obtain better performance on a problemdependent data distribution rather than merely fixing the coefficients ahead for time. We notice that LD-Minv and D-Minv obtain a similar precision when K > 15, and the reason is that the approximation error for D-Minv is in the order of 1e− 8 in this case, which lifts very limited space for the improvement by learning.
Fig. 1(b) further verifies the importance of K. As shown in our theoretical results (Theorem 1 and Theorem 2); large K indicates smaller generalization bound and higher probability to obtain the linear convergence rate for Algorithm 1.
Another significant advantage of our proposed method is efficiency. D-Minv and LD-Minv can inverse the input matrix with high precision meanwhile with the low computational cost. The benefits of D-Minv and LD-Minv are apparent when the matrix dimension is large. As shown in Table 1, even for large K and L, the inference time of LD-Minv is less than a tenth of the time of exact matrix inverse when the size is 1000 × 1500. It is well-known that the Minv and matrix multiplication (MatMul) share the same computation complexity theoretically, while their speeds vary a lot numerically. The computation of our method is mostly spent on MatMul, which can be easily accelerated by GPU and is much cheaper than Minv. MatMul and Minv only share the same complexity asymptotically. The constants in the orderO(·) vary significantly, especially when matrix dimension d ∞. Moreover, the general MatMul algorithm with a complexity of O ( d3 )
isn’t optimal, and there exist algorithms that own a complexity of only O ( d2.3 ) .
To further investigate the importance of the learning procedure, we conduct the experiments on the matrices with different condition numbers. Given the condition number κ ≥ 1, we generate the data matrix by Ai = UiSiVi, where Ui and Vi are randomly sampled from a 100-dimensional
Stiefel manifold. Here Si is a diagonal matrix whose diagonal entries are i.i.d. and obey a uniform distribution U(1/κ, 1) with distribution boundaries being [1/κ, 1]. We let K = 4, L = 10 and the batch size be 100. We show the results in Table 2. Not surprisingly, D-Minv suffers from ill-condition problems, e.g., the large condition number deteriorates the performance of D-Minv. However, LD-Minv is not sensitive to the condition number, and it can beat D-Minv easily when the condition number is large. Note that all the results are reported on the unseen data.
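For reference, a minimal sketch of this data-generation scheme, drawing the orthogonal factors via QR of Gaussian matrices; the helper name and the printed check are our own choices:

import torch

def sample_conditioned_matrix(d: int, kappa: float) -> torch.Tensor:
    # A_i = U_i S_i V_i with orthogonal U_i, V_i and singular values drawn i.i.d. from U(1/kappa, 1).
    U, _ = torch.linalg.qr(torch.randn(d, d, dtype=torch.float64))
    V, _ = torch.linalg.qr(torch.randn(d, d, dtype=torch.float64))
    s = 1.0 / kappa + (1.0 - 1.0 / kappa) * torch.rand(d, dtype=torch.float64)
    return U @ torch.diag(s) @ V

A = sample_conditioned_matrix(d=100, kappa=1e3)
print(torch.linalg.cond(A))  # at most kappa by construction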
A.1.2 EXPERIMENTAL RESULTS FOR D-SVD
Settings. For D-SVD, we adopt the rectangle matrix to run the experiments. Similarly, our algorithms also work for the square matrix. For the rectangle matrix A ∈ Rm×n̂, its elements are also sampled from i.i.d. Gaussian, namely Ai,j ∼ N (0, 1/ √ d), where d = max{m, n̂}. We let m = 50 and n̂ = 100, and utilize Algorithm 3 to solve our learning problem5. Similarly, we also perform the batch training with batch size 30. The learning rate is set to 1e − 2 for D-SVD and 1e − 4 for LD-Minv contained in D-SVD. We do not decay the learning rate during training, and let the number of iterations for D-SVD and LD-Minv be 200 and 100, respectively. For LD-Minv in each layer, we let L = 4 and K = 6.
Same as Sec. 3, we adopt the following loss to train our D-SVD:
$$\text{Training Loss} := \frac{1}{n K_{\mathrm{svd}}} \sum_{i=1}^{n} \sum_{k=1}^{K_{\mathrm{svd}}} \left( \frac{1}{2} \left\| \Lambda_{i,k} - \mathrm{Diag}(\Lambda_{i,k}) \right\|_F^2 - \left\langle M_i,\, U_{i,k} V_{i,k}^{\top} \right\rangle \right),$$
where $U_{i,k}$ and $V_{i,k}$ are the $k$-th layer’s output of D-SVD, and $\Lambda_{i,k} := U_{i,k}^{\top} M_i V_{i,k}$. The reason that we introduce the regularization term $\left\| U_{i,k}^{\top} M_i V_{i,k} - \mathrm{Diag}\!\left(U_{i,k}^{\top} M_i V_{i,k}\right) \right\|_F^2$ is the non-uniqueness. In general, any pair $(U, V) \in \left\{ (\tilde{U}R, \tilde{V}R) : M = \tilde{U}\tilde{S}\tilde{V}^{\top},\ R^{\top}R = RR^{\top} = I \right\}$ can maximize the term $\left\langle M, U V^{\top} \right\rangle$. However, in order to obtain the exact singular vectors of $M$, the matrix $U_{i,k}^{\top} M_i V_{i,k}$ should have zero off-diagonal entries. Thus, the regularization term is necessary for the training of D-SVD.
We use the following test loss to report the results:
$$\text{Test Loss} := \frac{1}{n} \sum_{i=1}^{n} \left( \frac{1}{2} \left\| \Lambda_{i,K_{\mathrm{svd}}} - \mathrm{Diag}(\Lambda_{i,K_{\mathrm{svd}}}) \right\|_F^2 - \left\langle M_i,\, U_{i,K_{\mathrm{svd}}} V_{i,K_{\mathrm{svd}}}^{\top} \right\rangle \right),$$
where $U_{i,K_{\mathrm{svd}}}$ and $V_{i,K_{\mathrm{svd}}}$ are the final layer’s output of D-SVD, and $\Lambda_{i,K_{\mathrm{svd}}} := U_{i,K_{\mathrm{svd}}}^{\top} M_i V_{i,K_{\mathrm{svd}}}$. Note that we also utilize the following quantity to measure the test performance:
$$\text{SVD MSE} := \frac{1}{n} \sum_{i=1}^{n} \left\| \mathrm{Sort}\!\left( \mathrm{Diag}\!\left( U_{i,K_{\mathrm{svd}}}^{\top} M_i V_{i,K_{\mathrm{svd}}} \right) \right) - \mathrm{SVD}(M_i) \right\|_F^2,$$
where $\mathrm{Sort}(\cdot)$ is the operator that sorts the input elements in descending order, $\mathrm{Diag}(\cdot)$ returns the collection of the diagonal entries of the input matrix, and $\mathrm{SVD}(\cdot)$ gives the singular values.
For convenience, we add the mean of the sum of SVD(Mi) to the loss in Table 3 to make the loss non-negative. Although increasingKsvd also can improve the performance in the non-learning case,
5We replace the GD part by SGD for the generality of the experiment.
learning can strengthen the role of Ksvd. Without training, the test loss for Ksvd = 60 is one eighth of the loss for Ksvd = 1; in sharp contrast, the reduction can reach one thousandth after training. Due to the introduced regularizer for the training loss, we can obtain a small SVD MSE rather than the small difference between the sum of singular values.
During the training of LD-SVD, we observe that our results are robust to the setting of K and L. In general, the forward process can converge even with very inexact matrix inverse (Li et al., 2019). Moreover, our introduced learnable step size { t̃Uk , t̃Vk } further provides the freedom to adjust the update direction for Uk and Vk. Thus, for efficiency, a relative small K and L is enough for each layer’s LD-Minv.
A.1.3 PLUG-AND-PLAY: DIFFERENTIABLE NON-BLIND DECONVOLUTION
Non-blind deconvolution (NbD) aims to restore the latent image z from corrupted observation y with known blur kernel b. In this experiment, we consider a well-known sparse coding formulation:
y = b⊗ z + n = BW>x + n,
where ⊗ is the convolution operator, x and n are the sparse code and unknown noises, respectively. B is the matrix form of kernel b, and W> is the inverse of the wavelet transform W (i.e., x = Wz and z = W>x). We utilize the following optimization to perform the non-blind deconvolution:
$$\min_{x, z}\ g(x, z) := \frac{1}{2} \left\| y - B z \right\|_2^2 + \lambda \|x\|_1, \qquad \text{s.t.}\ z = W^{\top} x,$$
where $\lambda > 0$ is the balance parameter. A usual way to solve this problem is the linearized ADMM (L-ADMM) (Lin et al., 2011), which reads as:
$$\begin{cases}
a_k = z_k - \alpha_k \left( \lambda_k + \beta \left( z_k - W^{\top} x_k \right) \right), \\
z_{k+1} = \left( \alpha_k B B^{\top} + I \right)^{-1} \left( a_k + \alpha_k B^{\top} y \right), \\
b_k = x_k - \gamma_k W^{\top} \left( \lambda_k + \beta \left( z_{k+1} - W^{\top} x_k \right) \right), \\
x_{k+1} = \mathrm{sgn}(b_k) \odot \max\left( |b_k| - \gamma_k \lambda,\ 0 \right), \\
\lambda_{k+1} = \lambda_k + \beta \left( z_{k+1} - W^{\top} x_{k+1} \right),
\end{cases} \qquad (13)$$
where $\alpha_k > 0$, $\gamma_k > 0$ and $\beta > 0$ are penalty parameters, $\odot$ is the Hadamard product, and $\lambda_k$ is the Lagrange multiplier.
Differentiable NbD. Inspired by Eq. (13), we can easily obtain the learning-based non-blind deconvolution method:
$$\begin{cases}
a_k = z_k - \alpha_k \tilde{I}_k \left( \lambda_k + \beta \left( z_k - W^{\top} x_k \right) \right), \qquad T_k = \text{LD-Minv}\!\left( \alpha_k B B^{\top} + I,\ \mathbf{C}_k \right), \\
z_{k+1} = T_k \left( a_k + \alpha_k B^{\top} y \right), \\
b_k = x_k - \gamma_k \tilde{W}_k \left( \lambda_k + \beta \left( z_{k+1} - W^{\top} x_k \right) \right), \\
x_{k+1} = \mathrm{ReLU}\left( b_k - \gamma_k \lambda \right) - \mathrm{ReLU}\left( -b_k - \gamma_k \lambda \right), \\
\lambda_{k+1} = \lambda_k + \beta \left( z_{k+1} - W^{\top} x_{k+1} \right),
\end{cases} \qquad (14)$$
where $\mathcal{S}_{\mathrm{NbD}} := \{\tilde{I}_k, \tilde{W}_k, \alpha_k, \gamma_k, \beta\}_{k=1}^{K_{\mathrm{NbD}}}$ is the collection of all learnable parameters, $\tilde{I}_k$ and $\tilde{W}_k$ are introduced learnable matrices (in the implementation, we choose them as the convolution layer
Algorithm 4 Joint Training for D-NbD and LD-Minv
Input: Training data $\{y_i\}_{i=1}^{n}$, number of iterations $T_{\mathrm{NbD}}$ and $T_{\mathrm{inv}}$, step sizes $\eta_{\mathrm{NbD}}$ and $\eta_{\mathrm{inv}}$.
1: Set $x_{i,0} = y_i$, $z_{i,0} = W^{\top} x_{i,0}$ and $\lambda_0 = 0$.
2: for $t = 1, \cdots, T_{\mathrm{NbD}}$ do
3:    Fix the learnable parameters $\mathcal{S}_{\mathrm{NbD}}^{(t)}$ and train each layer’s LD-Minvs by Algorithm 1.
4:    Update $\mathcal{S}_{\mathrm{NbD}}^{(t+1)} = \mathcal{S}_{\mathrm{NbD}}^{(t)} - \eta_{\mathrm{NbD}} \cdot \mathrm{D}_{[\mathcal{S}_{\mathrm{NbD}} = \mathcal{S}_{\mathrm{NbD}}^{(t)}]} \mathcal{Q}_n$, where $\mathcal{Q}_n$ is presented in Eq. (15).
5: end for
Output: $\{z_{i,k}\}_{i=1,k=1}^{n, K_{\mathrm{NbD}}}$.
with compatible size), Ck is the learnable parameters for each layer’s LD-Minv, and ReLU(a) = max{a, 0} is the Rectified Linear Unit (ReLU) function. Note that
sgn (b) max (b− γ, 0) = ReLU (b− γ)− ReLU (−b− γ).
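For a quick check, a minimal sketch that verifies this soft-thresholding identity numerically (function names are ours):

import torch

def soft_threshold(b: torch.Tensor, gamma: float) -> torch.Tensor:
    # sgn(b) * max(|b| - gamma, 0): the proximal operator of gamma * ||.||_1.
    return torch.sign(b) * torch.clamp(b.abs() - gamma, min=0.0)

def relu_form(b: torch.Tensor, gamma: float) -> torch.Tensor:
    # The same operator written with two ReLUs, as in the x-update of Eq. (14).
    relu = torch.nn.functional.relu
    return relu(b - gamma) - relu(-b - gamma)

b = torch.randn(1000)
print(torch.allclose(soft_threshold(b, 0.3), relu_form(b, 0.3)))  # True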
Eq. (14) can be implemented by a DNN. Given the training images {yi}ni=1, we adopt the following loss to train our D-NbD:
$$\mathcal{Q}_n(\mathcal{S}_{\mathrm{NbD}}) := \frac{1}{n K_{\mathrm{NbD}}} \sum_{i=1}^{n} \sum_{k=1}^{K_{\mathrm{NbD}}} g\!\left( x_{i,k},\, W^{\top} x_{i,k} \right), \qquad (15)$$
where xi,k is the k-th layer’s output of D-NbD. The training process for D-NbD is given in Algorithm 4.
Settings. The training and test images come from Schmidt & Roth (2014), in which 300 (random selected) have been used for training and the others are used for test. We add different levels of the Gaussian noise to generate our corrupted observations {yi}ni=1. We set the patch size as 160× 160, and let the batch size be 15. Finally, Adam is adopted and executed 1000 epochs for learning rate ranging from 1e-4 to 1e-6 (it is reduced by half every 140 iterations). The observations {yi} are corrupted by blur kernel with size ranging from 17× 17 to 57× 57 and the levels of Gaussian noise varying from 1% to 3%.
Results. From Table 4, we find that fL-ADMM is much more efficient than aL-ADMM. fL-ADMM does not need to perform the inverse since $\alpha_k, \gamma_k$ are fixed, which however is also why fL-ADMM performs poorly: it cannot find universal $\alpha_k, \gamma_k$ for all test images. aL-ADMM can perform better with an adaptive step size, but it needs an exact Minv in each iteration. Compared to them, D-NbD outperforms L-ADMM by a large margin in terms of speed and performance. Note that, for one iteration, the computation complexity of fL-ADMM is much lower than the other three since it only involves MatMul. However, it consumes more than 600 iterations to obtain a relatively good solution. In fact, just comparing the Minv time in one iteration for aL-ADMM and D-NbD, our method’s time is only about half of that of the exact Minv in PyTorch. Fortunately, the learnability of D-NbD helps us obtain a better or equally good solution using one order-of-magnitude fewer iterations. Hence, the total time can be only one-tenth of that of aL-ADMM. Moreover, the learnability of LD-Minv further introduces more flexibility, which allows D-NbD(LD-Minv) to achieve better results within fewer layers. Thus, D-NbD(LD-Minv) is more efficient due to less calculation.
A.2 DISCUSSION
Beyond D-NbD, there are rich potential applications based on LD-Minv and D-SVD, including image recovery or denoising, numerical algorithm acceleration, blind deconvolution, sparse coding, subspace clustering, etc. All these compelling applications may be beyond this paper’s scope, and we leave them as future work. Moreover, due to the differentiability, D-SVD can help us design more abundant singular-value-related training losses for DNNs, which is impossible in the previous non-differentiable situation.
B NOTATION
A positive definite matrix $\mathbf{A}$ is denoted by $\mathbf{A} \succ 0$. The transpose of the matrix $\mathbf{A} \in \mathbb{R}^{m \times n}$ is $\mathbf{A}^{\top}$. We denote by $\mathbf{A}^{-1}$ and $\mathbf{A}^{\dagger}$ the matrix inverse and the Moore–Penrose inverse, respectively. The Euclidean inner product between two matrices $\mathbf{A} \in \mathbb{R}^{m \times n}$ and $\mathbf{B} \in \mathbb{R}^{m \times n}$ is defined as $\langle \mathbf{A}, \mathbf{B} \rangle := \mathrm{Tr}(\mathbf{A}^{\top}\mathbf{B})$, where $\mathrm{Tr}(\cdot)$ is the trace of a matrix. $\kappa(\mathbf{A}) := \|\mathbf{A}^{\dagger}\|\|\mathbf{A}\|$ is the generalized condition number of a matrix, where $\|\cdot\|$ is the spectral norm. We denote by $\sigma(\mathbf{A})$ the singular value vector (no need to be ordered) of the matrix $\mathbf{A}$. Similarly, we denote by $\sigma_+(\mathbf{A})$ the positive singular value vector.
Let $\|\cdot\|_F$ be the Frobenius norm. Given a differentiable function $\mathbf{F}$, the first and second derivatives of $\mathbf{F}$ at $\mathbf{Y}$ are denoted by the linear operator $\mathrm{D}_{[\mathbf{X}=\mathbf{Y}]}\mathbf{F}(\cdot) : \mathbb{R}^{m} \to \mathbb{R}^{n} := \left( \frac{\partial \mathbf{F}_i(\mathbf{Y})}{\partial \mathbf{X}_j} \right)(\cdot)$ and the quadratic operator $\mathrm{D}^2_{[\mathbf{X}=\mathbf{Y}]}\mathbf{F}(\cdot, \cdot) : \mathbb{R}^{m} \times \mathbb{R}^{m} \to \mathbb{R}^{n} := \left( \frac{\partial^2 \mathbf{F}_i(\mathbf{Y})}{\partial \mathbf{X}_j \partial \mathbf{X}_k} \right)(\cdot, \cdot)$, respectively, where $\mathbf{X}_i$ is the $i$-th entry of $\mathbf{X}$. We also denote by $\mathbf{F}^{-1}$ the inverse of the function $\mathbf{F}$ and let the composition map be $f \circ g(x) := f(g(x))$. $\odot$ is the Hadamard product and $[K] = \{1, 2, \cdots, K\}$.
C ADDITIONAL THEORETICAL RESULTS
In this section, we provide the additional theoretical results of D-Minv and D-SVD. We start by a warm-up case for D-Minv.
C.1 RESULTS FOR LD-MINV
C.1.1 WARM-UP: ONE PARAMETER FOR ONE LAYER
As a warm-up case, we consider only to learn one coefficient for each layer, as a special case of learnable D-Minv:
$$X_{k+1} = X_k \left( \sum_{l=0}^{L-1} E_k^{l} + C_k E_k^{L} \right), \qquad E_k = I - A X_k, \qquad (16)$$
where Ck ∈ R are the learnable coefficients. By setting X0 = 1c0 A >, where c0 > ‖A‖2, and let Ck = 0 for all k, by Lemma 1, we know that ∥∥AA† −AXk∥∥ = eLk0 . One important question is if we let the coefficient of ELk to be learnable, by training, will D-Minv obtain a faster convergence rate?
In general, previous works show that learning-based methods share the same convergence order with their fixed version but with a better statistical constant. Interestingly, beyond the constant, we found that D-Minv can improve the order of convergence by learning. Lemma 2 (Learnability Matters). Assume there is only one training datum $A$. Provided that $X_0 = \frac{1}{c_0} A^{\top}$, where $c_0 > \|A\|^2$, then we have:
$$e_{k+1} = (1 - C_k) \cdot e_k^{L} + C_k \cdot e_k^{L+1}, \quad \text{where } e_k = \|AA^{\dagger} - AX_k\|, \text{ and } k \in [K],\ L \ge 4.$$
Moreover, the gradient of $\mathcal{L}$ w.r.t. $C_k$ is negative, i.e., $\mathrm{D}_{[C_k]}\mathcal{L} < 0$ for all $C_k \in [0, 1 + \epsilon]$, where $\epsilon > 0$ depends on $e_{k-1}$. Additionally, $\mathrm{D}_{[C_k]}\mathcal{L} \approx 0$ for $k \in [K-1]$ when $C_k \in [1 - \epsilon, 1 + \epsilon]$.
Starting from 0, according to the negativity of the gradient, the coefficients $C_k$ for $k \in [K-1]$ will converge to the range $[1 - \epsilon, 1 + \epsilon]$ when adopting a proper step size for GD. In this case, i.e., when $C_k$ is around 1 for all $k$, the sequence $\{X_k\}$ converges to the Moore–Penrose inverse with $\tilde{L}$-th order, where $L < \tilde{L} \le L + 1$. In other words, by learning from one datum, D-Minv finds a way to improve the order of convergence for any input. Remarkably, $C_k = 1$ for all $k$ is not a global or even local minimum for the training loss $\mathcal{L}$. For the learnable coefficient $C_K$ at the last layer, satisfying its first-order condition usually implies obtaining the exact solution, i.e., $e_{K+1} = 0$. In summary, not only improving the convergence speed, learnable D-Minv makes the exact fitting come true.
C.1.2 DISCUSSION FOR ASSUMPTION 1
In general, the boundedness assumption for the singular values is weak and easy to satisfy. For example, the singular values of matrices with i.i.d. zero-mean, unit-variance entries asymptotically obey the Quadrant Law (Dozier & Silverstein, 2007; Shen, 2001), i.e., the density is $\frac{1}{\pi}\sqrt{4 - \sigma^2}$ for $\sigma \in [0, 2]$, which implies the singular values have an upper bound. On the other hand, a well-known fact from random matrix theory is that a zero-mean, unit-variance random matrix is non-singular with high probability. Even for a square matrix $A$, with probability at least $1 - \delta$, we still have $\sigma_{\min}(A) \ge \delta d^{-1/2}$ (Rudelson & Vershynin, 2008), where $\sigma_{\min}(A)$ is the smallest singular value of the matrix $A$.
C.1.3 LIPSCHITZ SMOOTHNESS BEFORE GENERALIZATION
We would also like to remark that our generalization result can easily be extended to the stochastic case. The proof is the same as the Lemma 4.3 in Ji & Telgarsky (2019) or Theorem 3.3 in Cao & Gu (2019).
In general, smoothness plays an important role in the generalization analysis. The covering number of a smooth function usually is small. We start by showing the local smoothness of learnable D-Minv.
Proposition 2 (Local Lipschitz Smoothness). Define a set of coefficient collections:
$$\mathcal{C}^* := \left\{ \mathbf{C} \,\middle|\, \forall\, C_{\{k,l\}} \in \mathbf{C},\ \left| C_{\{k,l\}} - 1 \right| \le \frac{1}{8} \right\}, \quad (k,l) \in [K] \times [L],\ L \ge 4.$$
Let $P_A = AA^{\dagger}$. Given any coefficient collection $\mathbf{C} \in \mathcal{C}^*$, we denote by $\mathcal{G}_k(\cdot)$ the map from $P_A E_k$ to $P_A E_{k+1}$ in (3), i.e., $\mathcal{G}_k(E) := P_A - P_A (I - E) \left( \sum_{l=0}^{L-1} C_{\{k,l\}} E^{l} \right)$, and let $\mathcal{H}_k(\cdot) := \mathcal{R} \circ \mathcal{G}_{K-1} \circ \cdots \circ \mathcal{G}_k(\cdot)$, where $\mathcal{R}(E) := \sum_{\hat{k}=k}^{K} \left\| W_{\hat{k}} E_{\hat{k}} A \right\|_F^2$. Suppose $\|A\|_2 \le 1$ and $\|W_k\| = O(1)$ for all $k$; then we have:
$$\left| \mathcal{H}_k(\hat{E}) - \mathcal{H}_k(\tilde{E}) \right| \le 2 \left\| P_A \hat{E} - P_A \tilde{E} \right\|_{*}, \quad \forall k \in [K],$$
where $\hat{E}$ and $\tilde{E}$ have the same size and the same norm upper bound as $E_k$, $d$ is the dimension of the positive singular value vector $\sigma_+(A)$, and $\|\cdot\|_*$ is the matrix nuclear norm.
In general, the composition of two Lipschitz continuous functions leads to a worse Lipschitz constant. However, our LD-Minv shares a consistent constant for all layers. This is important since the layer-wise local smoothness usually implies a small covering number of the whole D-Minv (Bartlett et al., 2017; Wei & Ma, 2019), which results in a tight generalization bound.
C.2 RESULTS FOR D-SVD
We characterize the generalization performance of D-SVD trained by GD in this theorem. First of all, we present several definitions. Let
$$\mathcal{M}_n(\tilde{t}, \mathbf{C}) := \frac{1}{n K_{\mathrm{svd}}} \sum_{i=1}^{n} \sum_{k=1}^{K_{\mathrm{svd}}} \left\langle M_i,\, U_{i,k} V_{i,k}^{\top} \right\rangle,$$
where (t̃,C) are the collection of the parameters of D-SVD and the LD-Minvs in each layer. Denote by f(U | V,M, t̃k,Ck) the operations in Eq. (12) that map Uk to Uk+1, i.e.,
Uk+1 := f(Uk | Vk,M, t̃k,Ck), (17)
where (t̃k,Ck) ∈ R×RLKinv are the learnable parameters of D-SVD and LD-Minv at the k-th layer of D-SVD. We define the coefficients set:
$$\mathcal{C}_k := \left[ -\frac{\varepsilon_u}{2},\ +\frac{\varepsilon_u}{2} \right] \times \left\{ \mathbf{C} : \left\| \mathbf{C} - \mathbf{C}^{(0)} \right\|_F = \tilde{O}\!\left( \frac{1}{\sqrt{K_{\mathrm{inv}}}} \right) \right\},$$
$$\text{where } \varepsilon_u := \min_{i} \frac{1}{\|P_i\|}, \quad \text{in which } P_i = M_i V_k U_k^{\top} - U_k V_k^{\top} M_i^{\top}, \ \forall i \in [n],$$
Proposition 3 (Covering Number of a Single Layer). Denote by $\mathcal{F}_k$ the class of functions in Eq. (17), i.e., $\mathcal{F}_k := \left\{ f(U_k \mid V_k, M, \tilde{t}_k, \mathbf{C}_k) : (\tilde{t}_k, \mathbf{C}_k) \in \mathcal{C}_k \right\}$.
Then we have:
$$\ln \mathcal{N}\!\left( \mathcal{F}_k, \epsilon, \|\cdot\| \right) = O\!\left( L K_{\mathrm{inv}} \ln\!\left( \frac{1}{\epsilon \sqrt{K_{\mathrm{inv}}}} \right) + \ln\!\left( \frac{\|M\|}{\epsilon} \right) \right).$$
Theorem 3 (Generalization Bound for D-SVD). Denote by $\mathcal{P}_M$ the distribution of $M_i$ and let $d := \max\{m, \hat{n}\}$. Suppose that $\|M_i\| \le 1$, $\forall i \in [n]$, and $K_{\mathrm{inv}} = \Omega\!\left( \max\{nd, K_{\mathrm{svd}}^2\} \right)$; then with probability at least $1 - \delta$, the iterate $\tilde{t}^{(t)}$ of Algorithm 3 satisfies:
$$\mathbb{E}_{M \sim \mathcal{P}_M}\!\left[ \mathcal{M}_n\!\left(\tilde{t}^{(t)}\right) \right] \le \mathcal{M}_n\!\left(\tilde{t}^{(t)}\right) + \tilde{O}\!\left( \sqrt{\frac{K_{\mathrm{svd}}^3 L}{n}} \right) + O\!\left( \sqrt{\frac{\ln(1/\delta)}{n}} \right),$$
for $t = 0, 1, \cdots, T_{\mathrm{svd}}$, where $\mathbb{E}_{M \sim \mathcal{P}_M}[\mathcal{M}_n]$ is the expected value of $\mathcal{M}_n$ under the probability measure $\mathcal{P}_M$ and $\tilde{O}(\cdot)$ hides log factors.
D OMITTED PROOF FOR FIXED D-MINV IN SECTION 2
D.1 PROOF OF LEMMA 1
We first verify that e0 < 1, suppose rank(A) = r. e0 = ∥∥AA† −AX0∥∥ = ∥∥∥∥UU> − 1cUΣ2U> ∥∥∥∥ = ∥∥∥∥Ir − 1cΣ2 ∥∥∥∥ < 1,
where A = UΣV>, Ir ∈ Rr×r is the identity matrix, and the last inequality comes from the fact c > 12‖A‖ 2. Form Eq. (2), we have
AA† −AXk+1 = AA†(I−AXk+1) = AA† ( I−AXk ( L−1∑ l=0 Elk ))
=AA† ( I− (I−Ek) ( L−1∑ l=0 Elk )) = AA† ( I− L−1∑ l=0 Elk + L−1∑ l=0 El+1k ) =AA† ( ELk ) = AA†(I−AXk)L = ( AA† −AXk )L ,
i.e., ∥∥AA† −AXk∥∥ = ∥∥AA† −AXk−1∥∥L = eLk0 . We finish the proof.
D.2 PROOF OF PROPOSITION 1
Note that for any matrix X, the matrix polynomial of X share the same column and row space with X. In general, the iterative equation (2) does not change the column and row space from the begining. We finish the proof by induction. The conclusion is obvious for X0 due to X0 = 1c0 A
>. We assume the proposition holds for Xk, then:
A†AXk+1 = A †AXk ( L−1∑ l=0 Elk ) = Xk ( L−1∑ l=0 Elk ) = Xk+1,
and
Xk+1AA † = Xk ( L−1∑ l=0 Elk ) AA† = Xk ( L−1∑ l=0 (I−AXk)l ) AA†
=XkAA † ( L−1∑ l=0 (I−AXk)l ) = Xk+1,
Hence, we can conclude that: A†AXk = Xk, XkAA
†, ∀k. We finish the proof.
E OMITTED PROOF FOR LEARNABLE D-MINV IN SECTIONS 4 AND C
In the theoretical part, we consider a more general training regression loss:
Ln(C) := 1
2nK n∑ i=1 K∑ k=1 ∥∥V{k,i}(AiX{k,i}Ai −Ai)∥∥2F , (18) where C = {C{k,l}, (k, l + 1) ∈ [K] × [L]} is the collection of all learnable parameters, V{k,i} is a diagonal matrix whose diagonal entries are i.i.d. and obey a zero mean bounded distribution (w.l.o.g. we let the bound be ±b̄v). Note that V{k,i}’s are fixed during training and testing. Introducing V{k,i} has two advantages. One is re-weighting the error term. As we can see from the fixed D-Minv, the error term decays exponentially. In that way, the smallest positive singular value dominate the gradient flow. However, for LD-Minv, we hope that the gradient flow comes from all positive singular values’ loss during training. The random re-weighting strategy is a good choice for numerical calculation when the order of singular values is unknown. Another advantage is that the introduced V{k,i}s, with very high probability (see Claim 1 in Sec. E.2), make the landscape of the training loss locally strongly convex, which is an excellent benefit from the optimization perspective.
Notably, by setting the diagonal entries of V{k,i} as the independent symmetric Bernoulli random variables, the training loss here in Eq. (18) will degenerate into the loss given in Eq. (4). Hence, all the theoretical results in this section proven for Eq. (18) also hold for Eq. (4).
It is worth mentioning that the weight matrices {V{k,i}} here has nothing to do with the learned singular vectors {Vi,k} for D-SVD. Please distinguish them.
E.1 GENERAL ANALYSIS
Before starting the proof, we first make some general analysis. It is easy to see that Proposition 1 also holds for learnable D-Minv. Thus, we can observe that:
$$X_k E_k^{l} = X_k A A^{\dagger} E_k^{l}, \quad \forall (k, l).$$
Hence, we obtain an equivalent form of Eq. (3):
$$X_{k+1} = X_k \left( \sum_{l=0}^{L-1} C_{\{k,l\}} \tilde{E}_k^{l} \right), \qquad \tilde{E}_k = A A^{\dagger} - A X_k, \quad L \ge 4. \qquad (19)$$
Due to the invariance of the row and column spaces, we can concern ourselves only with the singular value vectors and rewrite Eq. (19) as:
$$x_{k+1} = x_k \odot \left( \sum_{l=0}^{L-1} C_{\{k,l\}} e_k^{l} \right), \qquad e_k = \mathbf{1} - a \odot x_k, \quad L \ge 4, \qquad (20)$$
where $\mathbf{1} \in \mathbb{R}^{d}$ is the all-one vector ($d$ is the dimension of the positive singular value vector $\sigma_+(A)$), and:
$$a = \sigma_+(A), \quad x_k = \sigma_+(X_k), \quad e_k = \sigma_+(\tilde{E}_k), \quad e_k^{l} = \underbrace{e_k \odot \cdots \odot e_k}_{l}; \qquad (21)$$
recall that $\sigma(A)$ denotes the singular value vector (no need to be ordered).
Remark 4. We only utilize Eq. (19), the equivalent form of Eq. (3), in this section for the convenience of theoretical analysis. For implementation, we still use Eq. (3) and do NOT calculate the pseudo inverse A† throughout learning.
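To illustrate Eq. (20) above concretely, a minimal numerical sketch of the per-singular-value error recursion; with all coefficients at the oracle value 1 it reduces to $e_{k+1} = e_k^{L}$, matching Lemma 1, and the array values below are illustrative:

import numpy as np

def error_recursion(e0: np.ndarray, C: np.ndarray) -> np.ndarray:
    # Iterate e_{k+1} = 1 - (1 - e_k) * sum_l C[k, l] * e_k**l, elementwise (from Eq. (20)).
    e = e0.copy()
    K, L = C.shape
    for k in range(K):
        poly = sum(C[k, l] * e ** l for l in range(L))
        e = 1.0 - (1.0 - e) * poly
    return e

e0 = np.array([0.5, 0.3, 0.1])          # |1 - a_i * x_{0,i}| for each positive singular value
C_oracle = np.ones((3, 4))              # K = 3 layers, L = 4
print(error_recursion(e0, C_oracle))    # equals e0 ** (4 ** 3) up to floating point
print(e0 ** (4 ** 3))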
Now we continue the derivation:
$$\begin{aligned}
e_{k+1} &= \mathbf{1} - a \odot x_{k+1} = \mathbf{1} - a \odot x_k \odot \left( \sum_{l=0}^{L-1} C_{\{k,l\}} e_k^{l} \right) \\
&= \mathbf{1} - (\mathbf{1} - e_k) \odot \left( \sum_{l=0}^{L-1} C_{\{k,l\}} e_k^{l} \right) = \mathbf{1} - \left( \sum_{l=0}^{L-1} C_{\{k,l\}} e_k^{l} - \sum_{l=0}^{L-1} C_{\{k,l\}} e_k^{l+1} \right) \\
&= e_k^{L} + (1 - C_{\{k,0\}}) \mathbf{1} + \left( \sum_{l=1}^{L-1} \left( C_{\{k,l-1\}} - C_{\{k,l\}} \right) e_k^{l} \right) + \left( C_{\{k,L-1\}} - 1 \right) e_k^{L} \\
&= e_k^{L} + \hat{E}_k c_k,
\end{aligned} \qquad (22)$$
where $e_k^{0} = \mathbf{1}$, $\hat{E}_k \in \mathbb{R}^{d \times (L+1)} := \left[ e_k^{0}; e_k^{1}; \cdots; e_k^{L} \right]$, (23) and
$$c_k \in \mathbb{R}^{L+1} := \left[ 1 - C_{\{k,0\}},\ C_{\{k,0\}} - C_{\{k,1\}},\ \cdots,\ C_{\{k,L-2\}} - C_{\{k,L-1\}},\ C_{\{k,L-1\}} - 1 \right]^{\top}. \qquad (24)$$
We define:
`(C,A) := 1
2 K∑ k=1 ‖Vk(AXkA−A)‖2F = 1 2 ∥∥∥∥∥ K∑ k=1 vj ( a2 xk − a )∥∥∥∥∥ 2
2
= 1
2 K∑ j=1 ‖vk a ek‖22,
(25) where C = {C{k,l}, (k, l+1) ∈ [K]× [L]} is the collection of all learnable parameters for D-Minv, Vk is a diagonal matrix whose diagonal entries vk = diag (Vk) are i.i.d. and obey a zero mean bounded distribution (w.l.o.g. we let the bound be ±b̄v), and ‖·‖2 is the vector `2 norm; recall the definitions of a, ek and xk from Eq. (21). Note that vk, ∀k is fixed during training. From Lemma 1, we know that
∥∥AA† −AXk∥∥ decays extremely fast. We can run fixed D-Minv for several times before training learnable D-Minv. In this case, X0 is the output of the K̃-layer fixed D-Minv. Hence, without loss of generality (w.l.o.g.), we suppose the following assumption holds in this section.
Assumption 2 (Well-Bounded E_0). Assume $\|\tilde{E}_0\| = \|e_0\|_\infty \le \frac{1}{2}$.
We first provide several propositions for learnable D-Minv.
Proposition 4 (Upper Bound for Perturbation). Suppose that $|C_{\{k,l\}}-1| = \delta < \frac{1}{8}$ for all $(k,l+1)\in[K]\times[L]$. Then we have $\|\tilde{E}_k\| = \|e_k\|_\infty \le e_0^{L^k} + \frac{17}{8}\delta$ for all $k\in[K]$.
Proof. We first show that $\|e_k\|_\infty < \frac{1}{2}$ for all $k$ by induction. By Eq. (22), for $k=0$ we know that:
$$\|e_1\|_\infty \le \|e_0\|_\infty^{L} + \big\|\hat{E}_0c_0\big\|_\infty \le \|e_0\|_\infty^{L} + \max_{1\le i\le d}\sum_{j=1}^{L+1}\big|[\hat{E}_0]_{i,j}\big|\,\|c_0\|_\infty \le \|e_0\|_\infty^{L} + 2\delta < \frac{1}{2}.$$
Now assume that $\|e_k\|_\infty < \frac{1}{2}$ holds. Then,
$$\|e_{k+1}\|_\infty \le \|e_k\|_\infty^{L} + \big\|\hat{E}_kc_k\big\|_\infty \le \|e_k\|_\infty^{L} + \max_{1\le i\le d}\sum_{j=1}^{L+1}\big|[\hat{E}_k]_{i,j}\big|\,\|c_k\|_\infty \le \|e_k\|_\infty^{L} + 2\delta < \frac{1}{2}.$$
Since $\delta < \frac{1}{8}$, we have $2\delta < \frac{1}{4}$. By Lemma 3, we can conclude that:
$$\|e_k\|_\infty \le e_0^{L^k} + (1+\epsilon)\,2\delta, \quad \text{where } \tfrac{1}{16} > \epsilon \to 0 \text{ as } k\to\infty.$$
This finishes the proof.
Proposition 5 (Upper Bound on the First-Order Derivative). Suppose that $|C_{\{k,l\}}-1| = \delta < \frac{1}{8}$ for all $(k,l+1)\in[K]\times[L]$ and $\|a\|_\infty \le 1$. Then we have:
$$\Big|\frac{\partial\ell(\mathbf{C},A)}{\partial C_{\{k,l\}}}\Big| \le 4\Big(\frac{1}{3}\Big)^{l}\bar{b}_v^2 d.$$
Proof. Note that:
$$\frac{\partial\ell(\mathbf{C},A)}{\partial C_{\{k,l\}}} = \sum_{\hat{k}=k}^{K}\Big\langle\frac{\partial\ell(\mathbf{C},A)}{\partial e_{\hat{k}}},\;\frac{\partial e_{\hat{k}}}{\partial e_{\hat{k}-1}}\circ\cdots\circ\frac{\partial e_{k+1}}{\partial C_{\{k,l\}}}\Big\rangle.$$
By Eq. (22), we also have:
$$\frac{\partial e_{k+1}}{\partial e_k} = \mathrm{Diag}\big(Le_k^{L-1} + \hat{E}'_kc_k\big),$$
where $\mathrm{Diag}(e)$ is a diagonal matrix with diagonal entries $e$, and $\hat{E}'_k \in \mathbb{R}^{d\times(L+1)} := [\,\mathbf{0};\, e_k^{0};\,\cdots;\, Le_k^{L-1}\,]$; here $\mathbf{0}\in\mathbb{R}^d$ is the all-zero vector. Hence, it holds that:
$$\Big\|\frac{\partial e_{k+1}}{\partial e_k}\Big\| = \big\|Le_k^{L-1}+\hat{E}'_kc_k\big\|_\infty \le Le_k^{L-1} + \frac{e_k}{(1-e_k)^2}\,\delta,\tag{26}$$
where $e_k = \|e_k\|_\infty$ with a slight abuse of notation. By Proposition 4, we know that $e_k \le e_0^{L^k} + \frac{17}{8}\delta < \frac{1}{3}$. Thus, we have:
$$\Big\|\frac{\partial e_{k+1}}{\partial e_k}\Big\| \le \frac{1}{4}.\tag{27}$$
It is easy to see:
$$\frac{\partial\ell(\mathbf{C},A)}{\partial e_k} = \sum_{\hat{k}=k}^{K}\Big\langle\frac{\partial\ell(\mathbf{C},A)}{\partial e_{\hat{k}}},\;\frac{\partial e_{\hat{k}}}{\partial e_{\hat{k}-1}}\circ\cdots\circ\frac{\partial e_{k+1}}{\partial e_k}\Big\rangle.$$
Hence, we have
$$\Big\|\frac{\partial\ell(\mathbf{C},A)}{\partial e_k}\Big\|_2 \le \sum_{\hat{k}=k}^{K}\big\|v_{\hat{k}}\odot a\odot(v_{\hat{k}}\odot a\odot e_{\hat{k}})\big\|_2\cdot 4^{k-\hat{k}} < 2\bar{b}_v^2\sqrt{d}.$$
According to Eq. (22), we notice:
$$\Big\|\frac{\partial e_{k+1}}{\partial C_{\{k,l\}}}\Big\|_2 = \big\|e_k^{l+1}-e_k^{l}\big\|_2 \le 2\big\|e_k^{l}\big\|_2.$$
Combining everything, we conclude that:
$$\Big|\frac{\partial\ell(\mathbf{C},A)}{\partial C_{\{k,l\}}}\Big| \le \sum_{\hat{k}=k}^{K}4^{k-\hat{k}}\Big(4\bar{b}_v^2\sqrt{d}\,\big\|e_k^{l}\big\|_2\Big) \le 4\Big(\frac{1}{3}\Big)^{l}\bar{b}_v^2 d.$$
We finish the proof.
Proposition 6 (Second-Order Approximation). Define a set of coefficient collections around 1:
$$\mathcal{C}^* := \Big\{\mathbf{C}\;\Big|\;\forall\,C_{\{k,l\}}\in\mathbf{C},\;|C_{\{k,l\}}-1|\le\frac{1}{8}\Big\},\quad (k,l)\in[K]\times[L],\;L\ge 4.$$
For any $\mathbf{C}_1,\mathbf{C}_2\in\mathcal{C}^*$, we have:
$$\ell(\mathbf{C}_1,A) = \ell(\mathbf{C}_2,A) + \Big\langle\frac{\partial\ell(\mathbf{C}_2,A)}{\partial\mathbf{C}},\,\mathbf{C}_1-\mathbf{C}_2\Big\rangle + O\big(\bar{b}_v^2dKL\big)\,\|\mathbf{C}_1-\mathbf{C}_2\|_2^2.$$
Proof. Note that for any $\mathbf{C}\in\mathcal{C}^*$, we have:
$$\begin{aligned}
\frac{\partial^2\ell(\mathbf{C},A)}{\partial C_{\{k,l\}}\partial C_{\{k',l'\}}}
&= \Big\langle\frac{\partial\ell(\mathbf{C},A)}{\partial e_K},\;\frac{\partial^2 e_K}{\partial C_{\{k,l\}}\partial C_{\{k',l'\}}}\Big\rangle
 + \Big\langle\frac{\partial e_K}{\partial C_{\{k,l\}}},\;\frac{\partial^2\ell(\mathbf{C},A)}{\partial e_K\partial e_K}\frac{\partial e_K}{\partial C_{\{k',l'\}}}\Big\rangle\\
&= \Big\langle\frac{\partial\ell(\mathbf{C},A)}{\partial e_K},\;\frac{\partial e_K}{\partial e_{K-1}}\circ\frac{\partial^2 e_{K-1}}{\partial C_{\{k,l\}}\partial C_{\{k',l'\}}}\Big\rangle
 + \Big\langle\frac{\partial\ell(\mathbf{C},A)}{\partial e_K},\;\frac{\partial^2 e_K}{\partial e_{K-1}\partial e_{K-1}}\Big(\frac{\partial e_{K-1}}{\partial C_{\{k,l\}}},\frac{\partial e_{K-1}}{\partial C_{\{k',l'\}}}\Big)\Big\rangle\\
&\quad + \Big\langle\frac{\partial e_K}{\partial C_{\{k,l\}}},\;\frac{\partial^2\ell(\mathbf{C},A)}{\partial e_K\partial e_K}\frac{\partial e_K}{\partial C_{\{k',l'\}}}\Big\rangle\\
&= \sum_{\hat{k}=\max\{k,k'\}}^{K}\Big\langle\frac{\partial\ell(\mathbf{C},A)}{\partial e_K},\;\frac{\partial e_K}{\partial e_{K-1}}\circ\cdots\circ\frac{\partial^2 e_{\hat{k}+1}}{\partial e_{\hat{k}}\partial e_{\hat{k}}}\Big(\frac{\partial e_{\hat{k}}}{\partial C_{\{k,l\}}},\frac{\partial e_{\hat{k}}}{\partial C_{\{k',l'\}}}\Big)\Big\rangle + \Delta(k,k',l,l'),
\end{aligned}$$
where we let $e_{K+1} := \ell(\mathbf{C},A)$ with a slight abuse of notation, and:
$$\Delta(k,k',l,l') = \begin{cases} 0, & \text{if } (k,l)=(k',l'),\\[4pt] \Big\langle\frac{\partial\ell(\mathbf{C},A)}{\partial e_K},\;\frac{\partial e_K}{\partial e_{K-1}}\circ\cdots\circ\frac{\partial(e_k^{l+1}-e_k^{l})}{\partial C_{\{k',l'\}}}\Big\rangle, & \text{otherwise.}\end{cases}$$
Hence we can upper bound the entry $\frac{\partial^2\ell(\mathbf{C},A)}{\partial C_{\{k,l\}}\partial C_{\{k',l'\}}}$. From the proof of Proposition 5, we know that:
$$\Big\|\frac{\partial e_{k+1}}{\partial e_k}\Big\| \le \frac{1}{4}, \qquad \Big\|\frac{\partial e_{\hat{k}}}{\partial C_{\{k,l\}}}\Big\|_1 \le d\Big(\frac{1}{4}\Big)^{\hat{k}-k+l}, \qquad \Big\|\frac{\partial e_{\hat{k}}}{\partial C_{\{k,l\}}}\Big\|_2 \le 2\sqrt{d}\Big(\frac{1}{4}\Big)^{\hat{k}-k+l}.$$
Note that: ∂2ek+1 ∂ek∂ek = TDiag | 1. What is the main contribution of the paper regarding matrix inverse and SVD computations?
2. What is the assumption behind the proposed approach, and how does it limit its usefulness?
3. How does the proposed approach compare to previous methods in terms of computation and memory costs, and how does it handle large-scale data?
4. Are there any experimental results shown in the paper to support the effectiveness of the proposed approach? If so, how do they compare to other methods?
5. How does the paper's proposal differ from other approaches that aim to improve the efficiency of SVD, such as those mentioned in the review (Halko et al., Liberty et al.)? | Review | Review
This paper proposes a deep neural network-based approach that computes the matrix inverse and the SVD for massive data. Let A be a given matrix and X be a matrix corresponding to A^-1; the proposed approach is based on the observation that (AX)^-1 can be represented as a Neumann series. By introducing a learnable coefficient, this paper develops an approach that approximately computes the matrix inverse of the given matrix. Besides, it introduces an approach to compute the SVD based on the proposed approach.
In my understanding, the proposed approach is based on the assumption that lim_l (I-AX)^l =0 since it uses the Neumann series to represent (AX)^-1, although this is not explicitly described in the paper. This assumption could limit the usefulness of the proposed approach.
Although the motivation of the paper, as described in Section 1, is to improve the efficiency of matrix inversion and SVD, the computation and memory costs of the proposed approach are not shown in the paper. Therefore, it is difficult to evaluate the superiority of the proposed approach against previous approaches. As shown in Equation (3), D-Minv can be represented in an iterative form. Since the size of matrix E_k is d x d, D-Minv seems to need O(d^3 L) time and O(d^2) space. When handling large-scale data, these costs are impractical. Besides, since no experimental results are shown in the body of the paper, it is difficult to evaluate the effectiveness of the proposed approach; the paper is not self-contained. Even though experimental results are shown in the appendix, the matrix sizes are quite small, although the motivation of the paper is to handle massive data. Several approaches have been proposed to improve the efficiency of SVD, so it would be good to compare the proposed approach with these previous approaches:
Finding Structure with Randomness: Probabilistic Algorithms for Constructing Approximate Matrix Decompositions, Halko et al.,SIAM Rev., 53(2), 217–288.
Simple and Deterministic Matrix Sketching, Liberty et al., KDD 2013: 581-588 |
ICLR | Title
Resolving Training Biases via Influence-based Data Relabeling
Abstract
The performance of supervised learning methods easily suffers from the training bias issue caused by train-test distribution mismatch or label noise. Influence function is a technique that estimates the impacts of a training sample on the model’s predictions. Recent studies on data resampling have employed influence functions to identify harmful training samples that will degrade model’s test performance. They have shown that discarding or downweighting the identified harmful training samples is an effective way to resolve training biases. In this work, we move one step forward and propose an influence-based relabeling framework named RDIA for reusing harmful training samples toward better model performance. To achieve this, we use influence functions to estimate how relabeling a training sample would affect model’s test performance and further develop a novel relabeling function R. We theoretically prove that applying R to relabel harmful training samples allows the model to achieve lower test loss than simply discarding them for any classification tasks using cross-entropy loss. Extensive experiments on ten real-world datasets demonstrate RDIA outperforms the state-of-the-art data resampling methods and improves model’s robustness against label noise.
1 INTRODUCTION
Training data plays a critically important role in delivering the model's final performance. It has been well recognized that the training bias issue will compromise model performance to a large extent (Arpit et al., 2017). Specifically, there are two major scenarios where training biases show up. The first and most common scenario is that training samples involve corrupted labels that could originate at virtually any step of the data lifecycle (Anderson & McGrew, 2017; Dolatshah et al., 2018; Pei et al., 2020; Yu & Qin, 2020). The second scenario is that the training and test sets are sampled from the respective distributions Ptrain(x, y) and Ptest(x, y), but Ptrain is different from Ptest (Guo et al., 2020; He & Garcia, 2009; Zou et al., 2019). Both corrupted labels and distribution mismatch will hurt the generalization ability of a trained model (Fang et al., 2020; Zhang et al., 2017; Chen et al., 2021). We generally refer to training samples with corrupted labels or those inducing distribution mismatch as harmful samples.
Data resampling is a widely used strategy to deal with harmful training samples. Existing resampling approaches (Chawla et al., 2002; Mahmoody et al., 2016; Malach & Shalev-Shwartz, 2017; Ren et al., 2018) propose to assign different weights to training samples, which aim to mitigate the negative impacts of harmful samples on model’s generalization ability. Among them, most resampling approaches (Arazo et al., 2019; Han et al., 2018; Li et al., 2020; Wang et al., 2020a) rely on training loss to identify harmful samples from the whole training set. They follow the insight that the samples with higher training losses are very likely to have corrupted labels, and hence it is often beneficial to downweight them during the process of model training. However, such loss-based resampling methods have two limitations. First, they are only able to deal with the training biases caused by training samples with corrupted labels (aka noisy samples). Second, the small-loss trick typically holds true for deep models but not for any predictive models (Zhang et al., 2017). To address the limitations, one recent work (Wang et al., 2020b) proposes a new resampling scheme based on influence functions (Cook & Weisberg, 1980). The idea is to estimate the influence of each training sample on model’s predictions over the test set. Any training samples that would cause an
increase in the test loss are considered as harmful and will be downweighted afterwards. It is worth mentioning that the influence functions have been proved to deal with two forms of training biases effectively, and it is agnostic to a specific model or data type (Koh & Liang, 2017; Koh et al., 2019).
Inspired by the success of influence-based data resampling, in this paper, we would like to ask the following question: what would happen if we relabel harmful training data based on influence analysis results? Our motivations on performing data relabeling via influence analysis are twofold. (i) Relabeling noisy samples is able to prevent the model from memorizing the corrupted labels. (ii) Relabeling clean but biased samples is helpful to improve model’s robustness to harmful samples. Despite the potential benefits of data relabeling, it is still challenging to develop an influence-based relabeling approach that has a theoretical guarantee on the model’s performance improvement after training with relabeled data.
To answer the question, we first follow (Koh et al., 2019) to measure the influence of each training sample on model’s predictions and identify the harmful training samples which would cause an increase in the test loss. Next, we investigate whether relabeling the identified harmful samples rather than discarding them can improve the test performance. To achieve this, we start from binary classification tasks where relabeling a training sample is to convert its binary label from y to 1− y. We theoretically prove that relabeling harmful training data via influence analysis can achieve lower test loss than simply discarding them for binary classification. Furthermore, we design a novel relabeling function R for multi-class classification tasks and prove that the advantage of relabeling the identified harmful samples using R in reducing model’s test loss still holds. Following the influence-based resampling approaches (Wang et al., 2018; Ting & Brochu, 2018; Ren et al., 2020; Wang et al., 2020b), we only use the test loss for theoretical analysis and empirically calculate influence function with a small but unbiased validation set by assuming the validation set is sampled from the same distribution as the test set. In this way, using the validation loss to calculate the influence function is an unbiased estimation of the true influence function. Otherwise, the problem may lie in the category of transfer learning which is beyond the scope of this work.
To summarize, this work makes the following contributions. First, we propose to combine influence functions with data relabeling for reducing training biases and we develop an end-to-end influencebased relabeling framework named RDIA that reuses harmful training samples toward better model performance. Second, we design a novel relabeling function R and theoretically prove that applying R over harmful training samples identified by influence functions is able to achieve lower test loss for any classification tasks using cross-entropy loss function. Third, we conduct extensive experiments on real datasets in different domains. The results demonstrate that (i) RDIA is effective in reusing harmful training samples towards higher model performance, surpassing the existing influence-based resampling approaches, and (ii) RDIA improves model’s robustness to label noise, outperforming the current resampling methods by large margins.
2 BACKGROUND
Let $D = \{(x_i, y_i) \in X \times Y \mid 1 \le i \le N\}$ be the training set, which is sampled from $P_{train}(x, y)$. Let $z_i = (x_i, y_i)$, where $x_i \in \mathbb{R}^d$ and $y_i \in \mathbb{R}^K$. Let $\phi(x, \theta)$ be a model's prediction for $x$, where $\theta \in \mathbb{R}^p$ is the parameter set of the model. We denote the loss of sample $z_i$ by $l(z_i, \theta) = L(y_i, \phi(x_i, \theta))$ and use $l_i(\theta)$ to represent $l(z_i, \theta)$. We consider the standard empirical risk minimization (ERM) as the optimization objective. Formally, the empirical risk over $D$ is defined as $L(D;\theta) = \frac{1}{N}\sum_{i=1}^{N} l_i(\theta)$. Since our relabeling function depends on the loss function, we focus on the most effective and versatile loss for classification tasks, i.e., the cross-entropy (CE) loss.
Influence functions. Influence functions, stemming from Robust Statistics (Huber, 2004), have provided an efficient way to estimate how a small perturbation of a training sample would change the model's predictions (Koh & Liang, 2017; Koh et al., 2019; Yu et al., 2020). Let $\hat{\theta} = \arg\min_{\theta}\frac{1}{N}\sum_{n=1}^{N} l_n(\theta)$ be the optimal model parameters on convergence. When upweighting a training sample $z_i$ on its loss term by an infinitesimal step $\epsilon_i$, we obtain the new optimal parameters $\hat{\theta}_{\epsilon_i}$ on convergence as $\hat{\theta}_{\epsilon_i} = \arg\min_{\theta}\frac{1}{N}\sum_{n=1}^{N} l_n(\theta) + \epsilon_i l_i(\theta)$. Based on influence functions (Cook & Weisberg, 1980; Koh & Liang, 2017), we have the following closed-form expression to estimate the change in model parameters when upweighting $z_i$ by $\epsilon_i$:
$$\psi_{\theta}(z_i) \triangleq \frac{d\hat{\theta}_{\epsilon_i}}{d\epsilon_i}\Big|_{\epsilon_i=0} = -H_{\hat{\theta}}^{-1}\nabla_{\theta} l_i(\hat{\theta}),\tag{1}$$
where $H_{\hat{\theta}} \triangleq \frac{1}{N}\sum_{n=1}^{N}\nabla^2_{\theta} l_n(\hat{\theta})$ is the Hessian matrix and $\nabla^2_{\theta} l_n(\theta)$ is the second derivative of the loss at training point $z_n$ with respect to $\theta$. Using the chain rule, we can estimate the change of the model's prediction at a test point $z^c_j$ sampled from the given test distribution $P_{test}$ (Koh & Liang, 2017):
$$\Phi_{\theta}(z_i, z^c_j) \triangleq \frac{dl_j(\hat{\theta}_{\epsilon_i})}{d\epsilon_i}\Big|_{\epsilon_i=0} = -\nabla_{\theta} l_j(\hat{\theta})\,H_{\hat{\theta}}^{-1}\,\nabla_{\theta} l_i(\hat{\theta}).\tag{2}$$
At a fine-grained level, we can measure the influence of perturbing training sample $z_i$ from $(x_i, y_i)$ to $(x_i, y_i+\delta)$. Let $z_{i\delta} = (x_i, y_i+\delta)$ and the new loss be $l_i(z_{i\delta}, \theta) = L(y_i+\delta, \phi(x_i, \theta))$. According to (Koh & Liang, 2017), the optimal parameters $\hat{\theta}_{\epsilon_i\delta_i}$ after performing the perturbation on $z_i$ are $\hat{\theta}_{\epsilon_i\delta_i} = \arg\min_{\theta}\frac{1}{N}\sum_{n=1}^{N} l_n(\theta) + \epsilon_i l_i(z_{i\delta}, \theta) - \epsilon_i l_i(\theta)$. This allows us to estimate the change in model parameters after the fine-grained data perturbation using influence functions as:
$$\frac{d\hat{\theta}_{\epsilon_i\delta_i}}{d\epsilon_i}\Big|_{\epsilon_i=0} = \psi_{\theta}(z_{i\delta}) - \psi_{\theta}(z_i) = -H_{\hat{\theta}}^{-1}\big(\nabla_{\theta} l_i(z_{i\delta}, \hat{\theta}) - \nabla_{\theta} l_i(\hat{\theta})\big).\tag{3}$$
Further, the influence of perturbing $z_i$ to $z_{i\delta}$ on the model's prediction at test sample $z^c_j$ is the following:
$$\eta_{\theta\delta}(z_i, z^c_j) \triangleq \frac{dl_j(\hat{\theta}_{\epsilon_i\delta_i})}{d\epsilon_i}\Big|_{\epsilon_i=0} = -\nabla_{\theta} l_j(\hat{\theta})\,H_{\hat{\theta}}^{-1}\big(\nabla_{\theta} l_i(z_{i\delta}, \hat{\theta}) - \nabla_{\theta} l_i(\hat{\theta})\big).\tag{4}$$
It is important to notice that Eq. (4) holds for arbitrary $\delta$ as $\epsilon_i$ approaches 0. This provides the feasibility of measuring how relabeling a training sample could influence the model's predictions.
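To make Eq. (1)–(2) concrete, the following NumPy sketch (ours) computes $\Phi_{\theta}(z_i, z^c_j)$ for a binary logistic regression model whose Hessian is small enough to form explicitly; the damping term added for numerical invertibility and all function names are our own choices, and larger models would replace the explicit solve with Hessian-vector products.

```python
import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def grad_loss(theta, x, y):
    """Gradient of the cross-entropy loss at a single sample (x, y), y in {0, 1}."""
    return (sigmoid(x @ theta) - y) * x

def hessian(theta, X, Y, damp=1e-2):
    """Damped empirical Hessian: (1/N) sum_n p_n (1 - p_n) x_n x_n^T + damp * I."""
    p = sigmoid(X @ theta)
    H = (X * (p * (1 - p))[:, None]).T @ X / len(Y)
    return H + damp * np.eye(len(theta))

def influence(theta, X_tr, Y_tr, z_train, z_test, damp=1e-2):
    """Phi_theta(z_i, z_j^c) = -grad l_j(theta)^T H^{-1} grad l_i(theta), as in Eq. (2)."""
    H = hessian(theta, X_tr, Y_tr, damp)
    g_i = grad_loss(theta, *z_train)
    g_j = grad_loss(theta, *z_test)
    return -g_j @ np.linalg.solve(H, g_i)
```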
Influence-based resampling approaches. Previous researches (Koh & Liang, 2017; Wang et al., 2020b) have shown that influence functions have strong ability to identify harmful samples from the whole training set, which is agnostic to the specific model or data structure. Inspired by this, many influence-based resampling approaches (Ting & Brochu, 2018; Wang et al., 2018; 2020b) proposed to discard or downweight the identified harmful samples to reduce the test loss. However, different from previous works which focus on estimating the influence of each training sample on the test performance using Eq. (1)-(2), we perform the fine-grained perturbation on a training sample’s label and evaluate its influence using Eq. (3)-(4). Further, our work tries to build an end-to-end influence-based relabeling framework to reuse the harmful samples with a theoretical guarantee on the final model performance for any classification tasks. To be specific, we demonstrate that harmful training instances after being relabeled properly do make contributions to improve the final model performance, which provides a novel viewpoint on handling biased training data.
3 METHODOLOGY
Assume we have $Q = \{(x^c_j, y^c_j) \in X \times Y \mid 1 \le j \le M\}$ sampled from the test distribution $P_{test}$, and our objective is to minimize the test risk $L(Q;\theta) = \frac{1}{M}\sum_{j=1}^{M} l^c_j(\theta)$. Due to the harmful training samples in D, the optimal $\hat{\theta}$ which minimizes the empirical risk over the training set D may not be the best risk minimizer over Q. To solve this issue, we propose a novel data relabeling framework named RDIA, which aims to identify and reuse harmful training instances towards better model performance. We design a relabeling function R that allows the model to achieve lower test risk after being trained with the relabeled harmful instances, for any classification task. In what follows, we first give an overview of the RDIA framework. Then we describe the details of the major steps in RDIA and provide theoretical analysis on how the relabeled harmful samples are useful to further reduce the test risk. The algorithm of RDIA can be found in Appendix A.
3.1 OVERVIEW OF RDIA
Figure 1 provides an overview of RDIA, which consists of four major steps: Model training, Harmful samples identification, Relabeling harmful samples via influence analysis and Model retraining.
Step I: Model training is to train a model based on the training set D until convergence and get the model parameters θ̂. Step II: Harmful samples identification is to compute the influence of perturbing the loss term of each training sample zi ∈ D on test risk using Eq. (2) and then use it to identify the harmful training samples from D. We denote the set of identified harmful training samples as D− and the set of remaining training instances as D+. The details of this step are provided in Section 3.2. Step III: Relabeling harmful samples via influence analysis is to apply the relabeling function to modify the label of each identified harmful training sample in D− and obtain the set of relabeled harmful training samples denoted as D′−. We introduce our relabeling function R and theoretically prove that updating the model’s parameters with new training set D̂ = D′− ∪D+ can achieve lower test risk over Q than simply discarding or downweighting D− in Section 3.3. Step IV: Model retraining is to retrain the model using D̂ till convergence to get the final optimal parameters θ̂ R.
3.2 HARMFUL SAMPLES IDENTIFICATION
In the second step, we compute $D_- \subseteq D$, which contains the harmful training samples in the original training set D. Intuitively, a training sample is harmful to the model performance if removing it from the training set would reduce the test risk over Q. Based on influence functions, we can measure one sample's influence on the test risk without prohibitive leave-one-out training. According to Eq. (1)-(2), if we add a small perturbation $\epsilon_i$ on the loss term of $z_i$ to change its weight, the change of the test loss at a test sample $z^c_j \in Q$ can be estimated as follows:
$$l(z^c_j, \hat{\theta}_{\epsilon_i}) - l(z^c_j, \hat{\theta}) \approx \epsilon_i \times \Phi_{\theta}(z_i, z^c_j),\tag{5}$$
where $\Phi_{\theta}(\cdot,\cdot)$ is computed by Eq. (2). We then estimate the influence of perturbing $z_i$ on the whole test risk as follows:
$$l(Q, \hat{\theta}_{\epsilon_i}) - l(Q, \hat{\theta}) \approx \epsilon_i \times \sum_{j=1}^{M}\Phi_{\theta}(z_i, z^c_j).\tag{6}$$
Henceforth, we denote by $\Phi_{\theta}(z_i) = \sum_{j=1}^{M}\Phi_{\theta}(z_i, z^c_j)$ the influence of perturbing the loss term of $z_i$ on the test risk over Q. It is worth mentioning that given $\epsilon_i \in [-\frac{1}{N}, 0)$, Eq. (6) computes the influence of downweighting or discarding the training sample $z_i$. We denote $D_- = \{z_i \in D \mid \Phi_{\theta}(z_i) > 0\}$ as the harmful training samples. Similar to (Wang et al., 2020b), we assume that each training sample influences the test risk independently. We then derive Lemma 1.
Lemma 1. Discarding or downweighting the training samples in $D_- = \{z_i \in D \mid \Phi_{\theta}(z_i) > 0\}$ from D could lead to a model with lower test risk over Q:
$$L(Q, \hat{\theta}_{\epsilon}) - L(Q, \hat{\theta}) \approx -\frac{1}{N}\sum_{z_i\in D_-}\Phi_{\theta}(z_i) \le 0,\tag{7}$$
where $\hat{\theta}_{\epsilon}$ denotes the optimal model parameters obtained by updating the model's parameters after discarding or downweighting the samples in $D_-$.
Lemma 1 explains why influence-based resampling approaches have strong ability to resolve training biases and the proof of Lemma 1 is provided in Appendix B. In practice, to further tolerate the estimation error in Φθ(zi) which may result in the wrong identification of harmful training samples, we select D− = {zi ∈ D | Φθ(zi) > α} where the hyperparameter α controls the proportion of harmful samples to be relabeled eventually. We conduct the experiment to show the effects of α and the validation set in Section 5.3.
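As an illustration of this step, the sketch below (ours) computes $\Phi_{\theta}(z_i)$ for every training sample of a damped logistic regression with a single linear solve, using a validation set in place of Q, and returns the indices of $D_-$; the damping constant and the default α are illustrative.

```python
import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def select_harmful(theta, X_tr, Y_tr, X_val, Y_val, alpha=0.0, damp=1e-2):
    """Return (indices of D_-, all Phi_theta(z_i)) for binary logistic regression.

    Phi_theta(z_i) = -(sum_j grad l_j)^T H^{-1} grad l_i, i.e. Eq. (2) summed over the
    validation samples standing in for Q; one linear solve serves all training points."""
    p_tr, p_val = sigmoid(X_tr @ theta), sigmoid(X_val @ theta)
    # damped empirical Hessian of the training loss
    H = (X_tr * (p_tr * (1 - p_tr))[:, None]).T @ X_tr / len(Y_tr) + damp * np.eye(len(theta))
    # s = H^{-1} * (sum of validation-loss gradients)
    s = np.linalg.solve(H, ((p_val - Y_val)[:, None] * X_val).sum(axis=0))
    # per-sample training gradients and their influences on the validation risk
    phi = -((p_tr - Y_tr)[:, None] * X_tr) @ s
    return np.where(phi > alpha)[0], phi
```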
3.3 RELABELING HARMFUL SAMPLES VIA INFLUENCE ANALYSIS
In the third step, we propose a relabeling function R and ensure that the test risk is reduced after training with the relabeled harmful samples $D'_-$. To achieve this, we start from a special case (i.e., binary classification) and then extend to general classification tasks.
Relabeling harmful instances on binary classification. We start with binary classification, where the relabeling function R is straightforward: since the label set Y is {0, 1}, we have $R(z) = 1 - y$ for any $z = (x, y) \in D$. Recall that $\phi(x_i, \theta)$ denotes the model output for $x_i$, and the training loss of $z_i = (x_i, y_i) \in D$ is $l_i(\theta) = -y_i\log(\phi(x_i,\theta)) - (1-y_i)\log(1-\phi(x_i,\theta))$. To compute the influence of relabeling a training sample $z_i$ in D, we first consider the case that $y_i = 1$ and $R(z_i) = 0$. The loss $l_i(\theta)$ at $z_i$ is changed from $-\log(\phi(x_i,\theta))$ to $-\log(1-\phi(x_i,\theta))$. Letting $z_{iR} = (x_i, 1-y_i)$ and $w(z_i,\theta) = \nabla_{\theta} l_i(z_{iR},\theta) - \nabla_{\theta} l_i(\theta)$, we have:
$$w(z_i,\theta) = -\nabla_{\theta}\log(1-\phi(x_i,\theta)) + \nabla_{\theta}\log(\phi(x_i,\theta)) = -\frac{\nabla_{\theta} l_i(\theta)}{1-\phi(x_i,\theta)}.\tag{8}$$
According to Eq. (2), (4) and (8), the influence of relabeling $z_i$ on the model's prediction at test sample $z^c_j$ is:
$$\eta_{\theta R}(z_i, z^c_j) = -\nabla_{\theta} l_j(\hat{\theta})H_{\hat{\theta}}^{-1}w(z_i,\hat{\theta}) = -\nabla_{\theta} l_j(\hat{\theta})H_{\hat{\theta}}^{-1}\Big(-\frac{\nabla_{\theta} l_i(\hat{\theta})}{1-\phi(x_i,\hat{\theta})}\Big) = \frac{-\Phi_{\theta}(z_i, z^c_j)}{1-\phi(x_i,\hat{\theta})}.\tag{9}$$
Similarly, when the label $y_i$ of $z_i$ is 0 and $R(z_i) = 1$, we can derive the influence of relabeling $z_i$ at $z^c_j$ as $\eta_{\theta R}(z_i, z^c_j) = \frac{-\Phi_{\theta}(z_i, z^c_j)}{\phi(x_i,\hat{\theta})}$. Let $\hat{\theta}_{\epsilon_iR_i}$ denote the optimal parameters after relabeling $z_i$. Similar to Eq. (6), we can extend the influence of relabeling $z_i$ at $z^c_j$ to the whole test risk over Q as:
$$l(Q, \hat{\theta}_{\epsilon_iR_i}) - l(Q, \hat{\theta}) \approx \epsilon_i \times \sum_{j=1}^{M}\eta_{\theta R}(z_i, z^c_j).\tag{10}$$
According to Eq. (9), the influence of relabeling training samples, $\eta_{\theta R}(z_i, z^c_j)$, is related to the influence of perturbing the loss term of training samples, i.e., $\Phi_{\theta}(z_i, z^c_j)$. In this way, the change of the test risk caused by relabeling $z_i$ (Eq. (10)) and that caused by perturbing $z_i$ (Eq. (6)) are interrelated. We then derive Theorem 1; the proof can be found in Appendix B.
Theorem 1. In binary classification, let $\sigma$ be the infimum of $\frac{\phi(x_i,\hat{\theta})}{1-\phi(x_i,\hat{\theta})}$ and $\frac{1-\phi(x_i,\hat{\theta})}{\phi(x_i,\hat{\theta})}$, and $D_- = \{z_i \in D \mid \Phi_{\theta}(z_i) > 0\}$. Relabeling the samples in $D_-$ can achieve lower test risk than discarding or downweighting them from D, because the following inequality holds:
$$L(Q, \hat{\theta}_R) - L(Q, \hat{\theta}_{\epsilon}) \approx -\frac{\sigma}{N}\sum_{z_i\in D_-}\Phi_{\theta}(z_i) \le 0.\tag{11}$$
Theorem 1 shows that relabeling the samples in D− could achieve lower test risk than simply discarding or downweighting D− from the training set for binary classification tasks. We then provide some intuitions on the benefits of relabeling harmful training samples. In the context of binary classification, if a training sample z in D− is noisy, our relabeling method corrects the label noise and improve training data quality; otherwise, z is very likely to be biased due to its negative impact on the test risk, and relabeling it might improve model’s robustness.
Relabeling harmful instances on any classification tasks. We now introduce a relabeling function R that can be used for any classification task. For a classification problem with K class labels (K ≥ 2), we represent each label y as a K-dimensional one-hot vector. The CE loss at $z_i$ is $l_i(\theta) = -\sum_{k=1}^{K} y_{i,k}\log(\phi_k(x_i,\theta))$. Intuitively, the proposed relabeling function R should satisfy the following principles:
• Consistency: R should produce a K-dimensional label vector: $R(x_i, y_i) = y'_i \in [0,1]^K$.
• Effectiveness: applying R over the harmful training samples $D_-$ should guarantee that the resultant test risk is no larger than the one achieved by simply discarding them.
For the consistency principle, we require the new label $y'_i$ to be K-dimensional, where $y'_{i,k}$ describes the likelihood that $x_i$ takes the k-th class label, $k \in [1, K]$. Here we do not require $\sum_{k=1}^{K} y'_{i,k}$ to be one, because we focus on leveraging the identified harmful training samples towards better model performance instead of finding their truth labels.
Consider a training sample $z_i = (x_i, y_i)$ belonging to the m-th class ($m \in [1,K]$), i.e., $y_{i,m} = 1$. Letting $R(x_i, y_i) = y'_i$, we propose the following relabeling function R that fulfills the above two principles:
$$y'_{i,k} = \begin{cases} 0, & \text{if } k = m,\\ \log_{\phi_k}\sqrt[K-1]{1-\phi_m}, & \text{otherwise},\end{cases}\tag{12}$$
where $\phi(x_i,\hat{\theta}) = (\phi_1,\cdots,\phi_K)$ is the probability distribution over the K classes produced by the model with parameters $\hat{\theta}$, i.e., $\phi_i \in [0,1]$ and $\sum_{i=1}^{K}\phi_i = 1$. It is easy to check that our proposed relabeling function R in Eq. (12) satisfies the first principle. Interestingly, we can verify that for K = 2 we have $R(z_i) = 1 - y_i$. We further prove the effectiveness of R using Lemma 2.
Lemma 2. When applying the relabeling function R in Eq. (12) over a training sample $z_i \in D$ with class label m, the CE loss $l_i(\theta)$ at $z_i$ is changed from $-\log(\phi_m(x_i,\theta))$ to $-\log(1-\phi_m(x_i,\theta))$.
It is interesting to verify that this change in the loss $l_i(\theta)$ acts as an extension of the binary case. Similar to Theorem 1, we can derive the following theorem using Eq. (9)-(10).
Theorem 2. In multi-class classification, let $\phi_y(x_i,\hat{\theta})$ denote the probability that $z_i$ is classified as its true class label by the model with the optimal parameters $\hat{\theta}$ on D, and let $\sigma$ be the infimum of $\frac{\phi_y(x_i,\hat{\theta})}{1-\phi_y(x_i,\hat{\theta})}$. Relabeling the samples in $D_- = \{z_i \in D \mid \Phi_{\theta}(z_i) > 0\}$ with R leads to a test risk lower than the one achieved by discarding or downweighting $D_-$. Formally, we have:
$$L(Q, \hat{\theta}_R) - L(Q, \hat{\theta}_{\epsilon}) \approx -\frac{\sigma}{N}\sum_{z_i\in D_-}\Phi_{\theta}(z_i) \le 0.\tag{13}$$
Theorem 2 shows that relabeling with our proposed R reduces the test risk further than simply discarding or downweighting $D_-$ from the training set, for any classification task. The detailed proofs of Lemma 2 and Theorem 2 are provided in Appendix B.
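To make R concrete, the following NumPy sketch (ours) implements Eq. (12) and numerically checks the statement of Lemma 2, namely that the cross-entropy loss under the new label equals $-\log(1-\phi_m)$:

```python
import numpy as np

def relabel(phi, m):
    """Relabeling function R of Eq. (12).
    phi: the model's predicted distribution over K classes at the harmful sample.
    m:   index of the sample's original (one-hot) class.
    Returns y' with y'_m = 0 and y'_k = log_{phi_k}((1 - phi_m)^{1/(K-1)}) for k != m."""
    K = len(phi)
    y_new = np.log(1.0 - phi[m]) / ((K - 1) * np.log(phi))
    y_new[m] = 0.0
    return y_new

def cross_entropy(y, phi):
    return -np.sum(y * np.log(phi))

# Lemma 2 check: the CE loss under the new label equals -log(1 - phi_m).
phi = np.random.default_rng(0).dirichlet(np.ones(5))
m = int(np.argmax(phi))
assert np.isclose(cross_entropy(relabel(phi, m), phi), -np.log(1.0 - phi[m]))
```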
4 DISCUSSIONS
In this section, we provide a numerical analysis of the superior performance of RDIA against other influence-based resampling approaches (Wang et al., 2018; 2020b). Then we discuss an extension of RDIA that exploits training-loss information.
Numerical analysis. Consider a training point $z_i \in D_-$ belonging to class m, where $D_-$ is specified in Section 3.2. According to Eq. (13), if we use R to assign $z_i$ a new label $y'_i = R(z_i)$ instead of discarding or downweighting $z_i$, the difference in the test risk over Q can be computed as:
$$g(z_i) = l(Q, \hat{\theta}_{\epsilon_iR_i}) - l(Q, \hat{\theta}_{\epsilon_i}) \approx -\frac{1}{N}\times\frac{\phi_m(x_i,\hat{\theta})}{1-\phi_m(x_i,\hat{\theta})}\,\Phi_{\theta}(z_i).\tag{14}$$
Since $z_i \in D_-$, the $\Phi_{\theta}(z_i)$ in Eq. (14) is positive, and hence we have $g(z_i) < 0$. If the model's prediction for $z_i$ with the optimal parameters $\hat{\theta}$ is correct, $\phi_m(x_i,\hat{\theta})$ is the largest component in the vector $\phi(x_i,\hat{\theta})$. We consider such a $z_i$ a more harmful sample because it has a negative influence on the test loss, yet the model has learnt some features from $z_i$ that connect $x_i$ to class m. In practice, $z_i$ is very likely to be a noisy or biased training sample. Interestingly, from Eq. (14) we can see that a small increase in $\phi_m(x_i,\hat{\theta})$ leads to a rapid increase in $|g(z_i)|$. This indicates that relabeling such more harmful training samples leads to significant performance gain.
Extension of RDIA. In practice, due to the complexity of calculating the influence functions, identifying harmful samples via influence analysis could incur high computational cost, especially when
training complex models like deep neural networks. To address the problems, we further extend RDIA by using training loss to identify harmful samples for deep models, and we dub this extension as RDIA-LS. We empirically show that RDIA-LS is effective and efficient to handle training data with corrupted labels for deep learning, which spotlights the great scalability of our approach. The details of RDIA-LS are provided in Appendix F.
5 EXPERIMENTS
In this section, we conduct experiments to evaluate the effectiveness and robustness of our RDIA. We also perform an ablation study to show how the hyperparameter α and the size of the validation set affect the performance of RDIA. The visualization of identified harmful samples and the comparison with other loss-based approaches are provided in Appendix E and Appendix G.
5.1 EXPERIMENTAL SETTINGS
Datasets. To verify the effectiveness of RDIA, we perform extensive experiments on ten public datasets from different domains, including NLP, CV, CTR, etc. Since all the datasets are clean, we build a noise transition matrix $P = \{P_{ij}\}_{K\times K}$ to verify the robustness of our proposed approaches in combating noisy labels, where K denotes the number of classes and $P_{ij}$ denotes the probability of a clean label i being flipped to a noisy label j. In our experiments, we use the noise ratio τ to determine how many labels are manually corrupted, and each clean label has the same probability of being flipped to any other class, i.e., $P_{ij} = \frac{\tau}{K-1}$. More details about the statistics of the datasets and Tr-Va-Te divisions are provided in Appendix C.
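For reference, the corruption procedure described above can be sketched as follows (ours; the function name and seeding are illustrative):

```python
import numpy as np

def inject_symmetric_noise(labels, num_classes, tau, seed=0):
    """Flip each clean label with probability tau to a uniformly chosen different class,
    i.e. P_ij = tau / (K - 1) for j != i and P_ii = 1 - tau."""
    rng = np.random.default_rng(seed)
    noisy = np.asarray(labels).copy()
    for i in np.where(rng.random(len(noisy)) < tau)[0]:
        choices = [c for c in range(num_classes) if c != noisy[i]]
        noisy[i] = rng.choice(choices)
    return noisy
```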
Comparison methods. We compared our proposed relabeling method RDIA with the following baselines, all of which are agnostic of specific model or data structure. (1) ERM: it means training a model with all the training data with the cross-entropy loss. (2) Random: it is a basic relabeling method that randomly selects and changes the label of training samples. (3) OptLR (Ting & Brochu, 2018): it is a weighted sampling method which assigns each training sample with a weight proportional to its impact on the change in model’s parameters ψθ. Specifically, the weight of zi is max{α,min{1, λψθ(zi)}}. We set α and λ to be 1/max{ψθ(zi)} and 1/max{Φθ(zi)}, respectively. (4) Dropout (Wang et al., 2018): it is an unweighted subsampling method which simply discards D− from the training set, i.e., removing all training data with negative influence on the test loss. (5) UIDS (Wang et al., 2020b): it it is an unweighted subsampling method which uses Linear sampling method or Sigmoid sampling method to resample the training data based on influence functions Φθ(zi). It is the best-performing method among all the existing influence-based methods.
We implemented all the comparison methods by using their published source codes in Pytorch and ran all the experiments on a server with 2 Intel Xeon 1.7GHz CPUs, 128 GB of RAM and a single NVIDIA 2080 Ti GPU. All the baselines are tuned with clean validation data for best model performance. To measure the performance of all the approaches, we followed (Wang et al., 2020b) and used the test loss as the metric since we aim to optimize the test loss via influence analysis.
Implementation details. For each of the ten datasets, we adopted logistic regression (convex optimization) as the binary classifier (for MNIST and CIFAR10, we randomly choose two classes to perform binary classification). As for multi-class classification, we implemented two deep models (non-convex optimization), LeNet (2 convolutional layers and 1 fully connected layers) and a CNN with 6 convolutional layers followed by 2 fully connected layers used in (Wang et al., 2019) on MNIST and CIFAR10. The hyperparameter α is also tuned in [0, 0.001, 0.002, ...,0.01] with the clean validation set for best performance. More detailed settings are provided in Appendix C.
5.2 EXPERIMENTAL RESULTS
Effectiveness of RDIA. To verify the effectiveness of RDIA, we conduct experiments on 10 clean datasets with three different models. The experiments are repeated 5 times, and the averaged test loss with standard deviation is reported in Table 1 and Table 2. We have the following important observations.
First, our proposed RDIA yields the lowest test loss over 9 out of 10 datasets using logistic regression. It outperforms ERM on all the datasets, which indicates the effectiveness of relabeling
training samples via influence functions to resolve training biases. Furthermore, RDIA outperforms the state-of-the-art resampling method UIDS on all the datasets except Avazu, which indicates the effectiveness of reusing harmful training samples via relabeling towards higher model performance.
Second, when training deep models, RDIA achieves the best test loss on MNIST+LeNet, MNIST+CNN, and CIFAR10+CNN, where it outperforms UIDS by a large margin. We observe LeNet performs much worse than CNN on CIFAR10 using the original training set (i.e., the results of ERM) due to its simple architecture. Note that the poor classification results for clean and unbiased training data would interfere the identification of true harmful training samples. Hence, RDIA performs similarly to Random which introduces random noises into the training set and the performance suffers. But we want to emphasize that when training a more suitable model (e.g., CNN) on CIFAR10, RDIA is more effective to improve model’s performance.
Third, Random performs worse than ERM on all the cases except on Adult. This indicates that randomly relabeling harmful training samples may easily inject noisy data that hurt model’s performance significantly. In contrast, our proposed relabeling function is effective to assign appropriate labels to harmful training samples that benefit the test loss.
Robustness to label noises. In order to investigate the robustness of RDIA to noise labels, we set the noise ratio τ from 0 to 0.8 to manually corrupt the labels in the four datasets from different domains, while the results on the other datasets have similar trends. Figure 2 reports the average test loss of all the influence-based approaches on four noisy datasets with different noise ratios. First, thanks to the high accuracy of estimating the influence functions on logistic regression, all influence-based approaches consistently outperform ERM, which indicates the effectiveness of using influence functions to identify noisy samples. Figure 2(a), 2(b) and 2(c) show that RDIA performs significantly better than the other influence-based approaches. As the noise ratio becomes larger, the test loss of all the other approaches increases while the test loss reported by RDIA is generally unchanged. This verifies the robustness of RDIAto high noise ratios. We surprisingly find that RDIA at 0.8 noise ratio achieves lower test loss than ERM at zero noise ratio. The reason might be that RDIA could leverage all the training samples and fix noisy labels properly to boost the per-
formance. Second, Figure 2(d) shows that when combating noisy labels for deep models, RDIA still suffers from the noisy labels like other baselines because the estimation of influence functions with deep models is not accurate enough to filter out all noisy labels. However, RDIA could still relabel the most negative samples to reduce the test loss.
5.3 ABLATION STUDY
Finally, we investigate the effect of different values of hyperparameters α and size of validation set on the performance of RDIA using MNIST with logistic regression. Hyparameter α. As discussed in Section 3.3, by varying α, we can derive the percentage of relabeled training data against the complete training set in RDIA. Table 3 provides the results of how many samples are relabeled and how test loss is changed with different values of α under different noise ratios. First, when noise ratio equals to 0, there are few biased samples in the training set. In this case, simply relabeling all the identified harmful samples will hurt the performance while using relatively larger α could report lower test loss. Second, when noise ratio is 0.8, RDIA achieves better performance with smaller α. This is reasonable since most of training samples involve label noises and increasing α facilitates the relabeling of noisy samples. Size of the validation set. As discussed in Section 3.3, we use the validation set instead of the test set to estimate the influence of each training sample. Table 4 shows the results of how the number of validation samples affects the model performance. We conduct the experiments under 40% noise rates and find the optimal hyperparameter α ∈ [0.0002, 0.01] to get the best results of RDIA. We have the following observations. 1) Using only 100 validation samples with RDIA achieves 35% lower test loss than ERM. 2) As the number of validation samples increases, RDIA significantly outperforms ERM, achieving up to 90% relative lower in test loss. The reason is that, as the number of validation sets increases, the validation set can gradually reflect the true distribution of test data. In this way, the estimated influence functions are accurate enough to filter out most harmful training samples for the test set. 3) RDIA consistently outperforms UIDS with different sizes of validation set, which empirically shows the effectiveness of our relabeling functionR.
6 CONCLUSION
In this paper, we propose to perform data relabeling based on influence functions to resolve the training bias issue. We develop a novel relabeling framework named RDIA, which reuses the information of harmful training samples identified by influence analysis towards higher model performance. We theoretically prove that RDIA can further reduce the test loss than simply discarding harmful training samples on any classification tasks using the cross-entropy loss function. Extensive experiments on real datasets verify the effectiveness of RDIA in enhancing model’s robustness and final performance, compared with various resampling and relabeling techniques.
Reproducibility: We clarify the assumptions in Section 2 and provide the complete proofs of the Lemmas and Theorems in Appendix B. The statistics of the datasets, the data processing, and the details of the experimental settings are described in Appendix C. Our code can be found at https://github.com/Viperccc/RDIA.
ACKNOWLEDGMENT
This work is supported by Shanghai Municipal Science and Technology Major Project (2021SHZDZX0102), the Tencent Wechat Rhino-Bird Focused Research Program, and SJTU Global Strategic Partnership Fund (2021 SJTU-HKUST). Yanyan Shen is the corresponding author of this paper.
Appendix
In this appendix, we first provide the algorithm of RDIA (Appendix A) and the complete proofs of the Lemmas and Theorems (Appendix B) in the main text. Then we give the details of the experimental settings (Appendix C), the extensive analysis of our approach (Appendix D), and the visualization of identified harmful samples (Appendix E). We then describe RDIA-LS, an extension of RDIA, to spotlight the scalability of our approach RDIA (Appendix F) and provide empirical results to show that RDIA-LS is effective and efficient to handle training data with corrupted labels for deep learning (Appendix G). Finally, we provide additional discussions about the existing data relabeling approaches (Appendix H)
A RDIA ALGORITHM
Algorithm 1: RDIA
Input: Training model θ, biased training set D = {(x_i, y_i)}_{i=1}^{N}, learning rate β, sample selection ratio α such that 0 ≤ α ≤ 1, small and unbiased set Q = {(x^c_j, y^c_j)}_{j=1}^{M}
1 Train the model θ with D until convergence to get θ̂;
2 Initialize D_-, D_+ = ∅;
3 for i ∈ [1, . . . , N] do
4   Calculate the influence Φ_θ(z_i) of the training sample z_i = (x_i, y_i) on Q using Eq. (6);
5   if Φ_θ(z_i) > α then
6     Relabel the identified harmful training sample: z'_i ← R(z_i);
7     D_- ← D_- ∪ {z'_i};
8   else if Φ_θ(z_i) < 0 then
9     D_+ ← D_+ ∪ {z_i};
10  end
11 end
12 Obtain the new training set D̂ ← D_- ∪ D_+;
13 Retrain the model with D̂ till convergence to get the final model parameters θ̂_R
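As a small end-to-end instantiation of Algorithm 1, the sketch below (ours) runs the four steps for binary ℓ2-regularized logistic regression with scikit-learn; in the binary case, relabeling a harmful sample simply flips its label. The damping of the Hessian, the use of a validation set in place of Q, and all names are our own illustrative choices.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def rdia_binary(X_tr, y_tr, X_val, y_val, alpha=0.0, C=10.0, damp=1e-2):
    """Sketch of Algorithm 1 for binary logistic regression (bias term omitted)."""
    # Step I: train on the (possibly biased) training set until convergence
    clf = LogisticRegression(C=C, fit_intercept=False).fit(X_tr, y_tr)
    theta = clf.coef_.ravel()
    # Step II: influence of every training sample on the validation risk (stand-in for Q)
    p_tr, p_val = sigmoid(X_tr @ theta), sigmoid(X_val @ theta)
    H = (X_tr * (p_tr * (1 - p_tr))[:, None]).T @ X_tr / len(y_tr) + damp * np.eye(len(theta))
    s = np.linalg.solve(H, ((p_val - y_val)[:, None] * X_val).sum(axis=0))
    phi = -((p_tr - y_tr)[:, None] * X_tr) @ s
    harmful = phi > alpha
    # Step III: relabel the identified harmful samples (binary R(z) = 1 - y)
    y_new = y_tr.copy()
    y_new[harmful] = 1 - y_new[harmful]
    # Step IV: retrain on the relabeled training set
    return LogisticRegression(C=C, fit_intercept=False).fit(X_tr, y_new)
```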
B PROOFS FOR LEMMAS AND THEOREMS
B.1 PROOF OF LEMMA 1
Assume that the perturbation $\epsilon_i$ on $z_i$ is infinitesimal and that the influence of each training sample on the test risk is independent.
Lemma 1. Discarding or downweighting the training samples in $D_- = \{z_i \in D \mid \Phi_{\theta}(z_i) > 0\}$ from D could lead to a model with lower test risk over Q:
$$L(Q, \hat{\theta}_{\epsilon}) - L(Q, \hat{\theta}) \approx -\frac{1}{N}\sum_{z_i\in D_-}\Phi_{\theta}(z_i) \le 0,$$
where $\hat{\theta}_{\epsilon}$ denotes the optimal model parameters obtained by updating the model's parameters after discarding or downweighting the samples in $D_-$.
Proof. Recall that $\hat{\theta}_{\epsilon_i} = \arg\min_{\theta}\frac{1}{N}\sum_{n=1}^{N} l_n(\theta) + \epsilon_i l_i(\theta)$. In this way, downweighting the training sample $z_i \in D_-$ means setting $\epsilon_i \in [-\frac{1}{N}, 0)$ (note that $\epsilon_i = -\frac{1}{N}$ means discarding the training sample $z_i$). For convenience of analysis, we set all $\epsilon_i$ equal to $-\frac{1}{N}$ and let $\Phi_{\theta}(z_i) \triangleq \sum_{j=1}^{M}\Phi_{\theta}(z_i, z^c_j)$. According to Eq. (6), we can estimate how the test risk is changed by discarding or downweighting $z_i \in D_-$ as follows:
$$\begin{aligned}
L(Q, \hat{\theta}_{\epsilon}) - L(Q, \hat{\theta}) &= \sum_{z_i\in D_-}\sum_{j=1}^{M}\big(l(z^c_j, \hat{\theta}_{\epsilon_i}) - l(z^c_j, \hat{\theta})\big)\\
&\approx \sum_{z_i\in D_-}\epsilon_i\times\sum_{j=1}^{M}\Phi_{\theta}(z_i, z^c_j)\\
&= -\frac{1}{N}\sum_{z_i\in D_-}\Phi_{\theta}(z_i) \le 0.
\end{aligned}$$
B.2 PROOF OF THEOREM 1
Theorem 1. In binary classification, let $\sigma$ be the infimum of $\frac{\phi(x_i,\hat{\theta})}{1-\phi(x_i,\hat{\theta})}$ and $\frac{1-\phi(x_i,\hat{\theta})}{\phi(x_i,\hat{\theta})}$, and $D_- = \{z_i \in D \mid \Phi_{\theta}(z_i) > 0\}$. Relabeling the samples in $D_-$ can achieve lower test risk than discarding or downweighting them from D, because the following inequality holds:
$$L(Q, \hat{\theta}_R) - L(Q, \hat{\theta}_{\epsilon}) \approx -\frac{\sigma}{N}\sum_{z_i\in D_-}\Phi_{\theta}(z_i) \le 0.$$
Proof. Based on Eq. (9), we have:
$$\frac{\eta_{\theta R}(z_i, z^c_j)}{\Phi_{\theta}(z_i, z^c_j)} + 1 = \begin{cases} -\dfrac{1-\phi(x_i,\hat{\theta})}{\phi(x_i,\hat{\theta})}, & \text{if } y_i = 0,\\[8pt] -\dfrac{\phi(x_i,\hat{\theta})}{1-\phi(x_i,\hat{\theta})}, & \text{if } y_i = 1.\end{cases}$$
It is worth mentioning that $\hat{\theta}_{\epsilon_iR_i} = \arg\min_{\theta}\frac{1}{N}\sum_{n=1}^{N} l_n(\theta) + \epsilon_i l_i(z_{iR},\theta) - \epsilon_i l_i(\theta)$. In this way, relabeling the training sample $z_i \in D_-$ means setting $\epsilon_i = \frac{1}{N}$.
Similar to the proof of Lemma 1, according to Eq. (6) and Eq. (10), we have:
$$\begin{aligned}
L(Q, \hat{\theta}_R) - L(Q, \hat{\theta}_{\epsilon}) &= L(Q, \hat{\theta}_R) - L(Q, \hat{\theta}) + L(Q, \hat{\theta}) - L(Q, \hat{\theta}_{\epsilon})\\
&= \sum_{z_i\in D_-}\sum_{j=1}^{M}\Big(l(z^c_j, \hat{\theta}_{\epsilon_iR_i}) - l(z^c_j, \hat{\theta}) - \big(l(z^c_j, \hat{\theta}_{\epsilon_i}) - l(z^c_j, \hat{\theta})\big)\Big)\\
&\le \sum_{z_i\in D_-}\Big(\sum_{j=1}^{M}\frac{1}{N}\eta_{\theta R}(z_i, z^c_j) + \frac{1}{N}\sum_{j=1}^{M}\Phi_{\theta}(z_i, z^c_j)\Big)\\
&= \frac{1}{N}\sum_{z_i\in D_-}\sum_{j=1}^{M}\Big(\frac{\eta_{\theta R}(z_i, z^c_j)}{\Phi_{\theta}(z_i, z^c_j)} + 1\Big)\Phi_{\theta}(z_i, z^c_j)\\
&\le -\frac{\sigma}{N}\sum_{z_i\in D_-}\Phi_{\theta}(z_i) \le 0.
\end{aligned}$$
B.3 PROOF OF LEMMA 2
Lemma 2. When applying the relabeling functionR in Eq. (12) over a training sample zi ∈ D with a class label m, the CE loss li(θ) at zi is changed from − log(ϕm(xi, θ)) to − log(1− ϕm(xi, θ)).
Proof. Recall that the model’s prediction at xi is ϕ(xi, θ) = (ϕ1, ϕ2, ..., ϕK) and our relabeling function is:
y′i,k =
{ 0, if k = m
logϕk K−1√1− ϕm, otherwise
Here we assume the training example zi belongs to class m which means that yim = 1 and the other components in the one-hot vector yi are 0. The prime CE loss is − log(ϕm(xi, θ)). If we use our relabeling function to change the label of xi, the loss at zi will be:
l̃(zi, θ) = − ∑ k 6=m logϕk K−1 √ 1− ϕm × log(ϕk)
= − ∑ k 6=m log( K−1 √ 1− ϕm) log(ϕk) × log(ϕk)
= − ∑ k 6=m log(1− ϕm) K − 1
= − log(1− ϕm)
In this way, if we use relabeling functionR to change the label of example zi, the loss function will be changed from − log(ϕm(xi, θ)) to − log(1− ϕm(xi, θ)).
B.4 PROOF OF THEOREM 2
Theorem 2. In multi-class classification, let $\phi_y(x_i,\hat{\theta})$ denote the probability that $z_i$ is classified as its true class label by the model with the optimal parameters $\hat{\theta}$ on D, and let $\sigma$ be the infimum of $\frac{\phi_y(x_i,\hat{\theta})}{1-\phi_y(x_i,\hat{\theta})}$. Relabeling the samples in $D_- = \{z_i \in D \mid \Phi_{\theta}(z_i) > 0\}$ with R leads to a test risk lower than the one achieved by discarding or downweighting $D_-$. Formally, we have:
$$L(Q, \hat{\theta}_R) - L(Q, \hat{\theta}_{\epsilon}) \approx -\frac{\sigma}{N}\sum_{z_i\in D_-}\Phi_{\theta}(z_i) \le 0.$$
Proof. According to Lemma 2, Eq. (8) and Eq. (10), we can estimate the change of the test loss at a test sample $z^c_j \in Q$ caused by relabeling as follows:
$$l(z^c_j, \hat{\theta}_{\epsilon_iR_i}) - l(z^c_j, \hat{\theta}) \approx \epsilon_i\times\eta_{\theta R}(z_i, z^c_j) = -\frac{1}{N}\,\frac{1}{1-\phi_y(x_i,\hat{\theta})}\,\Phi_{\theta}(z_i, z^c_j).$$
Further, we can derive the following:
$$\begin{aligned}
L(Q, \hat{\theta}_R) - L(Q, \hat{\theta}_{\epsilon}) &= L(Q, \hat{\theta}_R) - L(Q, \hat{\theta}) + L(Q, \hat{\theta}) - L(Q, \hat{\theta}_{\epsilon})\\
&= \sum_{z_i\in D_-}\sum_{j=1}^{M}\Big(l(z^c_j, \hat{\theta}_{\epsilon_iR_i}) - l(z^c_j, \hat{\theta}) - \big(l(z^c_j, \hat{\theta}_{\epsilon_i}) - l(z^c_j, \hat{\theta})\big)\Big)\\
&\le \sum_{z_i\in D_-}\Big(\sum_{j=1}^{M}\frac{1}{N}\eta_{\theta R}(z_i, z^c_j) + \frac{1}{N}\sum_{j=1}^{M}\Phi_{\theta}(z_i, z^c_j)\Big)\\
&= \frac{1}{N}\sum_{z_i\in D_-}\sum_{j=1}^{M}\Big(\frac{-1}{1-\phi_y(x_i,\hat{\theta})} + 1\Big)\Phi_{\theta}(z_i, z^c_j)\\
&\le -\frac{\sigma}{N}\sum_{z_i\in D_-}\Phi_{\theta}(z_i) \le 0.
\end{aligned}$$
C EXPERIMENTAL SETTINGS
C.1 THE STATISTICS OF THE DATASETS
Table 5 shows the statistics of the datasets. We perform extensive experiments on public datasets from different domains to verify the effectiveness and robustness of our approach RDIA. All the datasets can be found at https://www.csie.ntu.edu.tw/cjlin/libsvmtools/datasets/.
C.2 TR-VA-TE DIVISIONS
We follow the Tr-Va-Te (Training/Validation/Test set divisions) setting in Wang et al. (2020b) to measure the generalization ability of our approach RDIA. Specifically, the influence of each training instance is estimated with the validation set using the validation loss and the model’s performance is tested by an additional out-of-sample test set which ensures we do not utilize any information of the test data.
When training logistic regression, we randomly pick up 30% samples from the training set as the validation set. For different influence-based approaches, the training/validation/test sets are kept the same for fair comparison. Both MNIST and CIFAR10 are 10-classes image classification datasets while logistic regression can only handle binary classification. On MNIST, we select the number 1 and 7 as positive and negative classes, respectively; On CIFAR10, we perform binary classification on cat and dog. For each image, we convert all the pixels into a flattened feature vector where each pixel is scaled by 1/255.
When training deep models, due to the high time complexity of estimating influence functions, we randomly exclude 100 samples (1%) from the test sets of MNIST and CIFAR10 as the respective validation sets, and the remaining data is used for testing.
C.3 IMPLEMENTATION DETAILS
We used the Newton-CG algorithm (Martens, 2010) to calculate the influence functions for the logistic regression model and applied stochastic estimation (Agarwal et al., 2017) for the two deep models, with 1000 clean data points in the validation set. For the logistic regression model, we select the regularization term C = 0.1 for fair comparison. We adopt the Adam optimizer with a learning rate of 0.001 to train LeNet on MNIST. After calculating the influence functions and relabeling the identified harmful training samples using R, we reduce the learning rate to 10^{-5} and update the models until convergence. For CIFAR10, we use the SGD optimizer with a learning rate of 0.01 and a momentum of 0.9 to train the CNN. Then we change the learning rate to 0.001 and update the models based on the relabeled training set. Here we use different optimizers to train the models, which indicates that RDIA is independent of the update strategy used for model training. The batch size is set to 64 in all the experiments, and the hyperparameter α is tuned with the validation set for best performance.
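For the deep models, the Hessian cannot be formed explicitly; both Newton-CG and stochastic estimation only require Hessian-vector products. The following generic PyTorch sketch (ours, not the exact implementation used in the experiments) solves the damped system $(H+\lambda I)x = b$ with conjugate gradient, where the Hessian-vector product is obtained via double backward; the damping and iteration budget are illustrative.

```python
import torch

def hvp(loss, params, vec):
    """Hessian-vector product H v of `loss` w.r.t. `params` via double backward."""
    grads = torch.autograd.grad(loss, params, create_graph=True)
    flat = torch.cat([g.reshape(-1) for g in grads])
    return torch.cat([h.reshape(-1) for h in
                      torch.autograd.grad((flat * vec).sum(), params, retain_graph=True)])

def conjugate_gradient(loss, params, b, iters=50, damp=1e-2):
    """Approximately solve (H + damp * I) x = b with plain conjugate gradient."""
    x = torch.zeros_like(b)
    r, p = b.clone(), b.clone()
    rs = r @ r
    for _ in range(iters):
        Ap = hvp(loss, params, p) + damp * p
        step = rs / (p @ Ap)
        x, r = x + step * p, r - step * Ap
        rs_new = r @ r
        if rs_new.sqrt() < 1e-8:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x
```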
D EXTENSIVE ANALYSIS OF RDIA
D.1 COMPLEXITY ANALYSIS
According to Koh & Liang (2017), the time complexity of calculating influence function for one training sample(i.e., Eq. 2) is O(NP ), where N and P stand for the sizes of training set and model’s parameter set, respectively. Note that the time complexity of relabeling one sample is O(N). Considering the complexity of calculating influence functions, the time cost of relabeling harmful samples is negligible which means our RDIA is as fast as any influence-based approaches.
D.2 RELATIONSHIP WITH PROPENSITY SCORE
Propensity score (Rosenbaum & Rubin, 1983; Bickel et al., 2009) is a well-studied technique to solve the distribution mismatch (also called covariate shift) problem where training and testing sets are sampled from two different distributions Ptrain(x, y) and Ptest(x, y), respectively. Its basic idea is to assign the propensity score to each training sample to make the test risk unbiased. Unlike the influence function calculated by measuring the change of test loss, propensity score is calculated directly by estimating the probability of each training sample belonging to the test distribution. If we could estimate the training and test distribution accurately, we could also use the propensity score to replace the influence function for identifying whether the training sample is harmful. We leave it for the future work.
E VISUALIZATION OF IDENTIFIED HARMFUL SAMPLES
We provide examples of harmful samples identified by influence functions to illustrate the effectiveness of influence analysis. We apply the logistic regression on MNIST (class 1 and 7) and CIFAR10 (class cat and dog). The influence functions are estimated by Newton-CG algorithm (Martens, 2010). We provide the three most harmful images which have the highest influence scores and share the same label with the test sample.
Figure 3(a) shows three identified harmful training images for each test image when there are no flipped labels in the training set. We can see that the identified harmful training samples are visually different from the original pictures, which disturbs the model’s prediction on the test image. That is, the presence of clean but harmful training images would damage the model’s performance.
Figure 3(b) shows the identified harmful training images when 50% labels of training data have been flipped. It is easy to see that the harmful images have corrupted labels, which confirms the effectiveness of applying influence analysis to locate noisy samples.
F RDIA-LS: A LOSS-BASED RELABELING APPROACH
F.1 LIMITATIONS OF RDIA
In the main paper, we have discussed a novel data relabeling framework, RDIA, based on influence analysis. Thanks to the advantages of influence functions, RDIA is able to handle different types of training biases and is agnostic to the specific model or data type. However, the time complexity of estimating the influence of one training sample is O(NP), where N and P stand for the sizes of the training set and the model's parameter set, respectively. This is relatively high for deep models, which have thousands of parameters. Moreover, according to (Koh & Liang, 2017), the approximate estimation of influence functions on deep models may not be accurate, and hence the second step of RDIA suffers from false positives and false negatives. When harmful samples account for the majority of the training set, e.g., under high noise rates, it is difficult to filter out most of the harmful samples using the estimated influence.
Algorithm 2: RDIA-LS
Input: Deep neural network θ, learning rate β, training set D, training epochs T, iterations N, sample selection ratio ρ, underweight hyperparameter γ such that 0 ≤ γ ≤ 1.
1 for t ∈ [1, . . . , T] do
2   Shuffle the training set D;
3   for n ∈ [1, . . . , N] do
4     Fetch the n-th mini-batch D̄ from D;
5     Identify harmful samples using the training loss: //Step I
6     D̄_+ ← arg min_{D̄':|D̄'|≥ρ|D̄|} L(D̄', θ);
7     D̄_- ← D̄ \ D̄_+;
8     Relabel the identified harmful training samples: //Step II
9     D̄'_- ← R(D̄_-);
10    Obtain the loss as: L_R = γL(D̄'_-, θ) + (1 − γ)L(D̄_+, θ);
11    Update the model: θ ← θ − β∇_θ L_R; //Step III
12   end
13 end
F.2 RDIA-LS
To address the aforementioned limitations, we aim to extend RDIA to address this specific problem. Here we focus on combating noisy labels with deep models since label noise is usually a primary root cause of training bias. We notice that training loss has been used to filter out training samples with corrupted labels in many previous works (Arpit et al., 2017; Han et al., 2018; Wei et al., 2020; Yu et al., 2019). It is worth mentioning that the noisy training samples identified by training loss are not equivalent to the harmful ones identified by influence functions, because the latter are evaluated to have negative influence on the test performance. Nevertheless, since the selected high-loss training samples are very likely to involve corrupted labels, applying our relabeling function over them has the potential of correcting corrupted labels and benefiting the test performance. Besides, using training loss to identify harmful samples is more efficient as it avoids the estimation of influence functions. Hence, we propose to use training loss to identify noisy samples and develop a loss-based data relabeling approach named RDIA-LS, which can be viewed as an extension of RDIA for combating corrupted labels with deep models.
RDIA-LS consists of three steps: noisy sample identification, noisy sample relabeling, and model updating. It shares the same last two steps with RDIA. The only difference between RDIA-LS and RDIA is that RDIA-LS uses training loss to identify noisy samples in each training epoch, so it does not need to first train the model until convergence. Specifically, given a mini-batch of training instances D̄ ⊆ D, RDIA-LS feeds forward all the samples in D̄ and then sorts them in ascending order of their training losses. Following prior works (He & Garcia, 2009), we regard the large-loss instances as noisy and the small-loss instances as clean. We use the rate ρ to select the possibly clean training instances in D̄, i.e., $\bar{D}_+ = \arg\min_{\bar{D}':|\bar{D}'|\ge\rho|\bar{D}|} L(\bar{D}',\theta)$. The remaining high-loss training instances are treated as noisy samples, i.e., $\bar{D}_- = \bar{D}\setminus\bar{D}_+$. We follow (Han et al., 2020) to determine the value of the selection ratio ρ. After we have $\bar{D}_-$, we use our relabeling function R to relabel the samples in $\bar{D}_-$ and then update the model with $\bar{D}_+ \cup \bar{D}'_-$. In our implementation, we simply modify the loss of the identified noisy samples based on Lemma 2 without performing actual relabeling. We use the hyperparameter γ ∈ [0, 1] to control the model's tendency of learning from the clean instances and the relabeled noisy instances. The detailed procedure of RDIA-LS is provided in Algorithm 2.
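A sketch of one RDIA-LS mini-batch update (ours) is given below for a K-class classifier with softmax outputs: per Lemma 2, relabeling a selected high-loss sample with R is realized implicitly by replacing its loss $-\log\phi_y$ with $-\log(1-\phi_y)$, and γ balances the two parts of the objective.

```python
import torch
import torch.nn.functional as F

def rdia_ls_step(model, optimizer, x, y, rho, gamma, eps=1e-6):
    """One mini-batch update of RDIA-LS (Algorithm 2).
    rho:   fraction of the batch kept as small-loss ('clean') instances.
    gamma: weight on the relabeled large-loss instances."""
    logits = model(x)
    per_sample = F.cross_entropy(logits, y, reduction="none")   # -log phi_y per sample
    order = torch.argsort(per_sample)                           # ascending training loss
    n_clean = max(1, int(rho * len(y)))
    clean, noisy = order[:n_clean], order[n_clean:]
    loss = (1 - gamma) * per_sample[clean].mean()
    if len(noisy) > 0:
        phi_y = torch.exp(-per_sample[noisy]).clamp(max=1 - eps)  # predicted prob of given label
        loss = loss + gamma * (-torch.log(1 - phi_y)).mean()      # Lemma 2: relabeled loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```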
We conduct the additional experiments in Appendix G to empirically show that RDIA-LS is effective and efficient to handle training data with corrupted labels for deep learning, which spotlights the great scalability of our approach RDIA.
G PERFORMANCE EVALUATION OF RDIA-LS
We now conduct the experiments to evaluate the effectiveness and efficiency of RDIA-LS using DNNs on MNIST, CIFAR10, CIFAR100 and Clothing1M. The first three datasets are clean and corrupted artificially. Clothing1M is a widely used real-world dataset with noisy labels (Patrini et al., 2017).
G.1 IMPLEMENTATION DETAILS
We apply the same network structures used in the main paper: LeNet (2 convolutional layers and 1 fully connected layer) for MNIST, a CNN with 6 convolutional layers followed by 2 fully connected layers used in (Wang et al., 2019) for CIFAR10 and CIFAR100, and an 18-layer ResNet for Clothing1M. We follow the settings in (Han et al., 2018) for all the comparison methods. Specifically, for MNIST, CIFAR10 and CIFAR100, we use the Adam optimizer with a momentum of 0.9, an initial learning rate of 0.001, and a batch size of 128. We run 200 epochs (T = 200) in total and linearly decay the learning rate to zero from epoch 80 to epoch 200. As for Clothing1M, we use the Adam optimizer with a momentum of 0.9 and set the batch size to 64. We run 15 epochs in total and set the learning rate to 8 × 10^{-4}, 5 × 10^{-4} and 5 × 10^{-5} for five epochs each. We set the ratio of small-loss instances as $\rho = 1 - \min\{\frac{t}{T_k}\tau, \tau\}$, which changes dynamically with the current training epoch t, where $T_k = 5$ for Clothing1M and $T_k = 10$ for the other datasets. In this way, we can determine D_- and D_+ in each training epoch. If the noise ratio τ is not known in advance, we can use the method of (Yu et al., 2018) to estimate τ. The hyperparameter γ is tuned in {0.05, 0.10, 0.15, · · · , 0.95} with the validation set for best performance. If there is no validation set, we can use the training loss to select a clean subset from the training set as the validation set. Following loss-based approaches (Han et al., 2020; 2018; Jiang et al., 2018), we use the test accuracy as the metric, i.e., (#correct predictions) / (#test instances).
G.2 COMPARISON METHODS
We compare our proposed RDIA-LS with the following baselines. S-model (Goldberger & BenReuven, 2017) and F-correction (Patrini et al., 2017) are the existing data relabeling approach which aims to estimate the noisy transition matrix to correct the noisy labels. The last three approaches are the state-of-the-art loss-based resampling approaches. (1) ERM: it trains one network with all the training data using cross-entropy loss. (2) S-model (Goldberger & Ben-Reuven, 2017): it uses an additional softmax layer to model the noise transition matrix to correct the model (3) Fcorrection (Patrini et al., 2017): it corrects the prediction by the noise transition matrix estimated by the other network. (4) Self-teaching (Jiang et al., 2018): it trains one network with only the selected small-loss instances D+. (5) Co-teaching (Han et al., 2018): it trains two networks simultaneously and improves self-teaching by updating the parameters of each network with the small-loss instancesD+ selected by the peer network. (6) SIGUA (Han et al., 2020): it trains one network with the selected small-loss instances D+ and high-loss instances D− via gradient descent and gradient ascent, respectively.
G.3 EXPERIMENTAL RESULTS
Comparison with the Baselines.
RDIA-LS is proposed to combat noisy labels for deep learning. In order to evaluate how RDIA-LS improves the robustness of deep models, we perform experiments on MNIST+LeNet, CIFAR10+CNN and CIFAR100+CNN with different noise ratios and the real-world noisy dataset Clothing1M+ResNet18. The average results of test accuracy are reported in Table 6, Table 7, Table 8 and Table 9. We have the following observations. (1) RDIA-LS achieves the highest test accuracy in all the cases. When the noise ratio is 0.2, the improvement of RDIA-LS is relatively small. This is reasonable as the performance gain of RDIA-LS obtained from utilizing noisy data is restricted due to the low noise ratio. When the noise ratio exceeds 0.4, RDIA-LS significantly outperforms the existing loss-based approaches, achieving up to 5% relative improvement in test accuracy. It indicates that RDIA-LS can still effectively reuse harmful training instances to improve the model's robustness under high noise ratios. (2) RDIA-LS consistently performs better than S-model, F-correction, and SIGUA, which implies that using R to relabel noisy training samples identified by training loss is more effective than modeling the noise transition matrix or performing gradient ascent with identified noisy training instances. (3) RDIA-LS outperforms all the baselines on the real-world noisy dataset Clothing1M, which demonstrates the effectiveness of applying RDIA-LS in practice.
Comparison with RDIA.
Table 10 reports the running time of harmful/noisy sample identification in RDIA and RDIA-LS. We exclude the results of the loss-based identification (LS) on logistic regression since the small-loss trick is only reliable for filtering out noisy samples when training deep models. From the table, we can see that using influence functions to identify harmful samples for logistic regression is efficient. However, when training deep models with millions of parameters, using training loss to filter out noisy samples is much more efficient.
RDIA-LS is an extension of RDIA to combat noisy samples with deep models. The aforementioned experimental results show that RDIA-LS is effective and efficient for handling training data with corrupted labels in deep learning. However, it is worth noting that RDIA-LS relies on the small-loss trick, i.e., the assumption that samples with larger training losses are more likely to carry corrupted labels. Consequently, RDIA-LS is only suitable for training deep models against corrupted labels and could fail in situations where the small-loss trick does not hold, whereas RDIA has no such constraint.
H ADDITIONAL DISCUSSION ON DATA RELABELING
Existing relabeling approaches (Goldberger & Ben-Reuven, 2017; Jiang et al., 2018; Lee et al., 2018) have been proposed to combat noisy labels with DNNs. They focus on estimating the noise transition matrix to convert the corrupted labels to clean labels. However, current relabeling methods suffer from two limitations. First, they aim to find the true labels of the training samples, which means they can only deal with label noise. Second, they employ additional structures to correct the labels, which depend on specific model structures. For example, Goldberger et al. (Goldberger & Ben-Reuven, 2017) added an additional softmax layer to represent the noise transition matrix and CleanNet (Lee et al., 2018) used an auto-encoder to update the corrupted labels. In contrast, we aim to develop a relabeling function based on influence analysis to change the labels of harmful samples towards better model performance. We do not require output labels to be one-hot vectors since our objective is not to find the true labels of training samples. Besides, we extend our approach to RDIA-LS to effectively combat noisy samples for training DNNs, which outperforms the existing data relabeling approaches (Goldberger & Ben-Reuven, 2017; Jiang et al., 2018).
DUTI (Zhang et al., 2018) is an effective data relabeling approach which can debug and correct wrong labels in the training set. It uses a bi-level optimization scheme to recommend the most influential training samples for cleaning and to suggest the possibly cleaned labels. The proposed relabeling function in DUTI is different from our approach. Specifically, the relabeling function in DUTI is trained by a bi-level optimization using the gradient of the validation loss, while our proposed relabeling function does not involve gradients or the validation loss. | 1. What is the main contribution of the paper regarding training biases in machine learning models?
2. How does the proposed approach combine influence functions and data relabeling to mitigate training biases?
3. Can you explain why using influence analysis on the validation set rather than the test set may improve the consistency across the paper?
4. Would providing an algorithm for RDIA, such as done for RDIA-LS, help portray more details for further reproducibility?
5. How can the authors ensure that their solution will not negatively impact the generalization aspect, especially when dealing with sensitive attributes like race or gender? | Summary Of The Paper
Review | Summary Of The Paper
The authors present an approach and framework to mitigate training biases by combining influence functions and data relabeling. The idea behind training biases is that part of the data that is used to train the model does not accurately represent the real data distribution seen in the test set. Thus, having a mismatch between training and test data. This creates a generalization problem for the machine learning model.
Other authors have used different resampling approaches to try and address this problem, relying on the training loss and then relabeling the data; or by using influence functions and changing the weight of the harmful examples so that the effect on the test loss is lower. The current authors combine both approaches, and present a framework that relabels harmful training data based on influence functions (on the test set).
The results of their experiments show that they are able to reduce the test loss compared to other data resampling approaches.
Review
I believe the paper is very well-written and structured. I appreciate that the authors had taken time and consideration into writing an abstract and introduction that clearly motivates the problem in hand, gives enough background into the problem, and that clearly explains the solutions and the experimental results.
The paper seems to be solidly based in theoretical proofs of their methods, together with experimental results comparing it to some baselines plus the state-of-the-art approach. I liked that the limitations are clearly explained in the appendix.
All in all I think is a good paper.
Regarding weaknesses of the paper that I believe could help at improve the paper if addressed.
First, it is not until the reader reaches Section 3 that they realize the authors use influence analysis on the validation set rather than on the test set. I think that the authors should improve the consistency of using the term "test set" across the paper. Throughout the introduction, abstract, background, and part of the methodology, the validation set is referred to as the test set. This is then clarified to be the validation set, since the test set is only used for a proper test evaluation (without altering the training set). I would suggest either clarifying that at the beginning, or simply using "validation set" instead of "test set".
Algorithm for RDIA such as done for RDIA-LS (Algorithm 1) in the Appendix E. I believe this will help portray more details for further reproducibility.
Regarding reproducibility, the authors mention that the code is available in the Appendix. Do the authors mean the programming code? If so, I couldn't find a link there. A link with the source code would be beneficial for the same reasons as the point above.
My last and probably the most relevant concern is the following. Imagine we have an imbalanced dataset, and that the algorithm is not able to classify well that portion of the dataset with fewer examples. The problem in this case is that we don't have enough data of some particular group of instances, needing some sort of solution like data augmentation (for example). However, if we would apply the authors solution (or the other authors solution for that matter) we would be treating these examples as incorrect. In the authors case the positive aspect is that the examples won't be removed (as with some of the others solutions). But, how would relabeling those examples help the model learn about this specific category of instances? I am thinking on the lines of a dataset where we have sensitive attributes e.g race, gender. Where some of the people are less represented. From my perspective this solution might have a negative impact in the generalization aspect since the model might not really learn those people that might look different from the rest. Or it could be the case where it helps at mitigating this bias. This comment is more of an opening for a discussion with the authors, rather than "something to fix" . It would be great if the authors could just comment what they think on this matter during the discussion period. Since I am sure they have thought about this aspect, and I wouldn't want to have missed or misinterpreted some part of the paper. Just to clarify, my scoring hasn't been influenced by this last comment. |
ICLR | Title
Resolving Training Biases via Influence-based Data Relabeling
Abstract
The performance of supervised learning methods easily suffers from the training bias issue caused by train-test distribution mismatch or label noise. Influence function is a technique that estimates the impacts of a training sample on the model’s predictions. Recent studies on data resampling have employed influence functions to identify harmful training samples that will degrade model’s test performance. They have shown that discarding or downweighting the identified harmful training samples is an effective way to resolve training biases. In this work, we move one step forward and propose an influence-based relabeling framework named RDIA for reusing harmful training samples toward better model performance. To achieve this, we use influence functions to estimate how relabeling a training sample would affect model’s test performance and further develop a novel relabeling function R. We theoretically prove that applying R to relabel harmful training samples allows the model to achieve lower test loss than simply discarding them for any classification tasks using cross-entropy loss. Extensive experiments on ten real-world datasets demonstrate RDIA outperforms the state-of-the-art data resampling methods and improves model’s robustness against label noise.
1 INTRODUCTION
Training data plays an inevitably important role in delivering the model’s final performance. It has been well recognized that the training bias issue will compromise model performance to a large extent (Arpit et al., 2017). Specifically, there are two major scenarios where training biases show up. The first and most common scenario is that training samples involve corrupted labels that could be originated at possibly every step of the data lifecycle (Anderson & McGrew, 2017; Dolatshah et al., 2018; Pei et al., 2020; Yu & Qin, 2020). The second scenario is that the training and test sets are sampled from the respective distributions Ptrain(x, y) and Ptest(x, y), but Ptrain is different from Ptest (Guo et al., 2020; He & Garcia, 2009; Zou et al., 2019). Both corrupted labels and distribution mismatch will hurt the generalization ability of a trained model (Fang et al., 2020; Zhang et al., 2017; Chen et al., 2021). We generally refer to training samples with corrupted labels or those inducing distribution mismatch as harmful samples.
Data resampling is a widely used strategy to deal with harmful training samples. Existing resampling approaches (Chawla et al., 2002; Mahmoody et al., 2016; Malach & Shalev-Shwartz, 2017; Ren et al., 2018) propose to assign different weights to training samples, which aim to mitigate the negative impacts of harmful samples on model’s generalization ability. Among them, most resampling approaches (Arazo et al., 2019; Han et al., 2018; Li et al., 2020; Wang et al., 2020a) rely on training loss to identify harmful samples from the whole training set. They follow the insight that the samples with higher training losses are very likely to have corrupted labels, and hence it is often beneficial to downweight them during the process of model training. However, such loss-based resampling methods have two limitations. First, they are only able to deal with the training biases caused by training samples with corrupted labels (aka noisy samples). Second, the small-loss trick typically holds true for deep models but not for any predictive models (Zhang et al., 2017). To address the limitations, one recent work (Wang et al., 2020b) proposes a new resampling scheme based on influence functions (Cook & Weisberg, 1980). The idea is to estimate the influence of each training sample on model’s predictions over the test set. Any training samples that would cause an
increase in the test loss are considered as harmful and will be downweighted afterwards. It is worth mentioning that the influence functions have been proved to deal with two forms of training biases effectively, and it is agnostic to a specific model or data type (Koh & Liang, 2017; Koh et al., 2019).
Inspired by the success of influence-based data resampling, in this paper, we would like to ask the following question: what would happen if we relabel harmful training data based on influence analysis results? Our motivations on performing data relabeling via influence analysis are twofold. (i) Relabeling noisy samples is able to prevent the model from memorizing the corrupted labels. (ii) Relabeling clean but biased samples is helpful to improve model’s robustness to harmful samples. Despite the potential benefits of data relabeling, it is still challenging to develop an influence-based relabeling approach that has a theoretical guarantee on the model’s performance improvement after training with relabeled data.
To answer the question, we first follow (Koh et al., 2019) to measure the influence of each training sample on model’s predictions and identify the harmful training samples which would cause an increase in the test loss. Next, we investigate whether relabeling the identified harmful samples rather than discarding them can improve the test performance. To achieve this, we start from binary classification tasks where relabeling a training sample is to convert its binary label from y to 1− y. We theoretically prove that relabeling harmful training data via influence analysis can achieve lower test loss than simply discarding them for binary classification. Furthermore, we design a novel relabeling function R for multi-class classification tasks and prove that the advantage of relabeling the identified harmful samples using R in reducing model’s test loss still holds. Following the influence-based resampling approaches (Wang et al., 2018; Ting & Brochu, 2018; Ren et al., 2020; Wang et al., 2020b), we only use the test loss for theoretical analysis and empirically calculate influence function with a small but unbiased validation set by assuming the validation set is sampled from the same distribution as the test set. In this way, using the validation loss to calculate the influence function is an unbiased estimation of the true influence function. Otherwise, the problem may lie in the category of transfer learning which is beyond the scope of this work.
To summarize, this work makes the following contributions. First, we propose to combine influence functions with data relabeling for reducing training biases and we develop an end-to-end influencebased relabeling framework named RDIA that reuses harmful training samples toward better model performance. Second, we design a novel relabeling function R and theoretically prove that applying R over harmful training samples identified by influence functions is able to achieve lower test loss for any classification tasks using cross-entropy loss function. Third, we conduct extensive experiments on real datasets in different domains. The results demonstrate that (i) RDIA is effective in reusing harmful training samples towards higher model performance, surpassing the existing influence-based resampling approaches, and (ii) RDIA improves model’s robustness to label noise, outperforming the current resampling methods by large margins.
2 BACKGROUND
Let D = {(xi, yi) ∈ X × Y | 1 ≤ i ≤ N} be the training set sampled from Ptrain(x, y). Let zi = (xi, yi) where xi ∈ Rd and yi ∈ RK. Let ϕ(x, θ) be a model's prediction for x, where θ ∈ Rp is the parameter set of the model. We denote the loss of sample zi by l(zi, θ) = L(yi, ϕ(xi, θ)) and use li(θ) to represent l(zi, θ). We consider the standard empirical risk minimization (ERM) as the optimization objective. Formally, the empirical risk over D is defined as $L(D;\theta) = \frac{1}{N}\sum_{i=1}^{N} l_i(\theta)$. Since our relabeling function depends on the loss function, we focus on the most effective and versatile loss for any classification task, i.e., the cross-entropy (CE) loss.
Influence functions. Influence functions, stemming from Robust Statistics (Huber, 2004), have provided an efficient way to estimate how a small perturbation of a training sample would change the model's predictions (Koh & Liang, 2017; Koh et al., 2019; Yu et al., 2020). Let $\hat{\theta} = \arg\min_{\theta} \frac{1}{N}\sum_{n=1}^{N} l_n(\theta)$ be the optimal model parameters on convergence. When upweighting a training sample $z_i$ on its loss term by an infinitesimal step $\epsilon_i$, we obtain the new optimal parameters $\hat{\theta}_{\epsilon_i}$ on convergence as $\hat{\theta}_{\epsilon_i} = \arg\min_{\theta} \frac{1}{N}\sum_{n=1}^{N} l_n(\theta) + \epsilon_i l_i(\theta)$. Based on influence functions (Cook & Weisberg, 1980; Koh & Liang, 2017), we have the following closed-form expression to estimate the change in model parameters when upweighting $z_i$ by $\epsilon_i$:

$$\psi_\theta(z_i) \triangleq \frac{d\hat{\theta}_{\epsilon_i}}{d\epsilon_i}\Big|_{\epsilon_i=0} = -H_{\hat{\theta}}^{-1}\nabla_\theta l_i(\hat{\theta}), \quad (1)$$

where $H_{\hat{\theta}} \triangleq \frac{1}{N}\sum_{n=1}^{N}\nabla^2_\theta l_n(\hat{\theta})$ is the Hessian matrix and $\nabla^2_\theta l_n(\theta)$ is the second derivative of the loss at training point $z_n$ with respect to $\theta$. Using the chain rule, we can estimate the change of the model's prediction at a test sample $z^c_j$ sampled from the given test distribution Ptest (Koh & Liang, 2017):

$$\Phi_\theta(z_i, z^c_j) \triangleq \frac{d\, l_j(\hat{\theta}_{\epsilon_i})}{d\epsilon_i}\Big|_{\epsilon_i=0} = -\nabla_\theta l_j(\hat{\theta})\, H_{\hat{\theta}}^{-1}\, \nabla_\theta l_i(\hat{\theta}). \quad (2)$$

At a fine-grained level, we can measure the influence of perturbing training sample $z_i$ from $(x_i, y_i)$ to $(x_i, y_i + \delta)$. Let $z_{i\delta} = (x_i, y_i + \delta)$ and the new loss $l_i(z_{i\delta}, \theta) = L(y_i + \delta, \phi(x_i, \theta))$. According to (Koh & Liang, 2017), the optimal parameters $\hat{\theta}_{\epsilon_i \delta_i}$ after performing the perturbation on $z_i$ are $\hat{\theta}_{\epsilon_i \delta_i} = \arg\min_{\theta} \frac{1}{N}\sum_{n=1}^{N} l_n(\theta) + \epsilon_i l_i(z_{i\delta}, \theta) - \epsilon_i l_i(\theta)$. This allows us to estimate the change in model parameters after the fine-grained data perturbation using influence functions as:

$$\frac{d\hat{\theta}_{\epsilon_i \delta_i}}{d\epsilon_i}\Big|_{\epsilon_i=0} = \psi_\theta(z_{i\delta}) - \psi_\theta(z_i) = -H_{\hat{\theta}}^{-1}\big(\nabla_\theta l_i(z_{i\delta}, \hat{\theta}) - \nabla_\theta l_i(\hat{\theta})\big). \quad (3)$$

Further, the influence of perturbing $z_i$ by $z_{i\delta}$ on the model's prediction at test sample $z^c_j$ is the following:

$$\eta_{\theta\delta}(z_i, z^c_j) \triangleq \frac{d\, l_j(\hat{\theta}_{\epsilon_i \delta_i})}{d\epsilon_i}\Big|_{\epsilon_i=0} = -\nabla_\theta l_j(\hat{\theta})\, H_{\hat{\theta}}^{-1}\big(\nabla_\theta l_i(z_{i\delta}, \hat{\theta}) - \nabla_\theta l_i(\hat{\theta})\big). \quad (4)$$
It is important to notice that Eq. (4) holds for arbitrary δ when i is approaching 0. This provides the feasibility of measuring how relabeling a training samples could influence the model’s predictions.
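To make Eqs. (2) and (4) concrete, the following is a minimal NumPy sketch for a small model where the Hessian can be formed explicitly; the function names are ours, and the gradient/Hessian inputs are assumed to be computed elsewhere (for large models the approximations discussed in Appendix C would be used instead).

```python
import numpy as np

def influence_on_test_point(grad_train, grad_test, hessian):
    """Eq. (2): influence of upweighting a training point on the loss at a test point.

    grad_train, grad_test: gradients of the training/test losses at theta_hat (1-D arrays),
    hessian: empirical Hessian of the training loss at theta_hat (2-D array).
    """
    h_inv_grad = np.linalg.solve(hessian, grad_train)          # H^{-1} grad_train
    return -grad_test @ h_inv_grad

def influence_of_relabeling(grad_train, grad_train_relabel, grad_test, hessian):
    """Eq. (4): influence of perturbing a training label from y to y + delta."""
    h_inv_diff = np.linalg.solve(hessian, grad_train_relabel - grad_train)
    return -grad_test @ h_inv_diff
```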
Influence-based resampling approaches. Previous researches (Koh & Liang, 2017; Wang et al., 2020b) have shown that influence functions have strong ability to identify harmful samples from the whole training set, which is agnostic to the specific model or data structure. Inspired by this, many influence-based resampling approaches (Ting & Brochu, 2018; Wang et al., 2018; 2020b) proposed to discard or downweight the identified harmful samples to reduce the test loss. However, different from previous works which focus on estimating the influence of each training sample on the test performance using Eq. (1)-(2), we perform the fine-grained perturbation on a training sample’s label and evaluate its influence using Eq. (3)-(4). Further, our work tries to build an end-to-end influence-based relabeling framework to reuse the harmful samples with a theoretical guarantee on the final model performance for any classification tasks. To be specific, we demonstrate that harmful training instances after being relabeled properly do make contributions to improve the final model performance, which provides a novel viewpoint on handling biased training data.
3 METHODOLOGY
Assume we have Q = {(x^c_j, y^c_j) ∈ X × Y | 1 ≤ j ≤ M} sampled from the test distribution Ptest and our objective is to minimize the test risk $L(Q;\theta) = \frac{1}{M}\sum_{j=1}^{M} l^c_j(\theta)$. Due to the harmful training samples in D, the optimal θ̂ which minimizes the empirical risk over the training set D may not be the best risk minimizer over Q. To solve this issue, we propose a novel data relabeling framework named RDIA which aims to identify and reuse harmful training instances towards better model performance. We design a relabeling function R that allows the model to achieve lower test risk after being trained with the relabeled harmful instances for any classification task. In what follows, we first give an overview of the RDIA framework. Then we describe the details of the major steps in RDIA and provide theoretical analysis on how the relabeled harmful samples are useful to further reduce the test risk. The algorithm of RDIA can be found in Appendix A.
3.1 OVERVIEW OF RDIA
Figure 1 provides an overview of RDIA, which consists of four major steps: Model training, Harmful samples identification, Relabeling harmful samples via influence analysis and Model retraining.
Step I: Model training is to train a model based on the training set D until convergence and get the model parameters θ̂. Step II: Harmful samples identification is to compute the influence of perturbing the loss term of each training sample zi ∈ D on test risk using Eq. (2) and then use it to identify the harmful training samples from D. We denote the set of identified harmful training samples as D− and the set of remaining training instances as D+. The details of this step are provided in Section 3.2. Step III: Relabeling harmful samples via influence analysis is to apply the relabeling function to modify the label of each identified harmful training sample in D− and obtain the set of relabeled harmful training samples denoted as D′−. We introduce our relabeling function R and theoretically prove that updating the model’s parameters with new training set D̂ = D′− ∪D+ can achieve lower test risk over Q than simply discarding or downweighting D− in Section 3.3. Step IV: Model retraining is to retrain the model using D̂ till convergence to get the final optimal parameters θ̂ R.
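For readability, a minimal end-to-end sketch of the four steps is given below; the function names (train_model, influence_of, relabel) are illustrative placeholders, not part of any released implementation, and the non-harmful samples are simply kept as D+ following the description above.

```python
def rdia(train_model, influence_of, relabel, D, alpha):
    """High-level sketch of the four RDIA steps (names are illustrative).

    train_model: trains on a dataset and returns fitted parameters,
    influence_of: computes Phi_theta(z) for a training sample (Eq. (6)),
    relabel: the relabeling function R of Eq. (12), alpha: selection threshold.
    """
    theta_hat = train_model(D)                                          # Step I: train to convergence
    influences = [influence_of(z, theta_hat) for z in D]
    D_minus = [z for z, phi in zip(D, influences) if phi > alpha]       # Step II: harmful samples
    D_plus = [z for z, phi in zip(D, influences) if phi <= alpha]       # remaining samples
    D_minus_relabeled = [relabel(z, theta_hat) for z in D_minus]        # Step III: apply R
    return train_model(D_plus + D_minus_relabeled)                      # Step IV: retrain
```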
3.2 HARMFUL SAMPLES IDENTIFICATION
In the second step, we compute D− ⊆ D which contains harmful training samples from the original training setD. Intuitively, a training sample is harmful to the model performance if removing it from the training set would reduce the test risk overQ. Based on influence functions, we can measure one sample’s influence on test risk without prohibitive leave-one-out training. According to Eq. (1)-(2), if we add a small perturbation i on the loss term of zi to change its weight, the change of test loss at a test sample zcj ∈ Q can be estimated as follows:
$$l(z^c_j, \hat{\theta}_{\epsilon_i}) - l(z^c_j, \hat{\theta}) \approx \epsilon_i \times \Phi_\theta(z_i, z^c_j), \quad (5)$$

where $\Phi_\theta(\cdot, \cdot)$ is computed by Eq. (2). We then estimate the influence of perturbing zi on the whole test risk as follows:

$$l(Q, \hat{\theta}_{\epsilon_i}) - l(Q, \hat{\theta}) \approx \epsilon_i \times \sum_{j=1}^{M} \Phi_\theta(z_i, z^c_j). \quad (6)$$

Henceforth, we denote by $\Phi_\theta(z_i) = \sum_{j=1}^{M} \Phi_\theta(z_i, z^c_j)$ the influence of perturbing the loss term of zi on the test risk over Q. It is worth mentioning that given $\epsilon_i \in [-\frac{1}{N}, 0)$, Eq. (6) computes the influence of downweighting or discarding the training sample zi. We denote D− = {zi ∈ D | Φθ(zi) > 0} as harmful training samples. Similar to (Wang et al., 2020b), we assume that each training sample influences the test risk independently. We derive Lemma 1.

Lemma 1. Discarding or downweighting the training samples in D− = {zi ∈ D | Φθ(zi) > 0} from D could lead to a model with lower test risk over Q:

$$L(Q, \hat{\theta}_{\epsilon}) - L(Q, \hat{\theta}) \approx -\frac{1}{N} \sum_{z_i \in D_-} \Phi_\theta(z_i) \le 0, \quad (7)$$

where $\hat{\theta}_{\epsilon}$ denotes the optimal model parameters obtained by updating the model's parameters with discarding or downweighting samples in D−.
Lemma 1 explains why influence-based resampling approaches have strong ability to resolve training biases and the proof of Lemma 1 is provided in Appendix B. In practice, to further tolerate the estimation error in Φθ(zi) which may result in the wrong identification of harmful training samples, we select D− = {zi ∈ D | Φθ(zi) > α} where the hyperparameter α controls the proportion of harmful samples to be relabeled eventually. We conduct the experiment to show the effects of α and the validation set in Section 5.3.
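As a small illustration of this selection step, the sketch below splits the training set by the α-thresholded influence values and reports the test-risk change predicted by Lemma 1; the helper name is ours.

```python
import numpy as np

def select_harmful(influences, alpha=0.0):
    """Split training indices by the thresholded influence on the test risk.

    influences: array of Phi_theta(z_i) values (Eq. (6)); alpha: tolerance for
    estimation error. Returns (harmful_idx, remaining_idx, predicted_change),
    where predicted_change is the Lemma-1 estimate of the test-risk change
    obtained by discarding/downweighting the harmful set.
    """
    influences = np.asarray(influences)
    harmful_idx = np.where(influences > alpha)[0]
    remaining_idx = np.where(influences <= alpha)[0]
    predicted_change = -influences[harmful_idx].sum() / len(influences)   # Eq. (7), non-positive
    return harmful_idx, remaining_idx, predicted_change
```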
3.3 RELABELING HARMFUL SAMPLES VIA INFLUENCE ANALYSIS
In the third step, we propose a relabeling functionR and ensure the test risk would be reduced after training with the relabeled harmful samples D′−. To achieve this, we start from a special case (i.e., binary classification) and then extend to any classification tasks.
Relabeling harmful instances on binary classification. We start with binary classification where the relabeling function R is straightforward. Since the label set Y is {0, 1}, we have R(z) = 1 − y for any z = (x, y) ∈ D. Recall that ϕ(xi, θ) denotes the model output for xi and the training loss of zi = (xi, yi) ∈ D is $l_i(\theta) = -y_i \log(\phi(x_i, \theta)) - (1 - y_i)\log(1 - \phi(x_i, \theta))$. To compute the influence of relabeling a training sample zi in D, we first consider the case that yi = 1 and R(zi) = 0. The loss li(θ) at zi is changed from −log(ϕ(xi, θ)) to −log(1 − ϕ(xi, θ)). Letting $z_{iR} = (x_i, 1 - y_i)$ and $w(z_i, \theta) = \nabla_\theta l_i(z_{iR}, \theta) - \nabla_\theta l_i(\theta)$, we have:

$$w(z_i, \theta) = -\nabla_\theta \log(1 - \phi(x_i, \theta)) + \nabla_\theta \log(\phi(x_i, \theta)) = -\frac{\nabla_\theta l_i(\theta)}{1 - \phi(x_i, \theta)}. \quad (8)$$

According to Eq. (2), (4) and (8), the influence of relabeling zi on the model's prediction at test sample $z^c_j$ is:

$$\eta_{\theta R}(z_i, z^c_j) = -\nabla_\theta l_j(\hat{\theta})\, H_{\hat{\theta}}^{-1} w(z_i, \hat{\theta}) = -\nabla_\theta l_j(\hat{\theta})\, H_{\hat{\theta}}^{-1}\Big(-\frac{\nabla_\theta l_i(\hat{\theta})}{1 - \phi(x_i, \hat{\theta})}\Big) = \frac{-\Phi_\theta(z_i, z^c_j)}{1 - \phi(x_i, \hat{\theta})}. \quad (9)$$

Similarly, when the label yi in zi is 0 and R(zi) = 1, we can derive the influence of relabeling zi at $z^c_j$ as $\eta_{\theta R}(z_i, z^c_j) = \frac{-\Phi_\theta(z_i, z^c_j)}{\phi(x_i, \hat{\theta})}$. Let $\hat{\theta}_{\epsilon_i R_i}$ denote the optimal parameters after relabeling zi. Similar to Eq. (6), we can extend the influence of relabeling zi at $z^c_j$ to the whole test risk over Q as:

$$l(Q, \hat{\theta}_{\epsilon_i R_i}) - l(Q, \hat{\theta}) \approx \epsilon_i \times \sum_{j=1}^{M} \eta_{\theta R}(z_i, z^c_j). \quad (10)$$

According to Eq. (9), the influence of relabeling training samples, $\eta_{\theta R}(z_i, z^c_j)$, is related to the influence of perturbing the loss term of training samples, i.e., $\Phi_\theta(z_i, z^c_j)$. In this way, the change of test risk by relabeling zi (Eq. (10)) and that by perturbing zi (Eq. (6)) are interrelated. We then derive Theorem 1; the proof can be found in Appendix B.

Theorem 1. In binary classification, let σ be the infimum of $\frac{\phi(x_i, \hat{\theta})}{1 - \phi(x_i, \hat{\theta})}$ and $\frac{1 - \phi(x_i, \hat{\theta})}{\phi(x_i, \hat{\theta})}$, and D− = {zi ∈ D | Φθ(zi) > 0}. Relabeling the samples in D− can achieve lower test risk than discarding or downweighting them from D, because the following inequality holds:

$$L(Q, \hat{\theta}_{\epsilon_R}) - L(Q, \hat{\theta}_{\epsilon}) \approx -\frac{\sigma}{N} \sum_{z_i \in D_-} \Phi_\theta(z_i) \le 0. \quad (11)$$
Theorem 1 shows that relabeling the samples in D− could achieve lower test risk than simply discarding or downweighting D− from the training set for binary classification tasks. We then provide some intuitions on the benefits of relabeling harmful training samples. In the context of binary classification, if a training sample z in D− is noisy, our relabeling method corrects the label noise and improve training data quality; otherwise, z is very likely to be biased due to its negative impact on the test risk, and relabeling it might improve model’s robustness.
Relabeling harmful instances on any classification tasks. We now introduce a relabeling function R that can be used for any classification tasks. For a classification problem with K class labels (K ≥ 2), we represent each label y as a K-dimensional one-hot vector. The CE loss at zi is: li(θ) = − ∑K k=1 yi,k log(ϕk(xi, θ)). Intuitively, the proposed relabeling function R should satisfy the following principles:
• Consistency: R should produce a K-dimensional label vector: R(xi, yi) = y′i ∈ [0, 1]^K.
• Effectiveness: applying R over the harmful training samples D− should guarantee that the resultant test risk is no larger than the one achieved by simply discarding them.
For the consistency principle, we require the new label y′i to be K-dimensional, where y′_{i,k} describes the likelihood that xi takes the k-th class label, k ∈ [1, K]. Here we do not require $\sum_{k=1}^{K} y'_{i,k}$ to be one, because we focus on leveraging the identified harmful training samples towards better model performance instead of finding their true labels.
Consider a training sample zi = (xi, yi) belonging to the m-th class (m ∈ [1,K]), i.e., yi,m = 1. Let R(xi, yi) = y′i, we propose the following relabeling function R that fulfills the above two principles:
$$y'_{i,k} = \begin{cases} 0, & \text{if } k = m \\ \log_{\phi_k} \sqrt[K-1]{1 - \phi_m}, & \text{otherwise,} \end{cases} \quad (12)$$

where ϕ(xi, θ̂) = (ϕ1, · · · , ϕK) is the probability distribution over K classes produced by the model with parameters θ̂, i.e., ϕi ∈ [0, 1] and $\sum_{i=1}^{K} \phi_i = 1$. It is easy to check that our proposed relabeling function R in Eq. (12) satisfies the first principle. Interestingly, we can verify that for K = 2, we have R(zi) = 1 − yi. We further prove the effectiveness of R using Lemma 2.

Lemma 2. When applying the relabeling function R in Eq. (12) over a training sample zi ∈ D with a class label m, the CE loss li(θ) at zi is changed from −log(ϕm(xi, θ)) to −log(1 − ϕm(xi, θ)).
It is interesting to note that this change in the loss li(θ) acts as an extension of the binary case. Similar to Theorem 1, we can derive the following theorem using Eq. (9)-(10).

Theorem 2. In multi-class classification, let ϕy(xi, θ̂) denote the probability that zi is classified as its true class label by the model with the optimal parameters θ̂ on D, and σ be the infimum of $\frac{\phi_y(x_i, \hat{\theta})}{1 - \phi_y(x_i, \hat{\theta})}$. Relabeling the samples in D− = {zi ∈ D | Φθ(zi) > 0} with R leads to a test risk lower than the one achieved by discarding or downweighting D−. Formally, we have:

$$L(Q, \hat{\theta}_{\epsilon_R}) - L(Q, \hat{\theta}_{\epsilon}) \approx -\frac{\sigma}{N} \sum_{z_i \in D_-} \Phi_\theta(z_i) \le 0. \quad (13)$$
Theorem 2 shows that using our proposed relabeling R can further reduce the test risk than simply discarding or downweighting D− from the training set for any classification tasks. The detailed proofs of Lemma 2 and Theorem 2 are provided in Appendix B.
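For concreteness, a minimal NumPy sketch of the relabeling function R in Eq. (12), together with a numerical check of Lemma 2, is given below; the helper name is ours and the example probabilities are arbitrary.

```python
import numpy as np

def relabel_R(probs, m):
    """Relabeling function R of Eq. (12) for a sample with class label m.

    probs: model's softmax output (phi_1, ..., phi_K) at the sample.
    Returns the new (soft) label vector y'.
    """
    K = len(probs)
    y_new = np.zeros(K)
    for k in range(K):
        if k != m:
            # y'_k = log base phi_k of (1 - phi_m)^{1/(K-1)}
            y_new[k] = np.log((1.0 - probs[m]) ** (1.0 / (K - 1))) / np.log(probs[k])
    return y_new

# Lemma 2 check: the cross-entropy with the new label equals -log(1 - phi_m).
probs = np.array([0.7, 0.2, 0.1])
y_new = relabel_R(probs, m=0)
ce = -np.sum(y_new * np.log(probs))
assert np.isclose(ce, -np.log(1.0 - probs[0]))
```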
4 DISCUSSIONS
In this section, we provide numerical analysis on the superior performance of RDIA against other influence-based resampling approaches (Wang et al., 2018; 2020b). Then we discuss the extension of RDIA in exploiting training loss information. Numerical analysis. Consider a training point zi ∈ D− belonging to class m, where D− is specified in Section 3.2. According to Eq. (13), if we use R to assign zi a new label y′i = R(zi) instead of discarding or downweighting zi, the difference in the test risk over Q can be computed as:

$$g(z_i) = l_i(Q, \hat{\theta}_{\epsilon_i R_i}) - l_i(Q, \hat{\theta}_{\epsilon_i}) \approx -\frac{1}{N} \times \frac{\phi_m(x_i, \hat{\theta})}{1 - \phi_m(x_i, \hat{\theta})}\, \Phi_\theta(z_i). \quad (14)$$
zi ∈ D− means the Φθ(zi) in Eq. (14) is positive, and hence we have g(zi) < 0. If the model’s prediction for zi with the optimal parameters θ̂ is correct, ϕm(xi, θ̂) is the largest component in the vector ϕ(xi, θ̂). We consider zi as a more harmful sample because it has negative influence on the test loss yet the model has learnt some features from zi that connects xi to class m. In practice, zi is very likely to be a noisy or biased training sample. Interestingly, from Eq. (14), we can see that a small increase in ϕm(xi, θ̂) will lead to a rapid increase in g(zi). This indicates relabeling such more harmful training samples leads to significant performance gain. Extension of RDIA. In practice, due to the complexity of calculating the influence functions, identifying harmful samples via influence analysis could incur high computational cost, especially when
training complex models like deep neural networks. To address the problems, we further extend RDIA by using training loss to identify harmful samples for deep models, and we dub this extension as RDIA-LS. We empirically show that RDIA-LS is effective and efficient to handle training data with corrupted labels for deep learning, which spotlights the great scalability of our approach. The details of RDIA-LS are provided in Appendix F.
5 EXPERIMENTS
In this section, we conduct experiments to evaluate the effectiveness and robustness of our RDIA. We also perform the ablation study to show how hyperparameter α and the size of validation set affect the performance of RDIA. The visualization of identified harmful samples and comparison with other loss-based approaches are provided in Appendix D and Appendix G.
5.1 EXPERIMENTAL SETTINGS
Datasets. To verify the effectiveness of RDIA, we perform extensive experiments on ten public data sets from different domains, including NLP, CV, CTR, etc. Since all the datasets are clean, we build a noise transition matrix P = {P_{ij}}_{K×K} to verify the robustness of our proposed approaches for combating noisy labels, where K denotes the class number and P_{ij} denotes the probability of a clean label i being flipped to a noisy label j. In our experiment, we use the noise ratio τ to determine the rate of how many labels are manually corrupted and each clean label has the same probability of being flipped to the other classes, i.e., $P_{ij} = \frac{\tau}{K-1}$. More details about the statistics of the datasets and Tr-Va-Te divisions are provided in Appendix C.
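As an illustration of this symmetric corruption scheme, a minimal sketch of the label-flipping step is shown below; the helper name and random seed are our own choices.

```python
import numpy as np

def corrupt_labels(labels, tau, K, seed=0):
    """Symmetric label corruption used in the experiments (illustrative helper).

    Each label is kept with probability 1 - tau and flipped to one of the other
    K - 1 classes with probability tau / (K - 1), i.e. P_ij = tau / (K - 1) for i != j.
    """
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels).copy()
    flip = rng.random(len(labels)) < tau
    for i in np.where(flip)[0]:
        others = [c for c in range(K) if c != labels[i]]
        labels[i] = rng.choice(others)
    return labels
```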
Comparison methods. We compared our proposed relabeling method RDIA with the following baselines, all of which are agnostic of a specific model or data structure. (1) ERM: it trains a model with all the training data using the cross-entropy loss. (2) Random: it is a basic relabeling method that randomly selects and changes the labels of training samples. (3) OptLR (Ting & Brochu, 2018): it is a weighted sampling method which assigns each training sample a weight proportional to its impact on the change in the model's parameters ψθ. Specifically, the weight of zi is max{α, min{1, λψθ(zi)}}. We set α and λ to be 1/max{ψθ(zi)} and 1/max{Φθ(zi)}, respectively. (4) Dropout (Wang et al., 2018): it is an unweighted subsampling method which simply discards D− from the training set, i.e., removing all training data with negative influence on the test loss. (5) UIDS (Wang et al., 2020b): it is an unweighted subsampling method which uses a Linear or Sigmoid sampling scheme to resample the training data based on the influence functions Φθ(zi). It is the best-performing method among all the existing influence-based methods.
We implemented all the comparison methods by using their published source codes in Pytorch and ran all the experiments on a server with 2 Intel Xeon 1.7GHz CPUs, 128 GB of RAM and a single NVIDIA 2080 Ti GPU. All the baselines are tuned with clean validation data for best model performance. To measure the performance of all the approaches, we followed (Wang et al., 2020b) and used the test loss as the metric since we aim to optimize the test loss via influence analysis.
Implementation details. For each of the ten datasets, we adopted logistic regression (convex optimization) as the binary classifier (for MNIST and CIFAR10, we randomly choose two classes to perform binary classification). As for multi-class classification, we implemented two deep models (non-convex optimization), LeNet (2 convolutional layers and 1 fully connected layer) and a CNN with 6 convolutional layers followed by 2 fully connected layers used in (Wang et al., 2019), on MNIST and CIFAR10. The hyperparameter α is also tuned in [0, 0.001, 0.002, ..., 0.01] with the clean validation set for best performance. More detailed settings are provided in Appendix C.
5.2 EXPERIMENTAL RESULTS
Effectiveness of RDIA. To verify the effectiveness of RDIA, we conduct experiments on 10 clean datasets with three different models. The experiments are repeated 5 times and the averaged test loss with standard deviation results are reported in Table 1 and Table 2. We have the following important observations.
First, our proposed RDIA yields the lowest test loss over 9 out of 10 datasets using logistic regression. It outperforms ERM on all the datasets, which indicates the effectiveness of relabeling
training samples via influence functions to resolve training biases. Furthermore, RDIA outperforms the state-of-the-art resampling method UIDS on all the datasets except Avazu, which indicates the effectiveness of reusing harmful training samples via relabeling towards higher model performance.
Second, when training deep models, RDIA achieves the best test loss on MNIST+LeNet, MNIST+CNN, and CIFAR10+CNN, where it outperforms UIDS by a large margin. We observe LeNet performs much worse than CNN on CIFAR10 using the original training set (i.e., the results of ERM) due to its simple architecture. Note that the poor classification results for clean and unbiased training data would interfere with the identification of true harmful training samples. Hence, RDIA performs similarly to Random, which introduces random noise into the training set, and the performance suffers. But we want to emphasize that when training a more suitable model (e.g., CNN) on CIFAR10, RDIA is more effective in improving the model's performance.
Third, Random performs worse than ERM on all the cases except on Adult. This indicates that randomly relabeling harmful training samples may easily inject noisy data that hurt model’s performance significantly. In contrast, our proposed relabeling function is effective to assign appropriate labels to harmful training samples that benefit the test loss.
Robustness to label noise. In order to investigate the robustness of RDIA to noisy labels, we set the noise ratio τ from 0 to 0.8 to manually corrupt the labels in four datasets from different domains; the results on the other datasets have similar trends. Figure 2 reports the average test loss of all the influence-based approaches on the four noisy datasets with different noise ratios. First, thanks to the high accuracy of estimating the influence functions on logistic regression, all influence-based approaches consistently outperform ERM, which indicates the effectiveness of using influence functions to identify noisy samples. Figures 2(a), 2(b) and 2(c) show that RDIA performs significantly better than the other influence-based approaches. As the noise ratio becomes larger, the test loss of all the other approaches increases while the test loss reported by RDIA is generally unchanged. This verifies the robustness of RDIA to high noise ratios. We surprisingly find that RDIA at 0.8 noise ratio achieves lower test loss than ERM at zero noise ratio. The reason might be that RDIA could leverage all the training samples and fix noisy labels properly to boost the performance. Second, Figure 2(d) shows that when combating noisy labels for deep models, RDIA still suffers from the noisy labels like the other baselines because the estimation of influence functions with deep models is not accurate enough to filter out all noisy labels. However, RDIA could still relabel the most negative samples to reduce the test loss.
5.3 ABLATION STUDY
Finally, we investigate the effect of different values of the hyperparameter α and the size of the validation set on the performance of RDIA using MNIST with logistic regression. Hyperparameter α. As discussed in Section 3.3, by varying α, we can control the percentage of relabeled training data against the complete training set in RDIA. Table 3 provides the results of how many samples are relabeled and how the test loss changes with different values of α under different noise ratios. First, when the noise ratio equals 0, there are few biased samples in the training set. In this case, simply relabeling all the identified harmful samples will hurt the performance, while using a relatively larger α yields lower test loss. Second, when the noise ratio is 0.8, RDIA achieves better performance with smaller α. This is reasonable since most of the training samples involve label noise and decreasing α facilitates the relabeling of noisy samples. Size of the validation set. As discussed in Section 3.3, we use the validation set instead of the test set to estimate the influence of each training sample. Table 4 shows the results of how the number of validation samples affects the model performance. We conduct the experiments under 40% noise rates and find the optimal hyperparameter α ∈ [0.0002, 0.01] to get the best results of RDIA. We have the following observations. 1) Using only 100 validation samples, RDIA achieves 35% lower test loss than ERM. 2) As the number of validation samples increases, RDIA significantly outperforms ERM, achieving up to 90% relatively lower test loss. The reason is that, as the number of validation samples increases, the validation set can gradually reflect the true distribution of test data. In this way, the estimated influence functions are accurate enough to filter out most harmful training samples for the test set. 3) RDIA consistently outperforms UIDS with different sizes of the validation set, which empirically shows the effectiveness of our relabeling function R.
6 CONCLUSION
In this paper, we propose to perform data relabeling based on influence functions to resolve the training bias issue. We develop a novel relabeling framework named RDIA, which reuses the information of harmful training samples identified by influence analysis towards higher model performance. We theoretically prove that RDIA can further reduce the test loss than simply discarding harmful training samples on any classification tasks using the cross-entropy loss function. Extensive experiments on real datasets verify the effectiveness of RDIA in enhancing model’s robustness and final performance, compared with various resampling and relabeling techniques.
Reproducibility: We clarify the assumptions in Section 2 and provide the complete proofs of the lemmas and theorems in Appendix B. The statistics of the datasets, the data processing, and the details of the experimental settings are described in Appendix C. Our code can be found at https://github.com/Viperccc/RDIA.
ACKNOWLEDGMENT
This work is supported by Shanghai Municipal Science and Technology Major Project (2021SHZDZX0102), the Tencent Wechat Rhino-Bird Focused Research Program, and SJTU Global Strategic Partnership Fund (2021 SJTU-HKUST). Yanyan Shen is the corresponding author of this paper.
Appendix
In this appendix, we first provide the algorithm of RDIA (Appendix A) and the complete proofs of the Lemmas and Theorems (Appendix B) in the main text. Then we give the details of the experimental settings (Appendix C), the extensive analysis of our approach (Appendix D), and the visualization of identified harmful samples (Appendix E). We then describe RDIA-LS, an extension of RDIA, to spotlight the scalability of our approach RDIA (Appendix F) and provide empirical results to show that RDIA-LS is effective and efficient to handle training data with corrupted labels for deep learning (Appendix G). Finally, we provide additional discussions about the existing data relabeling approaches (Appendix H)
A RDIA ALGORITHM
Algorithm 1: RDIA
Input: Training model θ, biased training set D = {(xi, yi)}_{i=1}^N, learning rate β, sample selection ratio α such that 0 ≤ α ≤ 1, small and unbiased set Q = {(x^c_j, y^c_j)}_{j=1}^M
1: Train the model θ with D until convergence to get θ̂;
2: Initialize D−, D+ = ∅;
3: for i ∈ [1, . . . , N] do
4:   Calculate the influence Φθ(zi) of the training sample zi = (xi, yi) on Q using Eq. (6);
5:   if Φθ(zi) > α then
6:     Relabel the identified harmful training sample: z′i ← R(zi);
7:     D− ← D− ∪ {z′i};
8:   else if Φθ(zi) < 0 then
9:     D+ ← D+ ∪ {zi};
10:  end if
11: end for
12: Obtain the new training set D̂ ← D− ∪ D+;
13: Retrain the model with D̂ till convergence to get the final model parameters θ̂_{ε_R}
B PROOFS FOR LEMMAS AND THEOREMS
B.1 PROOF OF LEMMA 1
Assume the perturbation $\epsilon_i$ on zi is infinitesimal and the influence of each training sample on the test risk is independent.

Lemma 1. Discarding or downweighting the training samples in D− = {zi ∈ D | Φθ(zi) > 0} from D could lead to a model with lower test risk over Q:

$$L(Q, \hat{\theta}_{\epsilon}) - L(Q, \hat{\theta}) \approx -\frac{1}{N} \sum_{z_i \in D_-} \Phi_\theta(z_i) \le 0,$$

where $\hat{\theta}_{\epsilon}$ denotes the optimal model parameters obtained by updating the model's parameters with discarding or downweighting samples in D−.

Proof. Recall that $\hat{\theta}_{\epsilon_i} = \arg\min_{\theta} \frac{1}{N}\sum_{n=1}^{N} l_n(\theta) + \epsilon_i l_i(\theta)$. In this way, downweighting the training sample zi in D− means setting $\epsilon_i \in [-\frac{1}{N}, 0)$ (note that $\epsilon_i = -\frac{1}{N}$ means discarding training sample zi). For convenience of analysis, we set all $\epsilon_i$ equal to $-\frac{1}{N}$ and have $\Phi_\theta(z_i) \triangleq \sum_{j=1}^{M} \Phi_\theta(z_i, z^c_j)$. According to Eq. (6), we can estimate how the test risk is changed by discarding or downweighting zi ∈ D− as follows:

$$
\begin{aligned}
L(Q, \hat{\theta}_{\epsilon}) - L(Q, \hat{\theta}) &= \sum_{z_i \in D_-} \sum_{j=1}^{M} \big( l(z^c_j, \hat{\theta}_{\epsilon_i}) - l(z^c_j, \hat{\theta}) \big) \\
&\approx \sum_{z_i \in D_-} \epsilon_i \times \sum_{j=1}^{M} \Phi_\theta(z_i, z^c_j) \\
&= -\frac{1}{N} \sum_{z_i \in D_-} \Phi_\theta(z_i) \le 0
\end{aligned}
$$
B.2 PROOF OF THEOREM 1
Theorem 1. In binary classification, let σ be the infimum of $\frac{\phi(x_i, \hat{\theta})}{1 - \phi(x_i, \hat{\theta})}$ and $\frac{1 - \phi(x_i, \hat{\theta})}{\phi(x_i, \hat{\theta})}$, and D− = {zi ∈ D | Φθ(zi) > 0}. Relabeling the samples in D− can achieve lower test risk than discarding or downweighting them from D, because the following inequality holds:

$$L(Q, \hat{\theta}_{\epsilon_R}) - L(Q, \hat{\theta}_{\epsilon}) \approx -\frac{\sigma}{N} \sum_{z_i \in D_-} \Phi_\theta(z_i) \le 0.$$

Proof. Based on Eq. (9), we have:

$$\frac{\eta_{\theta R}(z_i, z^c_j)}{\Phi_\theta(z_i, z^c_j)} + 1 = \begin{cases} \dfrac{1 - \phi(x_i, \hat{\theta})}{-\phi(x_i, \hat{\theta})}, & \text{if } y_i = 0 \\[2mm] \dfrac{-\phi(x_i, \hat{\theta})}{1 - \phi(x_i, \hat{\theta})}, & \text{if } y_i = 1 \end{cases}$$

It is worth mentioning that $\hat{\theta}_{\epsilon_i R_i} = \arg\min_{\theta} \frac{1}{N}\sum_{n=1}^{N} l_n(\theta) + \epsilon_i l_i(z_{iR}, \theta) - \epsilon_i l_i(\theta)$. In this way, relabeling the training sample zi in D− means setting $\epsilon_i = \frac{1}{N}$.

Similar to the proof of Lemma 1, according to Eq. (6) and Eq. (10), we have:

$$
\begin{aligned}
L(Q, \hat{\theta}_{\epsilon_R}) - L(Q, \hat{\theta}_{\epsilon}) &= L(Q, \hat{\theta}_{\epsilon_R}) - L(Q, \hat{\theta}) + L(Q, \hat{\theta}) - L(Q, \hat{\theta}_{\epsilon}) \\
&= \sum_{z_i \in D_-} \sum_{j=1}^{M} \Big( l(z^c_j, \hat{\theta}_{\epsilon_i R_i}) - l(z^c_j, \hat{\theta}) - \big( l(z^c_j, \hat{\theta}_{\epsilon_i}) - l(z^c_j, \hat{\theta}) \big) \Big) \\
&\le \sum_{z_i \in D_-} \Big( \sum_{j=1}^{M} \frac{1}{N}\, \eta_{\theta R}(z_i, z^c_j) + \frac{1}{N} \sum_{j=1}^{M} \Phi_\theta(z_i, z^c_j) \Big) \\
&= \frac{1}{N} \sum_{z_i \in D_-} \sum_{j=1}^{M} \Big( \frac{\eta_{\theta R}(z_i, z^c_j)}{\Phi_\theta(z_i, z^c_j)} + 1 \Big) \Phi_\theta(z_i, z^c_j) \\
&\le -\frac{\sigma}{N} \sum_{z_i \in D_-} \Phi_\theta(z_i) \le 0
\end{aligned}
$$
B.3 PROOF OF LEMMA 2
Lemma 2. When applying the relabeling function R in Eq. (12) over a training sample zi ∈ D with a class label m, the CE loss li(θ) at zi is changed from −log(ϕm(xi, θ)) to −log(1 − ϕm(xi, θ)).

Proof. Recall that the model's prediction at xi is ϕ(xi, θ) = (ϕ1, ϕ2, ..., ϕK) and our relabeling function is:

$$y'_{i,k} = \begin{cases} 0, & \text{if } k = m \\ \log_{\phi_k} \sqrt[K-1]{1 - \phi_m}, & \text{otherwise} \end{cases}$$

Here we assume the training example zi belongs to class m, which means that y_{i,m} = 1 and the other components in the one-hot vector yi are 0. The original CE loss is −log(ϕm(xi, θ)). If we use our relabeling function to change the label of xi, the loss at zi will be:

$$
\begin{aligned}
\tilde{l}(z_i, \theta) &= -\sum_{k \ne m} \log_{\phi_k}\!\sqrt[K-1]{1 - \phi_m} \times \log(\phi_k) \\
&= -\sum_{k \ne m} \frac{\log\big(\sqrt[K-1]{1 - \phi_m}\big)}{\log(\phi_k)} \times \log(\phi_k) \\
&= -\sum_{k \ne m} \frac{\log(1 - \phi_m)}{K - 1} \\
&= -\log(1 - \phi_m)
\end{aligned}
$$

In this way, if we use the relabeling function R to change the label of example zi, the loss function will be changed from −log(ϕm(xi, θ)) to −log(1 − ϕm(xi, θ)).
B.4 PROOF OF THEOREM 2
Theorem 2. In multi-class classification, let ϕy(xi, θ̂) denote the probability that zi is classified as its true class label by the model with the optimal parameters θ̂ on D, and σ be the infimum of $\frac{\phi_y(x_i, \hat{\theta})}{1 - \phi_y(x_i, \hat{\theta})}$. Relabeling the samples in D− = {zi ∈ D | Φθ(zi) > 0} with R leads to a test risk lower than the one achieved by discarding or downweighting D−. Formally, we have:

$$L(Q, \hat{\theta}_{\epsilon_R}) - L(Q, \hat{\theta}_{\epsilon}) \approx -\frac{\sigma}{N} \sum_{z_i \in D_-} \Phi_\theta(z_i) \le 0.$$

Proof. According to Lemma 2, Eq. (8) and Eq. (10), we can estimate the change of test loss at a test sample $z^c_j$ ∈ Q caused by relabeling as follows:

$$l_i(z^c_j, \hat{\theta}_{\epsilon_i R_i}) - l_i(z^c_j, \hat{\theta}) \approx \epsilon_i \times \eta_{\theta R}(z_i, z^c_j) = -\frac{1}{N} \cdot \frac{1}{1 - \phi_y(x_i, \hat{\theta})}\, \Phi_\theta(z_i, z^c_j)$$

Further, we can derive the following:

$$
\begin{aligned}
L(Q, \hat{\theta}_{\epsilon_R}) - L(Q, \hat{\theta}_{\epsilon}) &= L(Q, \hat{\theta}_{\epsilon_R}) - L(Q, \hat{\theta}) + L(Q, \hat{\theta}) - L(Q, \hat{\theta}_{\epsilon}) \\
&= \sum_{z_i \in D_-} \sum_{j=1}^{M} \Big( l(z^c_j, \hat{\theta}_{\epsilon_i R_i}) - l(z^c_j, \hat{\theta}) - \big( l(z^c_j, \hat{\theta}_{\epsilon_i}) - l(z^c_j, \hat{\theta}) \big) \Big) \\
&\le \sum_{z_i \in D_-} \Big( \sum_{j=1}^{M} \frac{1}{N}\, \eta_{\theta R}(z_i, z^c_j) + \frac{1}{N} \sum_{j=1}^{M} \Phi_\theta(z_i, z^c_j) \Big) \\
&= \frac{1}{N} \sum_{z_i \in D_-} \sum_{j=1}^{M} \Big( \frac{-1}{1 - \phi_y(x_i, \hat{\theta})} + 1 \Big) \Phi_\theta(z_i, z^c_j) \\
&\le -\frac{\sigma}{N} \sum_{z_i \in D_-} \Phi_\theta(z_i) \le 0
\end{aligned}
$$
C EXPERIMENTAL SETTINGS
C.1 THE STATISTICS OF THE DATASETS
Table 5 shows the statistics of the datasets. We perform extensive experiments on public datasets from different domains to verify the effectiveness and robustness of our approach RDIA. All the datasets could be found in https://www.csie.ntu.edu.tw/cjlin/libsvmtools/datasets/.
C.2 TR-VA-TE DIVISIONS
We follow the Tr-Va-Te (Training/Validation/Test set divisions) setting in Wang et al. (2020b) to measure the generalization ability of our approach RDIA. Specifically, the influence of each training instance is estimated with the validation set using the validation loss and the model’s performance is tested by an additional out-of-sample test set which ensures we do not utilize any information of the test data.
When training logistic regression, we randomly pick 30% of the samples from the training set as the validation set. For different influence-based approaches, the training/validation/test sets are kept the same for fair comparison. Both MNIST and CIFAR10 are 10-class image classification datasets while logistic regression can only handle binary classification. On MNIST, we select the digits 1 and 7 as the positive and negative classes, respectively; on CIFAR10, we perform binary classification on cat and dog. For each image, we convert all the pixels into a flattened feature vector where each pixel is scaled by 1/255.
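For reference, a minimal sketch of this binary preprocessing is shown below, assuming images and labels are NumPy arrays; the helper name and class arguments are illustrative.

```python
import numpy as np

def to_binary_flat(images, labels, pos_class, neg_class):
    """Illustrative preprocessing for the binary logistic-regression experiments.

    Keeps only two classes, flattens each image, and scales pixels by 1/255.
    """
    mask = np.isin(labels, [pos_class, neg_class])
    X = images[mask].reshape(mask.sum(), -1).astype(np.float32) / 255.0
    y = (labels[mask] == pos_class).astype(np.float32)
    return X, y
```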
When training deep models, due to the high time complexity of estimating influence functions, we randomly exclude 100 samples (1%) from the test sets of MNIST and CIFAR10 as the respective validation sets, and the remaining data is used for testing.
C.3 IMPLEMENTATION DETAILS
We used the Newton-CG algorithm (Martens, 2010) to calculate the influence functions for the logistic regression model and applied stochastic estimation (Agarwal et al., 2017) for the two deep models, with 1,000 clean samples in the validation set. For the logistic regression model, we select the regularization term C = 0.1 for fair comparison. We adopt the Adam optimizer with the learning rate of 0.001 to train the LeNet on MNIST. After calculating the influence functions and relabeling the identified harmful training samples using R, we reduce the learning rate to 10−5 and update the models until convergence. For CIFAR10, we use the SGD optimizer with the learning rate of 0.01 and the momentum of 0.9 to train the CNN. Then we change the learning rate to 0.001 and update the models based on the relabeled training set. Here we use different optimizers to train the models, which indicates that RDIA is independent of the update strategy used for model training. The batch size is set to 64 in all the experiments and the hyperparameter α is tuned with the validation set for best performance.
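The inverse Hessian-vector products needed for the deep models can be estimated stochastically as in Agarwal et al. (2017); the following is a hedged PyTorch sketch of such a recursion, where the helper name and the damping/scale constants are our own illustrative choices rather than the exact settings used in the experiments.

```python
import torch

def inverse_hvp_stochastic(loss_fn, params, v, data_iter, damping=0.01, scale=25.0, steps=100):
    """Stochastic estimate of H^{-1} v (sketch of Agarwal et al. (2017)-style recursion).

    loss_fn: maps a mini-batch to a scalar training loss,
    params: list of model parameters,
    v: list of tensors shaped like params (e.g. the validation-loss gradient),
    data_iter: iterator yielding mini-batches.
    """
    estimate = [t.clone() for t in v]
    for _ in range(steps):
        batch = next(data_iter)
        loss = loss_fn(batch)
        grads = torch.autograd.grad(loss, params, create_graph=True)
        # Hessian-vector product via double backpropagation.
        hv = torch.autograd.grad(
            sum((g * e).sum() for g, e in zip(grads, estimate)), params
        )
        # Recursion: estimate <- v + (I - (H + damping*I) / scale) estimate
        estimate = [
            (vi + (1 - damping / scale) * ei - hvi / scale).detach()
            for vi, ei, hvi in zip(v, estimate, hv)
        ]
    return [e / scale for e in estimate]
```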
D EXTENSIVE ANALYSIS OF RDIA
D.1 COMPLEXITY ANALYSIS
According to Koh & Liang (2017), the time complexity of calculating influence function for one training sample(i.e., Eq. 2) is O(NP ), where N and P stand for the sizes of training set and model’s parameter set, respectively. Note that the time complexity of relabeling one sample is O(N). Considering the complexity of calculating influence functions, the time cost of relabeling harmful samples is negligible which means our RDIA is as fast as any influence-based approaches.
D.2 RELATIONSHIP WITH PROPENSITY SCORE
Propensity score (Rosenbaum & Rubin, 1983; Bickel et al., 2009) is a well-studied technique to solve the distribution mismatch (also called covariate shift) problem where training and testing sets are sampled from two different distributions Ptrain(x, y) and Ptest(x, y), respectively. Its basic idea is to assign the propensity score to each training sample to make the test risk unbiased. Unlike the influence function calculated by measuring the change of test loss, propensity score is calculated directly by estimating the probability of each training sample belonging to the test distribution. If we could estimate the training and test distribution accurately, we could also use the propensity score to replace the influence function for identifying whether the training sample is harmful. We leave it for the future work.
E VISUALIZATION OF IDENTIFIED HARMFUL SAMPLES
We provide examples of harmful samples identified by influence functions to illustrate the effectiveness of influence analysis. We apply the logistic regression on MNIST (class 1 and 7) and CIFAR10 (class cat and dog). The influence functions are estimated by Newton-CG algorithm (Martens, 2010). We provide the three most harmful images which have the highest influence scores and share the same label with the test sample.
Figure 3(a) shows three identified harmful training images for each test image when there are no flipped labels in the training set. We can see that the identified harmful training samples are visually different from the original pictures, which disturbs the model’s prediction on the test image. That is, the presence of clean but harmful training images would damage the model’s performance.
Figure 3(b) shows the identified harmful training images when 50% labels of training data have been flipped. It is easy to see that the harmful images have corrupted labels, which confirms the effectiveness of applying influence analysis to locate noisy samples.
F RDIA-LS: A LOSS-BASED RELABELING APPROACH
F.1 LIMITATIONS OF RDIA
In the main paper, we have discussed a novel data relabeling framework RDIA via influence analysis. Based on the advantages of influence functions, RDIA is able to handle different types of training biases and is agnostic to a specific model or data type. However, the time complexity of estimating
Algorithm 2: RDIA-LS
Input: Deep neural network θ, learning rate β, training set D, number of training epochs T, number of iterations per epoch N, sample selection ratio ρ, underweight hyperparameter γ such that 0 ≤ γ ≤ 1.
1: for t ∈ [1, . . . , T] do
2:   Shuffle training set D;
3:   for n ∈ [1, . . . , N] do
4:     Fetch the n-th mini-batch D̄ from D;
5:     Identify harmful samples using training loss: // Step I
6:       D̄+ ← arg min_{D̄′⊆D̄ : |D̄′| ≥ ρ|D̄|} L(D̄′, θ);
7:       D̄− ← D̄ \ D̄+;
8:     Relabel the identified harmful training samples: // Step II
9:       D̄′− ← R(D̄−);
10:    Obtain the loss: L_R = γ L(D̄′−, θ) + (1 − γ) L(D̄+, θ);
11:    Update the model: θ ← θ − β ∇_θ L_R; // Step III
12:  end for
13: end for
the influence of one training sample is O(NP), where N and P stand for the sizes of the training set and the model's parameter set, respectively. This is relatively high for deep models which have thousands of parameters. Moreover, according to (Koh & Liang, 2017), the approximate estimation of influence functions on deep models may not be accurate, and hence the second step of RDIA suffers from false positives and false negatives. When harmful samples account for the majority of the training set, e.g., under high noise rates, it is difficult to filter out most of the harmful samples using the estimated influence.
F.2 RDIA-LS
To address the aforementioned limitations, we aim to extend RDIA to this specific setting. Here we focus on combating noisy labels with deep models since label noise is usually a primary root cause of training bias. We notice that training loss has been used to filter out training samples with corrupted labels in many previous works (Arpit et al., 2017; Han et al., 2018; Wei et al., 2020; Yu et al., 2019). It is worth mentioning that the noisy training samples identified by training loss are not equivalent to the harmful ones identified by influence functions because the latter are evaluated to have negative influence on the test performance. Nevertheless, since the selected high-loss training samples are very likely to involve corrupted labels, applying our relabeling function over them has the potential of correcting corrupted labels and benefiting the test performance. Besides, using training loss to identify harmful samples is more efficient as it avoids the estimation of influence functions. Hence, we propose to use training loss to identify noisy samples and develop a loss-based data relabeling approach named RDIA-LS, which can be viewed as an extension of RDIA for combating corrupted labels with deep models.
RDIA-LS consists of three steps: noisy samples identification, noisy samples relabeling and model updating. It shares the same last two steps with RDIA. The only difference between RDIA-LS and RDIA is that RDIA-LS uses training loss to identify noisy samples in each training epoch, so it does not need to train the model until convergence first. Specifically, given a mini-batch of training instances D̄ ⊆ D, RDIA-LS feeds forward all the samples in D̄ and then sorts them in ascending order of their training losses. Following prior works (He & Garcia, 2009), we regard the large-loss instances as noisy and the small-loss instances as clean. We use the ratio ρ to select the possibly clean training instances in D̄, i.e., D̄+ = arg min_{D̄′⊆D̄ : |D̄′| ≥ ρ|D̄|} L(D̄′, θ). The remaining high-loss training instances are treated as noisy samples, i.e., D̄− = D̄ \ D̄+. We follow (Han et al., 2020) to determine the value of the selection ratio ρ. After we have D̄−, we use our relabeling function R to relabel the samples in D̄− and then update the model with D̄+ ∪ D̄′−. In our implementation, we simply modify the loss of the identified noisy samples based on Lemma 2 without performing actual relabeling. We use the hyperparameter γ ∈ [0, 1] to control the model's tendency to learn from the clean instances versus the relabeled noisy instances. The detailed procedure of RDIA-LS is provided in Algorithm 2.
We conduct the additional experiments in Appendix G to empirically show that RDIA-LS is effective and efficient to handle training data with corrupted labels for deep learning, which spotlights the great scalability of our approach RDIA.
G PERFORMANCE EVALUATION OF RDIA-LS
We now conduct the experiments to evaluate the effectiveness and efficiency of RDIA-LS using DNNs on MNIST, CIFAR10, CIFAR100 and Clothing1M. The first three datasets are clean and corrupted artificially. Clothing1M is a widely used real-world dataset with noisy labels (Patrini et al., 2017).
G.1 IMPLEMENTATION DETAILS
We apply the same network structures used in the main paper and use LeNet (2 convolutional layers and 1 fully connected layer) for MNIST, a CNN with 6 convolutional layers followed by 2 fully connected layers used in (Wang et al., 2019) for CIFAR10 and CIFAR100, and an 18-layer ResNet for Clothing1M. We follow the settings in (Han et al., 2018) for all the comparison methods. Specifically, for MNIST, CIFAR10 and CIFAR100, we use the Adam optimizer with the momentum of 0.9, initial learning rate of 0.001, and the batch size of 128. We run 200 epochs (T=200) in total and linearly decay the learning rate till zero from 80 to 200 epochs. As for Clothing1M, we use the Adam optimizer with the momentum 0.9 and set the batch size to be 64. We run 15 epochs in total and set the learning rate to 8 × 10^-4, 5 × 10^-4 and 5 × 10^-5 for five epochs each. We set the ratio of small-loss instances as ρ = 1 − min{(t/T_k) · τ, τ}, which changes dynamically with the current training epoch t, with T_k = 5 for Clothing1M and T_k = 10 for the other datasets. In this way, we can determine D− and D+ in each training epoch. If the noise ratio τ is not known in advance, we could use the method (Yu et al., 2018) to estimate τ. The hyperparameter γ is tuned in {0.05, 0.10, 0.15, · · · , 0.95} with the validation set for best performance. If there is no validation set, we could use training loss to select a clean subset from the training set as the validation set. Following loss-based approaches (Han et al., 2020; 2018; Jiang et al., 2018), we use the test accuracy as the metric, i.e., (#correct predictions) / (#test instances).
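As a quick illustration of this schedule, a small helper (our sketch; the argument names are ours) is:

def small_loss_ratio(t, tau, t_k=10):
    # Selection ratio rho = 1 - min(t / T_k * tau, tau) for training epoch t.
    # tau is the (estimated) noise ratio; T_k controls how quickly the ratio decays
    # (T_k = 5 for Clothing1M, 10 for the other datasets).
    return 1.0 - min(t / t_k * tau, tau)

# Example: with tau = 0.4 and T_k = 10, rho decays from 1.0 to 0.6 and then stays there.
print([round(small_loss_ratio(t, 0.4), 2) for t in (0, 5, 10, 20)])  # [1.0, 0.8, 0.6, 0.6]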
G.2 COMPARISON METHODS
We compare our proposed RDIA-LS with the following baselines. S-model (Goldberger & Ben-Reuven, 2017) and F-correction (Patrini et al., 2017) are existing data relabeling approaches which aim to estimate the noise transition matrix to correct the noisy labels. The last three approaches are the state-of-the-art loss-based resampling approaches. (1) ERM: it trains one network with all the training data using cross-entropy loss. (2) S-model (Goldberger & Ben-Reuven, 2017): it uses an additional softmax layer to model the noise transition matrix to correct the model. (3) F-correction (Patrini et al., 2017): it corrects the prediction by the noise transition matrix estimated by the other network. (4) Self-teaching (Jiang et al., 2018): it trains one network with only the selected small-loss instances D+. (5) Co-teaching (Han et al., 2018): it trains two networks simultaneously and improves self-teaching by updating the parameters of each network with the small-loss instances D+ selected by the peer network. (6) SIGUA (Han et al., 2020): it trains one network with the selected small-loss instances D+ and high-loss instances D− via gradient descent and gradient ascent, respectively.
G.3 EXPERIMENTAL RESULTS
Comparison with the Baselines.
RDIA-LS is proposed to combat noisy labels for deep learning. In order to evaluate how RDIA-LS improves the robustness of deep models, we perform experiments on MNIST+LeNet, CIFAR10+CNN and CIFAR100+CNN with different noise ratios and the real-world noisy dataset Clothing1M+ResNet18. The average results of test accuracy are reported in Table 6, Table 7, Table 8 and Table 9. We have the following observations. (1) RDIA-LS achieves the highest test accuracy in all the cases. When the noise ratio is 0.2, the improvement of RDIA-LS is relatively small. This is reasonable as the performance gain of RDIA-LS obtained from utilizing noisy data is restricted due to the low noise ratio. When the noise ratio exceeds 0.4, RDIA-LS significantly outperforms the existing loss-based approaches, achieving up to 5% relative improvement in test accuracy. It indicates that RDIA-LS can still effectively reuse harmful training instances to improve the model's robustness under high noise ratios. (2) RDIA-LS consistently performs better than S-model, F-correction, and SIGUA, which implies that using R to relabel noisy training samples identified by training loss is more effective than modeling the noise transition matrix or performing gradient ascent with identified noisy training instances. (3) RDIA-LS outperforms all the baselines on the real-world noisy dataset Clothing1M, which demonstrates the effectiveness of applying RDIA-LS in practice.
Comparison with RDIA.
Table 10 reports the running time of harmful/noisy samples identification in RDIA and RDIA-LS. We exclude the results of LS on logistic regression since training loss can only be used to filter out noisy samples for training deep models. From the table, we can see that using influence function to identify harmful samples for logistic regression is efficient. However, when training deep models with millions of parameters, using training loss to filter out noisy samples is much more efficient.
RDIA-LS is an extension of RDIA to combat noisy samples with deep models. The aforementioned experimental results show that RDIA-LS is effective and efficient in handling training data with corrupted labels for deep learning. However, it is worth noting that RDIA-LS relies on the small-loss trick, i.e., the assumption that samples with larger training loss are more likely to have corrupted labels. As a result, RDIA-LS is only suitable for deep models trained with corrupted labels and could fail in situations where the small-loss trick does not hold, whereas RDIA has no such constraint.
H ADDITIONAL DISCUSSION ON DATA RELABELING
Existing relabeling approaches (Goldberger & Ben-Reuven, 2017; Jiang et al., 2018; Lee et al., 2018) are proposed to combat noisy labels with DNNs. They focus on estimating the noise transition matrix to convert the corrupted labels to clean labels. However, current relabeling methods suffer from two limitations. First, they aim to find the true labels of the training samples, which means they can only deal with label noise. Second, they employ some additional structures to correct the labels, which depend on the specific model structure. For example, Goldberger et al. (Goldberger & Ben-Reuven, 2017) added an additional softmax layer to represent the noise transition matrix and CleanNet (Lee et al., 2018) used an auto-encoder to update the corrupted labels. In contrast, we aim to develop a relabeling function based on influence analysis to change the labels of harmful samples towards better model performance. We do not require output labels to be one-hot vectors since our objective is not to find the true labels of training samples. Besides, we extend our approach to RDIA-LS to effectively combat noisy samples for training DNNs, which outperforms the existing data relabeling approaches (Goldberger & Ben-Reuven, 2017; Jiang et al., 2018).
DUTI (Zhang et al., 2018) is an effective data relabeling approach which could debug and correct the wrong labels in the training set. It uses a bi-level optimization scheme to recommend the most influential training samples for cleaning and suggest the possibly cleaned labels. The proposed relabeling function in DUTI is different from our approach. Specifically, the relabeling function in DUTI is trained by a bi-level optimization using the gradient of the validation loss while our proposed relabeling function has nothing to do with gradient, validation loss. | 1. What is the focus of the paper in terms of classification tasks?
2. What is the proposed solution in the paper, and how does it compare to previous approaches?
3. What are the limitations of the proposed method, particularly regarding its applicability in practice?
4. How does the reviewer assess the clarity and quality of the paper's content? | Summary Of The Paper
Review | Summary Of The Paper
The paper presents a relabeling scheme for binary and multiclass classification tasks in which harmful training examples identified by influence functions are relabeled to improve performance on a hold-out set. The authors formally prove that the relabeling scheme determines a reduction in loss wrt down-weighting or discarding examples, and report experimental comparisons confirming the advantage of the proposed strategy.
Review
The manuscript is well-written and clear. The proposed solution is simple and clean with a sound theoretical justification.
The practical utility of the approach is unfortunately limited given the cost of estimating influence functions on non-linear models. This is a well-known problem, especially when applying influence function to deep architectures. The authors address the problem by proposing to replace influence functions with training loss while retaining their relabeling scheme. I wonder how this simple strategy works in comparison to their RDIA approach (the appendix does not report comparisons in terms of accuracy), and under which conditions it could fail (e.g. distribution shift rather than noise). Otherwise one has the (probably false?) impression that all you need is training loss + relabeling. |
ICLR | Title
Resolving Training Biases via Influence-based Data Relabeling
Abstract
The performance of supervised learning methods easily suffers from the training bias issue caused by train-test distribution mismatch or label noise. Influence function is a technique that estimates the impacts of a training sample on the model’s predictions. Recent studies on data resampling have employed influence functions to identify harmful training samples that will degrade model’s test performance. They have shown that discarding or downweighting the identified harmful training samples is an effective way to resolve training biases. In this work, we move one step forward and propose an influence-based relabeling framework named RDIA for reusing harmful training samples toward better model performance. To achieve this, we use influence functions to estimate how relabeling a training sample would affect model’s test performance and further develop a novel relabeling function R. We theoretically prove that applying R to relabel harmful training samples allows the model to achieve lower test loss than simply discarding them for any classification tasks using cross-entropy loss. Extensive experiments on ten real-world datasets demonstrate RDIA outperforms the state-of-the-art data resampling methods and improves model’s robustness against label noise.
1 INTRODUCTION
Training data plays a critically important role in delivering the model's final performance. It has been well recognized that the training bias issue will compromise model performance to a large extent (Arpit et al., 2017). Specifically, there are two major scenarios where training biases show up. The first and most common scenario is that training samples involve corrupted labels that can originate at virtually every step of the data lifecycle (Anderson & McGrew, 2017; Dolatshah et al., 2018; Pei et al., 2020; Yu & Qin, 2020). The second scenario is that the training and test sets are sampled from the respective distributions Ptrain(x, y) and Ptest(x, y), but Ptrain is different from Ptest (Guo et al., 2020; He & Garcia, 2009; Zou et al., 2019). Both corrupted labels and distribution mismatch will hurt the generalization ability of a trained model (Fang et al., 2020; Zhang et al., 2017; Chen et al., 2021). We generally refer to training samples with corrupted labels or those inducing distribution mismatch as harmful samples.
Data resampling is a widely used strategy to deal with harmful training samples. Existing resampling approaches (Chawla et al., 2002; Mahmoody et al., 2016; Malach & Shalev-Shwartz, 2017; Ren et al., 2018) propose to assign different weights to training samples, which aim to mitigate the negative impacts of harmful samples on model’s generalization ability. Among them, most resampling approaches (Arazo et al., 2019; Han et al., 2018; Li et al., 2020; Wang et al., 2020a) rely on training loss to identify harmful samples from the whole training set. They follow the insight that the samples with higher training losses are very likely to have corrupted labels, and hence it is often beneficial to downweight them during the process of model training. However, such loss-based resampling methods have two limitations. First, they are only able to deal with the training biases caused by training samples with corrupted labels (aka noisy samples). Second, the small-loss trick typically holds true for deep models but not for any predictive models (Zhang et al., 2017). To address the limitations, one recent work (Wang et al., 2020b) proposes a new resampling scheme based on influence functions (Cook & Weisberg, 1980). The idea is to estimate the influence of each training sample on model’s predictions over the test set. Any training samples that would cause an
increase in the test loss are considered as harmful and will be downweighted afterwards. It is worth mentioning that influence functions have been shown to deal with both forms of training biases effectively, and they are agnostic to a specific model or data type (Koh & Liang, 2017; Koh et al., 2019).
Inspired by the success of influence-based data resampling, in this paper, we would like to ask the following question: what would happen if we relabel harmful training data based on influence analysis results? Our motivations on performing data relabeling via influence analysis are twofold. (i) Relabeling noisy samples is able to prevent the model from memorizing the corrupted labels. (ii) Relabeling clean but biased samples is helpful to improve model’s robustness to harmful samples. Despite the potential benefits of data relabeling, it is still challenging to develop an influence-based relabeling approach that has a theoretical guarantee on the model’s performance improvement after training with relabeled data.
To answer the question, we first follow (Koh et al., 2019) to measure the influence of each training sample on model’s predictions and identify the harmful training samples which would cause an increase in the test loss. Next, we investigate whether relabeling the identified harmful samples rather than discarding them can improve the test performance. To achieve this, we start from binary classification tasks where relabeling a training sample is to convert its binary label from y to 1− y. We theoretically prove that relabeling harmful training data via influence analysis can achieve lower test loss than simply discarding them for binary classification. Furthermore, we design a novel relabeling function R for multi-class classification tasks and prove that the advantage of relabeling the identified harmful samples using R in reducing model’s test loss still holds. Following the influence-based resampling approaches (Wang et al., 2018; Ting & Brochu, 2018; Ren et al., 2020; Wang et al., 2020b), we only use the test loss for theoretical analysis and empirically calculate influence function with a small but unbiased validation set by assuming the validation set is sampled from the same distribution as the test set. In this way, using the validation loss to calculate the influence function is an unbiased estimation of the true influence function. Otherwise, the problem may lie in the category of transfer learning which is beyond the scope of this work.
To summarize, this work makes the following contributions. First, we propose to combine influence functions with data relabeling for reducing training biases and we develop an end-to-end influence-based relabeling framework named RDIA that reuses harmful training samples toward better model performance. Second, we design a novel relabeling function R and theoretically prove that applying R over harmful training samples identified by influence functions is able to achieve lower test loss for any classification task using the cross-entropy loss function. Third, we conduct extensive experiments on real datasets in different domains. The results demonstrate that (i) RDIA is effective in reusing harmful training samples towards higher model performance, surpassing the existing influence-based resampling approaches, and (ii) RDIA improves the model's robustness to label noise, outperforming the current resampling methods by large margins.
2 BACKGROUND
Let D = {(x_i, y_i) ∈ X × Y | 1 ≤ i ≤ N} be the training set, which is sampled from Ptrain(x, y). Let z_i = (x_i, y_i), where x_i ∈ R^d and y_i ∈ R^K. Let ϕ(x, θ) be the model's prediction for x, where θ ∈ R^p is the parameter set of the model. We denote the loss of sample z_i by l(z_i, θ) = L(y_i, ϕ(x_i, θ)) and use l_i(θ) to represent l(z_i, θ). We consider standard empirical risk minimization (ERM) as the optimization objective. Formally, the empirical risk over D is defined as L(D; θ) = (1/N) ∑_{i=1}^N l_i(θ). Since our relabeling function depends on the loss function, we focus on the most effective and versatile loss for classification tasks, i.e., the cross-entropy loss.
Influence functions. Influence functions, stemming from robust statistics (Huber, 2004), provide an efficient way to estimate how a small perturbation of a training sample would change the model's predictions (Koh & Liang, 2017; Koh et al., 2019; Yu et al., 2020). Let θ̂ = arg min_θ (1/N) ∑_{n=1}^N l_n(θ) be the optimal model parameters on convergence. When upweighting a training sample z_i on its loss term by an infinitesimal step ε_i, we obtain the new optimal parameters θ̂_{ε_i} on convergence as θ̂_{ε_i} = arg min_θ (1/N) ∑_{n=1}^N l_n(θ) + ε_i l_i(θ). Based on influence functions (Cook & Weisberg, 1980; Koh & Liang, 2017), we have the following closed-form expression to estimate the change in model parameters when upweighting z_i by ε_i:

ψ_θ(z_i) ≜ dθ̂_{ε_i}/dε_i |_{ε_i=0} = −H_{θ̂}^{-1} ∇_θ l_i(θ̂),    (1)
where H_{θ̂} ≜ (1/N) ∑_{n=1}^N ∇_θ^2 l_n(θ̂) is the Hessian matrix and ∇_θ^2 l_n(θ) is the second derivative of the loss at training point z_n with respect to θ. Using the chain rule, we can estimate the change of the model's prediction at a test point z_j^c sampled from the given test distribution Ptest (Koh & Liang, 2017):

Φ_θ(z_i, z_j^c) ≜ dl_j(θ̂_{ε_i})/dε_i |_{ε_i=0} = −∇_θ l_j(θ̂) H_{θ̂}^{-1} ∇_θ l_i(θ̂).    (2)
At a fine-grained level, we can measure the influence of perturbing a training sample z_i from (x_i, y_i) to (x_i, y_i + δ). Let z_{iδ} = (x_i, y_i + δ) and the new loss be l_i(z_{iδ}, θ) = L(y_i + δ, ϕ(x_i, θ)). According to (Koh & Liang, 2017), the optimal parameters θ̂_{ε_i,δ_i} after performing this perturbation on z_i are θ̂_{ε_i,δ_i} = arg min_θ (1/N) ∑_{n=1}^N l_n(θ) + ε_i l_i(z_{iδ}, θ) − ε_i l_i(θ). This allows us to estimate the change in model parameters after the fine-grained data perturbation using influence functions as:

dθ̂_{ε_i,δ_i}/dε_i |_{ε_i=0} = ψ_θ(z_{iδ}) − ψ_θ(z_i) = −H_{θ̂}^{-1} (∇_θ l_i(z_{iδ}, θ̂) − ∇_θ l_i(θ̂)).    (3)
Further, the influence of perturbing z_i to z_{iδ} on the model's prediction at a test sample z_j^c is the following:

η_{θδ}(z_i, z_j^c) ≜ dl_j(θ̂_{ε_i,δ_i})/dε_i |_{ε_i=0} = −∇_θ l_j(θ̂) H_{θ̂}^{-1} (∇_θ l_i(z_{iδ}, θ̂) − ∇_θ l_i(θ̂)).    (4)

It is important to notice that Eq. (4) holds for arbitrary δ as ε_i approaches 0. This provides the feasibility of measuring how relabeling a training sample could influence the model's predictions.
Influence-based resampling approaches. Previous researches (Koh & Liang, 2017; Wang et al., 2020b) have shown that influence functions have strong ability to identify harmful samples from the whole training set, which is agnostic to the specific model or data structure. Inspired by this, many influence-based resampling approaches (Ting & Brochu, 2018; Wang et al., 2018; 2020b) proposed to discard or downweight the identified harmful samples to reduce the test loss. However, different from previous works which focus on estimating the influence of each training sample on the test performance using Eq. (1)-(2), we perform the fine-grained perturbation on a training sample’s label and evaluate its influence using Eq. (3)-(4). Further, our work tries to build an end-to-end influence-based relabeling framework to reuse the harmful samples with a theoretical guarantee on the final model performance for any classification tasks. To be specific, we demonstrate that harmful training instances after being relabeled properly do make contributions to improve the final model performance, which provides a novel viewpoint on handling biased training data.
3 METHODOLOGY
Assume we have Q = {(xcj , ycj) ∈ X × Y | 1 ≤ j ≤ M} sampled from the test distribution Ptest and our objective is to minimize the test risk L(Q; θ) = 1M ∑M j=1 l c j(θ). Due to the harmful training samples in D, the optimal θ̂ which minimizes the empirical risk over training set D may not be the best risk minimizer over Q. To solve this issue, we propose a novel data relabeling framework named RDIA which aims to identify and reuse harmful training instances towards better model performance. We design a relabeling function R that allows the model to achieve lower test risk after being trained with the relabeled harmful instances for any classification tasks. In what follows, we first give an overview of the RDIA framework. Then we describe the details of the major steps in RDIA and provide theoretical analysis on how the relabeled harmful samples are useful to further reduce the test risk. The algorithm of RDIA could be found in Appendix A
3.1 OVERVIEW OF RDIA
Figure 1 provides an overview of RDIA, which consists of four major steps: Model training, Harmful samples identification, Relabeling harmful samples via influence analysis and Model retraining.
Step I: Model training is to train a model based on the training set D until convergence and get the model parameters θ̂. Step II: Harmful samples identification is to compute the influence of perturbing the loss term of each training sample zi ∈ D on test risk using Eq. (2) and then use it to identify the harmful training samples from D. We denote the set of identified harmful training samples as D− and the set of remaining training instances as D+. The details of this step are provided in Section 3.2. Step III: Relabeling harmful samples via influence analysis is to apply the relabeling function to modify the label of each identified harmful training sample in D− and obtain the set of relabeled harmful training samples denoted as D′−. We introduce our relabeling function R and theoretically prove that updating the model’s parameters with new training set D̂ = D′− ∪D+ can achieve lower test risk over Q than simply discarding or downweighting D− in Section 3.3. Step IV: Model retraining is to retrain the model using D̂ till convergence to get the final optimal parameters θ̂ R.
3.2 HARMFUL SAMPLES IDENTIFICATION
In the second step, we compute D− ⊆ D which contains harmful training samples from the original training set D. Intuitively, a training sample is harmful to the model performance if removing it from the training set would reduce the test risk over Q. Based on influence functions, we can measure one sample's influence on the test risk without prohibitive leave-one-out training. According to Eq. (1)-(2), if we add a small perturbation ε_i on the loss term of z_i to change its weight, the change of test loss at a test sample z_j^c ∈ Q can be estimated as follows:

l(z_j^c, θ̂_{ε_i}) − l(z_j^c, θ̂) ≈ ε_i × Φ_θ(z_i, z_j^c),    (5)

where Φ_θ(·, ·) is computed by Eq. (2). We then estimate the influence of perturbing z_i on the whole test risk as follows:

l(Q, θ̂_{ε_i}) − l(Q, θ̂) ≈ ε_i × ∑_{j=1}^M Φ_θ(z_i, z_j^c).    (6)
Henceforth, we denote by Φ_θ(z_i) = ∑_{j=1}^M Φ_θ(z_i, z_j^c) the influence of perturbing the loss term of z_i on the test risk over Q. It is worth mentioning that given ε_i ∈ [−1/N, 0), Eq. (6) computes the influence of downweighting or discarding the training sample z_i. We denote D− = {z_i ∈ D | Φ_θ(z_i) > 0} as the set of harmful training samples. Similar to (Wang et al., 2020b), we assume that each training sample influences the test risk independently. We derive Lemma 1 below.

Lemma 1. Discarding or downweighting the training samples in D− = {z_i ∈ D | Φ_θ(z_i) > 0} from D could lead to a model with lower test risk over Q:

L(Q, θ̂_ε) − L(Q, θ̂) ≈ −(1/N) ∑_{z_i∈D−} Φ_θ(z_i) ≤ 0,    (7)

where θ̂_ε denotes the optimal model parameters obtained by updating the model's parameters after discarding or downweighting the samples in D−.
Lemma 1 explains why influence-based resampling approaches have strong ability to resolve training biases and the proof of Lemma 1 is provided in Appendix B. In practice, to further tolerate the estimation error in Φθ(zi) which may result in the wrong identification of harmful training samples, we select D− = {zi ∈ D | Φθ(zi) > α} where the hyperparameter α controls the proportion of harmful samples to be relabeled eventually. We conduct the experiment to show the effects of α and the validation set in Section 5.3.
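Given per-sample influence scores, the split induced by the threshold α is a few lines of code. The following sketch is ours and only illustrates how α controls how many samples are sent to the relabeling step; the exact handling of the remaining samples follows Algorithm 1 in Appendix A.

import numpy as np

def split_by_influence(influence_scores, alpha=0.0):
    # Return indices of harmful samples (Phi > alpha, to be relabeled) and the rest.
    influence_scores = np.asarray(influence_scores)
    harmful = np.where(influence_scores > alpha)[0]
    rest = np.where(influence_scores <= alpha)[0]
    return harmful, rest

# A larger alpha relabels fewer, more confidently harmful samples.
phi = np.array([-0.3, 0.002, 0.05, -0.01, 0.4])
print(split_by_influence(phi, alpha=0.01))  # (array([2, 4]), array([0, 1, 3]))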
3.3 RELABELING HARMFUL SAMPLES VIA INFLUENCE ANALYSIS
In the third step, we propose a relabeling function R and ensure the test risk would be reduced after training with the relabeled harmful samples D′−. To achieve this, we start from a special case (i.e., binary classification) and then extend to arbitrary classification tasks.
Relabeling harmful instances on binary classification. We start with binary classification where the relabeling function R is straightforward. Since the label set Y is {0, 1}, we have R(z) = 1 − y for any z = (x, y) ∈ D. Recall that ϕ(x_i, θ) denotes the model output for x_i, and the training loss of z_i = (x_i, y_i) ∈ D is l_i(θ) = −y_i log(ϕ(x_i, θ)) − (1 − y_i) log(1 − ϕ(x_i, θ)). To compute the influence of relabeling a training sample z_i in D, we first consider the case that y_i = 1 and R(z_i) = 0. The loss l_i(θ) at z_i is changed from − log(ϕ(x_i, θ)) to − log(1 − ϕ(x_i, θ)). Letting z_{iR} = (x_i, 1 − y_i) and w(z_i, θ) = ∇_θ l_i(z_{iR}, θ) − ∇_θ l_i(θ), we have:

w(z_i, θ) = −∇_θ log(1 − ϕ(x_i, θ)) + ∇_θ log(ϕ(x_i, θ)) = −∇_θ l_i(θ) / (1 − ϕ(x_i, θ)).    (8)

According to Eq. (2), (4) and (8), the influence of relabeling z_i on the model's prediction at test sample z_j^c is:

η_{θR}(z_i, z_j^c) = −∇_θ l_j(θ̂) H_{θ̂}^{-1} w(z_i, θ̂) = −∇_θ l_j(θ̂) H_{θ̂}^{-1} ( −∇_θ l_i(θ̂) / (1 − ϕ(x_i, θ̂)) ) = −Φ_θ(z_i, z_j^c) / (1 − ϕ(x_i, θ̂)).    (9)

Similarly, when the label y_i in z_i is 0 and R(z_i) = 1, we can derive the influence of relabeling z_i at z_j^c as η_{θR}(z_i, z_j^c) = −Φ_θ(z_i, z_j^c) / ϕ(x_i, θ̂). Let θ̂_{ε_i R_i} denote the optimal parameters after relabeling z_i. Similar to Eq. (6), we can extend the influence of relabeling z_i at z_j^c to the whole test risk over Q as:

l(Q, θ̂_{ε_i R_i}) − l(Q, θ̂) ≈ ε_i × ∑_{j=1}^M η_{θR}(z_i, z_j^c).    (10)
According to Eq. (9), the influence of relabeling training samples ηθR(zi, zcj ) is related to the influence of perturbing the loss term of training samples, i.e., Φθ(zi, zcj ). In this way, the change of test risk by relabeling zi (Eq. (10)) and that by perturbing zi (Eq. (6)) are interrelated. Then we derive the Theorem 1 and the proof can be found in Appendix B.
Theorem 1. In binary classification, let σ be the infimum of ϕ(x_i, θ̂)/(1 − ϕ(x_i, θ̂)) and (1 − ϕ(x_i, θ̂))/ϕ(x_i, θ̂), and let D− = {z_i ∈ D | Φ_θ(z_i) > 0}. Relabeling the samples in D− can achieve lower test risk than discarding or downweighting them from D, because the following inequality holds.

L(Q, θ̂_R) − L(Q, θ̂_ε) ≈ −(σ/N) ∑_{z_i∈D−} Φ_θ(z_i) ≤ 0.    (11)
Theorem 1 shows that relabeling the samples in D− could achieve lower test risk than simply discarding or downweighting D− from the training set for binary classification tasks. We then provide some intuitions on the benefits of relabeling harmful training samples. In the context of binary classification, if a training sample z in D− is noisy, our relabeling method corrects the label noise and improve training data quality; otherwise, z is very likely to be biased due to its negative impact on the test risk, and relabeling it might improve model’s robustness.
Relabeling harmful instances on any classification task. We now introduce a relabeling function R that can be used for any classification task. For a classification problem with K class labels (K ≥ 2), we represent each label y as a K-dimensional one-hot vector. The CE loss at z_i is l_i(θ) = −∑_{k=1}^K y_{i,k} log(ϕ_k(x_i, θ)). Intuitively, the proposed relabeling function R should satisfy the following principles:

• Consistency: R should produce a K-dimensional label vector: R(x_i, y_i) = y′_i ∈ [0, 1]^K.
• Effectiveness: applying R over the harmful training samples D− should guarantee that the resultant test risk is no larger than the one achieved by simply discarding them.
For the consistency principle, we require the new label y′_i to be K-dimensional, where y′_{i,k} describes the likelihood that x_i takes the k-th class label, k ∈ [1, K]. Here we do not require ∑_{k=1}^K y′_{i,k} to be one, because we focus on leveraging the identified harmful training samples towards better model performance instead of finding their true labels.
Consider a training sample z_i = (x_i, y_i) belonging to the m-th class (m ∈ [1, K]), i.e., y_{i,m} = 1. Letting R(x_i, y_i) = y′_i, we propose the following relabeling function R that fulfills the above two principles:

y′_{i,k} = 0 if k = m, and y′_{i,k} = log_{ϕ_k}((1 − ϕ_m)^{1/(K−1)}) otherwise,    (12)

where ϕ(x_i, θ̂) = (ϕ_1, · · · , ϕ_K) is the probability distribution over the K classes produced by the model with parameters θ̂, i.e., ϕ_i ∈ [0, 1] and ∑_{i=1}^K ϕ_i = 1. It is easy to check that our proposed relabeling function R in Eq. (12) satisfies the first principle. Interestingly, we can verify that for K = 2, we have R(z_i) = 1 − y_i. We further prove the effectiveness of R using Lemma 2.

Lemma 2. When applying the relabeling function R in Eq. (12) over a training sample z_i ∈ D with a class label m, the CE loss l_i(θ) at z_i is changed from − log(ϕ_m(x_i, θ)) to − log(1 − ϕ_m(x_i, θ)).
It is interesting to verify that the change in the loss l_i(θ) acts as an extension of the binary case. Similar to Theorem 1, we can derive the following theorem using Eq. (9)-(10).
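A direct implementation of Eq. (12) is short. The sketch below is our own illustration (not the released code) and also checks the K = 2 reduction R(z_i) = 1 − y_i mentioned above.

import numpy as np

def relabel(probs, m, eps=1e-12):
    # Relabeling function R of Eq. (12) for a sample of class m, where probs is the
    # model's softmax output (phi_1, ..., phi_K). The new soft label has y'_k = 0 for
    # k = m and y'_k = log_{phi_k}((1 - phi_m)^(1/(K-1))) otherwise.
    probs = np.clip(np.asarray(probs, dtype=float), eps, 1 - eps)
    K = len(probs)
    y_new = np.log((1 - probs[m]) ** (1.0 / (K - 1))) / np.log(probs)
    y_new[m] = 0.0
    return y_new

# K = 2 sanity check: for a sample of class m = 1, R puts all mass on class 0,
# i.e. it behaves like flipping the binary label.
print(relabel([0.3, 0.7], m=1))  # approximately [1.0, 0.0]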
Theorem 2. In multi-class classification, let ϕ_y(x_i, θ̂) denote the probability that z_i is classified as its true class label by the model with the optimal parameters θ̂ on D, and let σ be the infimum of ϕ_y(x_i, θ̂)/(1 − ϕ_y(x_i, θ̂)). Relabeling the samples in D− = {z_i ∈ D | Φ_θ(z_i) > 0} with R leads to a test risk lower than the one achieved by discarding or downweighting D−. Formally, we have:

L(Q, θ̂_R) − L(Q, θ̂_ε) ≈ −(σ/N) ∑_{z_i∈D−} Φ_θ(z_i) ≤ 0.    (13)
Theorem 2 shows that using our proposed relabeling R can further reduce the test risk than simply discarding or downweighting D− from the training set for any classification tasks. The detailed proofs of Lemma 2 and Theorem 2 are provided in Appendix B.
4 DISCUSSIONS
In this section, we provide a numerical analysis of the superior performance of RDIA against other influence-based resampling approaches (Wang et al., 2018; 2020b). Then we discuss the extension of RDIA that exploits training loss information.
Numerical analysis. Consider a training point z_i ∈ D− belonging to class m, where D− is specified in Section 3.2. According to Eq. (13), if we use R to assign z_i a new label y′_i = R(z_i) instead of discarding or downweighting z_i, the difference in the test risk over Q can be computed as:

g(z_i) = l(Q, θ̂_{ε_i R_i}) − l(Q, θ̂_{ε_i}) ≈ −(1/N) × ϕ_m(x_i, θ̂)/(1 − ϕ_m(x_i, θ̂)) × Φ_θ(z_i).    (14)
zi ∈ D− means the Φθ(zi) in Eq. (14) is positive, and hence we have g(zi) < 0. If the model’s prediction for zi with the optimal parameters θ̂ is correct, ϕm(xi, θ̂) is the largest component in the vector ϕ(xi, θ̂). We consider zi as a more harmful sample because it has negative influence on the test loss yet the model has learnt some features from zi that connects xi to class m. In practice, zi is very likely to be a noisy or biased training sample. Interestingly, from Eq. (14), we can see that a small increase in ϕm(xi, θ̂) will lead to a rapid increase in g(zi). This indicates relabeling such more harmful training samples leads to significant performance gain. Extension of RDIA. In practice, due to the complexity of calculating the influence functions, identifying harmful samples via influence analysis could incur high computational cost, especially when
training complex models like deep neural networks. To address the problems, we further extend RDIA by using training loss to identify harmful samples for deep models, and we dub this extension as RDIA-LS. We empirically show that RDIA-LS is effective and efficient to handle training data with corrupted labels for deep learning, which spotlights the great scalability of our approach. The details of RDIA-LS are provided in Appendix F.
5 EXPERIMENTS
In this section, we conduct experiments to evaluate the effectiveness and robustness of our RDIA. We also perform the ablation study to show how hyperparameter α and the size of validation set affect the performance of RDIA. The visualization of identified harmful samples and comparison with other loss-based approaches are provided in Appendix D and Appendix G.
5.1 EXPERIMENTAL SETTINGS
Datasets. To verify the effectiveness of RDIA, we perform extensive experiments on ten public datasets from different domains, including NLP, CV, CTR, etc. Since all the datasets are clean, we build a noise transition matrix P = {P_ij}_{K×K} to verify the robustness of our proposed approaches for combating noisy labels, where K denotes the class number and P_ij denotes the probability of a clean label i being flipped to a noisy label j. In our experiment, we use the noise ratio τ to determine the fraction of labels that are manually corrupted, and each clean label has the same probability of being flipped to any other class, i.e., P_ij = τ/(K − 1) for i ≠ j. More details about the statistics of the datasets and Tr-Va-Te divisions are provided in Appendix C.
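For reproducibility, the symmetric noise injection described above can be implemented as follows (our sketch; the function and variable names are not from the paper).

import numpy as np

def flip_labels_symmetric(labels, tau, num_classes, seed=0):
    # Corrupt a fraction tau of the labels; each corrupted label moves uniformly to one
    # of the other K-1 classes, i.e. P_ij = tau / (K - 1) for i != j.
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels).copy()
    flip_mask = rng.random(len(labels)) < tau
    for idx in np.where(flip_mask)[0]:
        choices = [c for c in range(num_classes) if c != labels[idx]]
        labels[idx] = rng.choice(choices)
    return labels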
Comparison methods. We compared our proposed relabeling method RDIA with the following baselines, all of which are agnostic of specific model or data structure. (1) ERM: it means training a model with all the training data with the cross-entropy loss. (2) Random: it is a basic relabeling method that randomly selects and changes the label of training samples. (3) OptLR (Ting & Brochu, 2018): it is a weighted sampling method which assigns each training sample a weight proportional to its impact on the change in the model's parameters ψθ. Specifically, the weight of zi is max{α, min{1, λψθ(zi)}}. We set α and λ to be 1/max{ψθ(zi)} and 1/max{Φθ(zi)}, respectively. (4) Dropout (Wang et al., 2018): it is an unweighted subsampling method which simply discards D− from the training set, i.e., removing all training data with negative influence on the test loss. (5) UIDS (Wang et al., 2020b): it is an unweighted subsampling method which uses the Linear sampling method or the Sigmoid sampling method to resample the training data based on the influence functions Φθ(zi). It is the best-performing method among all the existing influence-based methods.
We implemented all the comparison methods by using their published source codes in Pytorch and ran all the experiments on a server with 2 Intel Xeon 1.7GHz CPUs, 128 GB of RAM and a single NVIDIA 2080 Ti GPU. All the baselines are tuned with clean validation data for best model performance. To measure the performance of all the approaches, we followed (Wang et al., 2020b) and used the test loss as the metric since we aim to optimize the test loss via influence analysis.
Implementation details. For each of the ten datasets, we adopted logistic regression (convex optimization) as the binary classifier (for MNIST and CIFAR10, we randomly choose two classes to perform binary classification). As for multi-class classification, we implemented two deep models (non-convex optimization), LeNet (2 convolutional layers and 1 fully connected layer) and a CNN with 6 convolutional layers followed by 2 fully connected layers used in (Wang et al., 2019), on MNIST and CIFAR10. The hyperparameter α is also tuned in {0, 0.001, 0.002, . . . , 0.01} with the clean validation set for best performance. More detailed settings are provided in Appendix C.
5.2 EXPERIMENTAL RESULTS
Effectiveness of RDIA. To verify the effectiveness of RDIA, we conduct experiments on 10 clean datasets with three different models. The experiments are repeated 5 times, and the averaged test loss with standard deviation is reported in Table 1 and Table 2. We have the following important observations.
First, our proposed RDIA yields the lowest test loss over 9 out of 10 datasets using logistic regression. It outperforms ERM on all the datasets, which indicates the effectiveness of relabeling
training samples via influence functions to resolve training biases. Furthermore, RDIA outperforms the state-of-the-art resampling method UIDS on all the datasets except Avazu, which indicates the effectiveness of reusing harmful training samples via relabeling towards higher model performance.
Second, when training deep models, RDIA achieves the best test loss on MNIST+LeNet, MNIST+CNN, and CIFAR10+CNN, where it outperforms UIDS by a large margin. We observe LeNet performs much worse than CNN on CIFAR10 using the original training set (i.e., the results of ERM) due to its simple architecture. Note that the poor classification results for clean and unbiased training data would interfere with the identification of truly harmful training samples. Hence, RDIA performs similarly to Random, which introduces random noise into the training set, and the performance suffers. But we want to emphasize that when training a more suitable model (e.g., CNN) on CIFAR10, RDIA is more effective in improving the model's performance.
Third, Random performs worse than ERM on all the cases except on Adult. This indicates that randomly relabeling harmful training samples may easily inject noisy data that hurt model’s performance significantly. In contrast, our proposed relabeling function is effective to assign appropriate labels to harmful training samples that benefit the test loss.
Robustness to label noise. In order to investigate the robustness of RDIA to noisy labels, we set the noise ratio τ from 0 to 0.8 to manually corrupt the labels in the four datasets from different domains, while the results on the other datasets have similar trends. Figure 2 reports the average test loss of all the influence-based approaches on four noisy datasets with different noise ratios. First, thanks to the high accuracy of estimating the influence functions on logistic regression, all influence-based approaches consistently outperform ERM, which indicates the effectiveness of using influence functions to identify noisy samples. Figure 2(a), 2(b) and 2(c) show that RDIA performs significantly better than the other influence-based approaches. As the noise ratio becomes larger, the test loss of all the other approaches increases while the test loss reported by RDIA is generally unchanged. This verifies the robustness of RDIA to high noise ratios. We surprisingly find that RDIA at 0.8 noise ratio achieves lower test loss than ERM at zero noise ratio. The reason might be that RDIA could leverage all the training samples and fix noisy labels properly to boost the per-
formance. Second, Figure 2(d) shows that when combating noisy labels for deep models, RDIA still suffers from the noisy labels like other baselines because the estimation of influence functions with deep models is not accurate enough to filter out all noisy labels. However, RDIA could still relabel the most negative samples to reduce the test loss.
5.3 ABLATION STUDY
Finally, we investigate the effect of different values of the hyperparameter α and the size of the validation set on the performance of RDIA using MNIST with logistic regression.
Hyperparameter α. As discussed in Section 3.3, by varying α, we can control the percentage of relabeled training data against the complete training set in RDIA. Table 3 provides the results of how many samples are relabeled and how the test loss is changed with different values of α under different noise ratios. First, when the noise ratio equals 0, there are few biased samples in the training set. In this case, simply relabeling all the identified harmful samples will hurt the performance, while using a relatively larger α yields lower test loss. Second, when the noise ratio is 0.8, RDIA achieves better performance with smaller α. This is reasonable since most training samples involve label noise and decreasing α facilitates the relabeling of noisy samples.
Size of the validation set. As discussed in Section 3.3, we use the validation set instead of the test set to estimate the influence of each training sample. Table 4 shows the results of how the number of validation samples affects the model performance. We conduct the experiments under 40% noise rates and find the optimal hyperparameter α ∈ [0.0002, 0.01] to get the best results of RDIA. We have the following observations. 1) Using only 100 validation samples with RDIA achieves 35% lower test loss than ERM. 2) As the number of validation samples increases, RDIA significantly outperforms ERM, achieving up to 90% relative reduction in test loss. The reason is that, as the number of validation samples increases, the validation set can gradually reflect the true distribution of the test data. In this way, the estimated influence functions are accurate enough to filter out most harmful training samples for the test set. 3) RDIA consistently outperforms UIDS with different sizes of the validation set, which empirically shows the effectiveness of our relabeling function R.
6 CONCLUSION
In this paper, we propose to perform data relabeling based on influence functions to resolve the training bias issue. We develop a novel relabeling framework named RDIA, which reuses the information of harmful training samples identified by influence analysis towards higher model performance. We theoretically prove that RDIA can further reduce the test loss than simply discarding harmful training samples on any classification tasks using the cross-entropy loss function. Extensive experiments on real datasets verify the effectiveness of RDIA in enhancing model’s robustness and final performance, compared with various resampling and relabeling techniques.
Reproducibility: We clarify the assumptions in Section 2 and provide the complete proofs of Lemmas, Theorems in Appendix B. The statistics of datasets, the data processing, and the details of the experimental settings are described in Appendix C. Our code could be found in the https://github.com/Viperccc/RDIA.
ACKNOWLEDGMENT
This work is supported by Shanghai Municipal Science and Technology Major Project (2021SHZDZX0102), the Tencent Wechat Rhino-Bird Focused Research Program, and SJTU Global Strategic Partnership Fund (2021 SJTU-HKUST). Yanyan Shen is the corresponding author of this paper.
Appendix
In this appendix, we first provide the algorithm of RDIA (Appendix A) and the complete proofs of the Lemmas and Theorems (Appendix B) in the main text. Then we give the details of the experimental settings (Appendix C), the extensive analysis of our approach (Appendix D), and the visualization of identified harmful samples (Appendix E). We then describe RDIA-LS, an extension of RDIA, to spotlight the scalability of our approach RDIA (Appendix F) and provide empirical results to show that RDIA-LS is effective and efficient to handle training data with corrupted labels for deep learning (Appendix G). Finally, we provide additional discussions about the existing data relabeling approaches (Appendix H)
A RDIA ALGORITHM
Algorithm 1: RDIA
Input: Training model θ, biased training set D = {(x_i, y_i)}_{i=1}^N, learning rate β, sample selection threshold α such that 0 ≤ α ≤ 1, small and unbiased set Q = {(x_j^c, y_j^c)}_{j=1}^M.
Train the model θ with D until convergence to obtain θ̂;
Initialize D− ← ∅, D+ ← ∅;
for i ∈ [1, . . . , N] do
    Calculate the influence Φθ(z_i) of the training sample z_i = (x_i, y_i) on Q using Eq. (6);
    if Φθ(z_i) > α then
        Relabel the identified harmful training sample: z′_i ← R(z_i);
        D− ← D− ∪ {z′_i};
    else if Φθ(z_i) < 0 then
        D+ ← D+ ∪ {z_i};
    end
end
Obtain the new training set D̂ ← D− ∪ D+;
Retrain the model with D̂ until convergence to obtain the final model parameters θ̂_R.
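Read as code, Algorithm 1 is a short pipeline. The following sketch assumes helper functions train_to_convergence, influence_on_Q (Eq. (6)) and relabel (Eq. (12)) are available; it illustrates the control flow only and is not the released implementation.

def rdia(model, D, Q, alpha, train_to_convergence, influence_on_Q, relabel):
    # RDIA: train, identify harmful samples on Q via influence, relabel, retrain.
    theta_hat = train_to_convergence(model, D)                   # Step I

    D_minus, D_plus = [], []
    for (x_i, y_i) in D:                                          # Step II
        phi = influence_on_Q(theta_hat, (x_i, y_i), Q)            # Eq. (6)
        if phi > alpha:
            D_minus.append((x_i, relabel(x_i, y_i, theta_hat)))   # Step III, Eq. (12)
        elif phi < 0:
            D_plus.append((x_i, y_i))

    D_hat = D_minus + D_plus
    return train_to_convergence(model, D_hat)                     # Step IV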
B PROOFS FOR LEMMAS AND THEOREMS
B.1 PROOF OF LEMMA 1
Assume the perturbation ε_i on z_i is infinitesimal and that the influence of each training sample on the test risk is independent.

Lemma 1. Discarding or downweighting the training samples in D− = {z_i ∈ D | Φ_θ(z_i) > 0} from D could lead to a model with lower test risk over Q:

L(Q, θ̂_ε) − L(Q, θ̂) ≈ −(1/N) ∑_{z_i∈D−} Φ_θ(z_i) ≤ 0,

where θ̂_ε denotes the optimal model parameters obtained by updating the model's parameters after discarding or downweighting the samples in D−.

Proof. Recall that θ̂_{ε_i} = arg min_θ (1/N) ∑_{n=1}^N l_n(θ) + ε_i l_i(θ). In this way, downweighting the training sample z_i in D− means setting ε_i ∈ [−1/N, 0) (note that ε_i = −1/N means discarding the training sample z_i). For convenience of analysis, we set all ε_i equal to −1/N and have Φ_θ(z_i) ≜ ∑_{j=1}^M Φ_θ(z_i, z_j^c). According to Eq. (6), we can estimate how the test risk is changed by discarding or downweighting z_i ∈ D− as follows:

L(Q, θ̂_ε) − L(Q, θ̂) = ∑_{z_i∈D−} ∑_{j=1}^M [l(z_j^c, θ̂_{ε_i}) − l(z_j^c, θ̂)]
≈ ∑_{z_i∈D−} ε_i ∑_{j=1}^M Φ_θ(z_i, z_j^c)
= −(1/N) ∑_{z_i∈D−} Φ_θ(z_i) ≤ 0
B.2 PROOF OF THEOREM 1
Theorem 1. In binary classification, let σ be the infimum of ϕ(x_i, θ̂)/(1 − ϕ(x_i, θ̂)) and (1 − ϕ(x_i, θ̂))/ϕ(x_i, θ̂), and let D− = {z_i ∈ D | Φ_θ(z_i) > 0}. Relabeling the samples in D− can achieve lower test risk than discarding or downweighting them from D, because the following inequality holds.

L(Q, θ̂_R) − L(Q, θ̂_ε) ≈ −(σ/N) ∑_{z_i∈D−} Φ_θ(z_i) ≤ 0.

Proof. Based on Eq. (9), we have:

η_{θR}(z_i, z_j^c) / Φ_θ(z_i, z_j^c) + 1 = (1 − ϕ(x_i, θ̂)) / (−ϕ(x_i, θ̂)),   if y_i = 0,
η_{θR}(z_i, z_j^c) / Φ_θ(z_i, z_j^c) + 1 = −ϕ(x_i, θ̂) / (1 − ϕ(x_i, θ̂)),   if y_i = 1.

It is worth mentioning that θ̂_{ε_i R_i} = arg min_θ (1/N) ∑_{n=1}^N l_n(θ) + ε_i l_i(z_{iR}, θ) − ε_i l_i(θ). In this way, relabeling the training sample z_i in D− means setting ε_i = 1/N.

Similar to the proof of Lemma 1, according to Eq. (6) and Eq. (10), we have:

L(Q, θ̂_R) − L(Q, θ̂_ε) = L(Q, θ̂_R) − L(Q, θ̂) + L(Q, θ̂) − L(Q, θ̂_ε)
= ∑_{z_i∈D−} ∑_{j=1}^M [l(z_j^c, θ̂_{ε_i R_i}) − l(z_j^c, θ̂) − (l(z_j^c, θ̂_{ε_i}) − l(z_j^c, θ̂))]
≈ ∑_{z_i∈D−} [∑_{j=1}^M (1/N) η_{θR}(z_i, z_j^c) + (1/N) ∑_{j=1}^M Φ_θ(z_i, z_j^c)]
= (1/N) ∑_{z_i∈D−} ∑_{j=1}^M (η_{θR}(z_i, z_j^c) / Φ_θ(z_i, z_j^c) + 1) Φ_θ(z_i, z_j^c)
≤ −(σ/N) ∑_{z_i∈D−} Φ_θ(z_i) ≤ 0
B.3 PROOF OF LEMMA 2
Lemma 2. When applying the relabeling function R in Eq. (12) over a training sample z_i ∈ D with a class label m, the CE loss l_i(θ) at z_i is changed from − log(ϕ_m(x_i, θ)) to − log(1 − ϕ_m(x_i, θ)).

Proof. Recall that the model's prediction at x_i is ϕ(x_i, θ) = (ϕ_1, ϕ_2, ..., ϕ_K) and our relabeling function is:

y′_{i,k} = 0 if k = m, and y′_{i,k} = log_{ϕ_k}((1 − ϕ_m)^{1/(K−1)}) otherwise.

Here we assume the training example z_i belongs to class m, which means that y_{i,m} = 1 and the other components of the one-hot vector y_i are 0. The original CE loss is − log(ϕ_m(x_i, θ)). If we use our relabeling function to change the label of x_i, the loss at z_i becomes:

l̃(z_i, θ) = −∑_{k≠m} log_{ϕ_k}((1 − ϕ_m)^{1/(K−1)}) × log(ϕ_k)
= −∑_{k≠m} [log((1 − ϕ_m)^{1/(K−1)}) / log(ϕ_k)] × log(ϕ_k)
= −∑_{k≠m} log(1 − ϕ_m) / (K − 1)
= − log(1 − ϕ_m)

In this way, if we use the relabeling function R to change the label of example z_i, the loss at z_i is changed from − log(ϕ_m(x_i, θ)) to − log(1 − ϕ_m(x_i, θ)).
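The algebra above can also be checked numerically; the following small script (our sketch) draws a random softmax output and confirms that the cross-entropy under the relabeled soft label equals − log(1 − ϕ_m).

import numpy as np

rng = np.random.default_rng(1)
K, m = 5, 2
probs = rng.dirichlet(np.ones(K))        # a random softmax output (phi_1, ..., phi_K)
y_new = np.log((1 - probs[m]) ** (1 / (K - 1))) / np.log(probs)
y_new[m] = 0.0

relabeled_ce = -np.sum(y_new * np.log(probs))            # CE loss under the new soft label
print(np.isclose(relabeled_ce, -np.log(1 - probs[m])))   # True, as Lemma 2 states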
B.4 PROOF OF THEOREM 2
Theorem 2. In multi-class classification, let ϕ_y(x_i, θ̂) denote the probability that z_i is classified as its true class label by the model with the optimal parameters θ̂ on D, and let σ be the infimum of ϕ_y(x_i, θ̂)/(1 − ϕ_y(x_i, θ̂)). Relabeling the samples in D− = {z_i ∈ D | Φ_θ(z_i) > 0} with R leads to a test risk lower than the one achieved by discarding or downweighting D−. Formally, we have:

L(Q, θ̂_R) − L(Q, θ̂_ε) ≈ −(σ/N) ∑_{z_i∈D−} Φ_θ(z_i) ≤ 0.

Proof. According to Lemma 2, Eq. (8) and Eq. (10), we can estimate the change of test loss at a test sample z_j^c ∈ Q caused by relabeling as follows:

l(z_j^c, θ̂_{ε_i R_i}) − l(z_j^c, θ̂) ≈ ε_i × η_{θR}(z_i, z_j^c) = −(1/N) × 1/(1 − ϕ_y(x_i, θ̂)) × Φ_θ(z_i, z_j^c)

Further, we can derive the following:

L(Q, θ̂_R) − L(Q, θ̂_ε) = L(Q, θ̂_R) − L(Q, θ̂) + L(Q, θ̂) − L(Q, θ̂_ε)
= ∑_{z_i∈D−} ∑_{j=1}^M [l(z_j^c, θ̂_{ε_i R_i}) − l(z_j^c, θ̂) − (l(z_j^c, θ̂_{ε_i}) − l(z_j^c, θ̂))]
≈ ∑_{z_i∈D−} [∑_{j=1}^M (1/N) η_{θR}(z_i, z_j^c) + (1/N) ∑_{j=1}^M Φ_θ(z_i, z_j^c)]
= (1/N) ∑_{z_i∈D−} ∑_{j=1}^M ( −1/(1 − ϕ_y(x_i, θ̂)) + 1 ) Φ_θ(z_i, z_j^c)
≤ −(σ/N) ∑_{z_i∈D−} Φ_θ(z_i) ≤ 0
C EXPERIMENTAL SETTINGS
C.1 THE STATISTICS OF THE DATASETS
Table 5 shows the statistics of the datasets. We perform extensive experiments on public datasets from different domains to verify the effectiveness and robustness of our approach RDIA. All the datasets could be found in https://www.csie.ntu.edu.tw/cjlin/libsvmtools/datasets/.
C.2 TR-VA-TE DIVISIONS
We follow the Tr-Va-Te (Training/Validation/Test set divisions) setting in Wang et al. (2020b) to measure the generalization ability of our approach RDIA. Specifically, the influence of each training instance is estimated with the validation set using the validation loss and the model’s performance is tested by an additional out-of-sample test set which ensures we do not utilize any information of the test data.
When training logistic regression, we randomly pick up 30% samples from the training set as the validation set. For different influence-based approaches, the training/validation/test sets are kept the same for fair comparison. Both MNIST and CIFAR10 are 10-classes image classification datasets while logistic regression can only handle binary classification. On MNIST, we select the number 1 and 7 as positive and negative classes, respectively; On CIFAR10, we perform binary classification on cat and dog. For each image, we convert all the pixels into a flattened feature vector where each pixel is scaled by 1/255.
When training deep models, due to the high time complexity of estimating influence functions, we randomly exclude 100 samples (1%) from the test sets of MNIST and CIFAR10 as the respective validation sets, and the remaining data is used for testing.
C.3 IMPLEMENTATION DETAILS
We used the Newton-CG algorithm (Martens, 2010) to calculate the influence functions for the logistic regression model and applied stochastic estimation (Agarwal et al., 2017) for the two deep models, using 1,000 clean samples in the validation set. For the logistic regression model, we set the regularization term C = 0.1 for fair comparison. We adopt the Adam optimizer with a learning rate of 0.001 to train LeNet on MNIST. After calculating the influence functions and relabeling the identified harmful training samples using R, we reduce the learning rate to 10^-5 and update the models until convergence. For CIFAR10, we use the SGD optimizer with a learning rate of 0.01 and a momentum of 0.9 to train the CNN. Then we change the learning rate to 0.001 and update the models based on the relabeled training set. Here we use different optimizers to train the models. This indicates RDIA is independent of the update strategy used for model training. The batch size is set to 64 in all the experiments and the hyperparameter α is tuned with the validation set for best performance.
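Both solvers ultimately rely on Hessian-vector products, which PyTorch computes with a double backward pass. The sketch below is a simplified illustration of the stochastic inverse-HVP recursion of Agarwal et al. (2017) as used by Koh & Liang (2017), not the exact implementation; the damping and scale constants are illustrative.

import torch

def hvp(loss, params, vec):
    # Hessian-vector product via double backward.
    grads = torch.autograd.grad(loss, params, create_graph=True)
    dot = sum((g * v).sum() for g, v in zip(grads, vec))
    return torch.autograd.grad(dot, params)

def stochastic_inverse_hvp(loss_fn, params, vec, batches, damping=0.01, scale=25.0):
    # LiSSA-style recursion h <- v + (1 - damping) * h - (H h) / scale over mini-batches;
    # h / scale then approximates H^{-1} v (up to the damping term).
    h = [v.clone() for v in vec]
    for batch in batches:
        hv = hvp(loss_fn(batch), params, h)
        h = [v + (1 - damping) * hi - hvi / scale for v, hi, hvi in zip(vec, h, hv)]
    return [hi / scale for hi in h]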
D EXTENSIVE ANALYSIS OF RDIA
D.1 COMPLEXITY ANALYSIS
According to Koh & Liang (2017), the time complexity of calculating influence function for one training sample(i.e., Eq. 2) is O(NP ), where N and P stand for the sizes of training set and model’s parameter set, respectively. Note that the time complexity of relabeling one sample is O(N). Considering the complexity of calculating influence functions, the time cost of relabeling harmful samples is negligible which means our RDIA is as fast as any influence-based approaches.
D.2 RELATIONSHIP WITH PROPENSITY SCORE
Propensity score (Rosenbaum & Rubin, 1983; Bickel et al., 2009) is a well-studied technique to solve the distribution mismatch (also called covariate shift) problem where training and testing sets are sampled from two different distributions Ptrain(x, y) and Ptest(x, y), respectively. Its basic idea is to assign the propensity score to each training sample to make the test risk unbiased. Unlike the influence function calculated by measuring the change of test loss, propensity score is calculated directly by estimating the probability of each training sample belonging to the test distribution. If we could estimate the training and test distribution accurately, we could also use the propensity score to replace the influence function for identifying whether the training sample is harmful. We leave it for the future work.
E VISUALIZATION OF IDENTIFIED HARMFUL SAMPLES
We provide examples of harmful samples identified by influence functions to illustrate the effectiveness of influence analysis. We apply the logistic regression on MNIST (class 1 and 7) and CIFAR10 (class cat and dog). The influence functions are estimated by Newton-CG algorithm (Martens, 2010). We provide the three most harmful images which have the highest influence scores and share the same label with the test sample.
Figure 3(a) shows three identified harmful training images for each test image when there are no flipped labels in the training set. We can see that the identified harmful training samples are visually different from the original pictures, which disturbs the model’s prediction on the test image. That is, the presence of clean but harmful training images would damage the model’s performance.
Figure 3(b) shows the identified harmful training images when 50% labels of training data have been flipped. It is easy to see that the harmful images have corrupted labels, which confirms the effectiveness of applying influence analysis to locate noisy samples.
F RDIA-LS: A LOSS-BASED RELABELING APPROACH
F.1 LIMITATIONS OF RDIA
In the main paper, we have discussed a novel data relabeling framework RDIA via influence analysis. Based on the advantages of influence functions, RDIA is able to handle different types of training biases and is agnostic to a specific model or data type. However, the time complexity of estimating
Algorithm 2: RDIA-LS
Input: Deep neural network θ, learning rate β, training set D, number of training epochs T, number of iterations per epoch N, sample selection ratio ρ, underweight hyperparameter γ such that 0 ≤ γ ≤ 1.
for t ∈ [1, . . . , T] do
    Shuffle training set D;
    for n ∈ [1, . . . , N] do
        Fetch the n-th mini-batch D̄ from D;
        // Step I: identify harmful samples using training loss
        D̄+ ← arg min_{D̄′:|D̄′|≥ρ|D̄|} L(D̄′, θ);
        D̄− ← D̄ \ D̄+;
        // Step II: relabel the identified harmful training samples
        D̄′− ← R(D̄−);
        // Step III: update the model
        Obtain the loss LR = γL(D̄′−, θ) + (1 − γ)L(D̄+, θ);
        θ ← θ − β∇θLR;
    end
end
the influence of one training sample is O(NP), where N and P stand for the sizes of the training set and the model's parameter set, respectively. This is relatively high for deep models, which have a large number of parameters. Moreover, according to (Koh & Liang, 2017), the approximate estimation of influence functions on deep models may not be accurate, and hence the second step of RDIA suffers from false positives and false negatives. When harmful samples account for the majority of the training set, e.g., under high noise rates, it is difficult to filter out most of the harmful samples using the estimated influence.
F.2 RDIA-LS
To address the aforementioned limitations, we aim to extend RDIA to address this specific problem. Here we focus on combating noisy labels with deep models since label noise is usually a primary root cause of training bias. We notice that training loss has been used to filter out training samples with corrupted labels in many previous works (Arpit et al., 2017; Han et al., 2018; Wei et al., 2020; Yu et al., 2019). It is worth mentioning that the noisy training samples identified by training loss are not equivalent to the harmful ones identified by influence functions because the latter are evaluated to have negative influence on the test performance. Nevertheless, since the selected high-loss training samples are very likely to involve corrupted labels, applying our relabeling function over them has the potential of correcting corrupted labels and benefiting the test performance. Besides, using training loss to identify harmful samples is more efficient as it avoids the estimation of influence functions. Hence, we propose to use training loss to identify noisy samples and develop a loss-based data relabeling approach named RDIA-LS, which can be viewed as an extension of RDIA for combating corrupted labels with deep models.
RDIA-LS consists of three steps: noisy samples identification, noisy samples relabeling and model updating. It shares the same last two steps with RDIA. The only difference between RDIA-LS and RDIA is that RDIA-LS uses training loss to identify noisy samples in each training epoch so that it does not need to train the model until convergence first. Specifically, given a mini-batch of training instances D̄ ⊆ D, RDIA-LS feeds forward all the samples in D̄ and then sorts them in an ascending order of their training losses. Following the prior works (He & Garcia, 2009), we regard the large-loss instances as noisy and the small-loss instances as clean. We use the rate of ρ to select the possibly clean training instances in D̄, i.e., D̄+ = arg min_{D̄′:|D̄′|≥ρ|D̄|} L(D̄′, θ). The remaining high-loss training instances are treated as noisy samples, i.e., D̄− = D̄ \ D̄+. We follow (Han et al., 2020) to determine the value of the selection ratio ρ. After we have D̄−, we use our relabeling function R to relabel the samples in D̄− and then update the model with D̄+ ∪ D̄′−. In our implementation, we simply modify the loss of the identified noisy samples based on Lemma 2 without performing actual relabeling. We use the hyperparameter γ ∈ [0, 1] to control the model's tendency of learning from the clean instances and the relabeled noisy instances. The detailed procedure of RDIA-LS is provided in Algorithm 2.
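In code, the shortcut of modifying the loss instead of actually relabeling only needs the − log(1 − ϕ_y) term from Lemma 2; a minimal sketch (ours, with hypothetical tensor names) is:

import torch
import torch.nn.functional as F

def relabeled_loss(logits, targets, eps=1e-6):
    # Loss of the identified noisy samples after applying R, i.e. -log(1 - phi_y) by Lemma 2.
    p_y = F.softmax(logits, dim=1).gather(1, targets.unsqueeze(1)).squeeze(1)
    return -torch.log((1.0 - p_y).clamp(min=eps)).mean()

# Step III of Algorithm 2 then combines the two parts with the underweight factor gamma:
# loss = gamma * relabeled_loss(logits[noisy_idx], y[noisy_idx]) \
#        + (1 - gamma) * F.cross_entropy(logits[clean_idx], y[clean_idx])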
We conduct additional experiments in Appendix G to empirically show that RDIA-LS is effective and efficient at handling training data with corrupted labels in deep learning, which highlights the scalability of our approach RDIA.
G PERFORMANCE EVALUATION OF RDIA-LS
We now conduct experiments to evaluate the effectiveness and efficiency of RDIA-LS using DNNs on MNIST, CIFAR10, CIFAR100 and Clothing1M. The first three datasets are clean and are corrupted artificially. Clothing1M is a widely used real-world dataset with noisy labels (Patrini et al., 2017).
G.1 IMPLEMENTATION DETAILS
We apply the same network structures used in the main paper: LeNet (2 convolutional layers and 1 fully connected layer) for MNIST, a CNN with 6 convolutional layers followed by 2 fully connected layers used in (Wang et al., 2019) for CIFAR10 and CIFAR100, and an 18-layer ResNet for Clothing1M. We follow the settings in (Han et al., 2018) for all the comparison methods. Specifically, for MNIST, CIFAR10 and CIFAR100, we use the Adam optimizer with a momentum of 0.9, an initial learning rate of 0.001, and a batch size of 128. We run 200 epochs (T = 200) in total and linearly decay the learning rate to zero from epoch 80 to epoch 200. As for Clothing1M, we use the Adam optimizer with a momentum of 0.9 and set the batch size to 64. We run 15 epochs in total and set the learning rate to 8 × 10−4, 5 × 10−4 and 5 × 10−5 for five epochs each. We set the ratio of small-loss instances as ρ = 1 − min{(t / T_k) · τ, τ}, which changes dynamically with the current training epoch t, where T_k = 5 for Clothing1M and T_k = 10 for the other datasets. In this way, we can determine D̄− and D̄+ in each training epoch. If the noise ratio τ is not known in advance, we could use the method of (Yu et al., 2018) to estimate τ. The hyperparameter γ is tuned in {0.05, 0.10, 0.15, · · · , 0.95} with the validation set for best performance. If there is no validation set, we could use training loss to select a clean subset from the training set as the validation set. Following loss-based approaches (Han et al., 2020; 2018; Jiang et al., 2018), we use the test accuracy as the metric, i.e., (#correct predictions) / (#test instances).
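As a minimal sketch (the function name is ours, and τ and T_k are assumed to be given as above), the selection-ratio schedule can be written as:

```python
def selection_ratio(t, tau, T_k):
    """rho(t) = 1 - min{(t / T_k) * tau, tau}: keep nearly all samples early in training,
    then drop a tau fraction of high-loss samples once t reaches T_k."""
    return 1.0 - min((t / T_k) * tau, tau)

# Example: with tau = 0.4 and T_k = 10, rho decreases from 1.0 at epoch 0
# to 0.6 at epoch 10 and then stays at 0.6.
rhos = [round(selection_ratio(t, 0.4, 10), 2) for t in range(12)]
```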
G.2 COMPARISON METHODS
We compare our proposed RDIA-LS with the following baselines. S-model (Goldberger & Ben-Reuven, 2017) and F-correction (Patrini et al., 2017) are existing data relabeling approaches which aim to estimate the noise transition matrix to correct the noisy labels, and the last three approaches are state-of-the-art loss-based resampling approaches. (1) ERM: it trains one network with all the training data using the cross-entropy loss. (2) S-model (Goldberger & Ben-Reuven, 2017): it uses an additional softmax layer to model the noise transition matrix to correct the model. (3) F-correction (Patrini et al., 2017): it corrects the prediction by the noise transition matrix estimated by another network. (4) Self-teaching (Jiang et al., 2018): it trains one network with only the selected small-loss instances D+. (5) Co-teaching (Han et al., 2018): it trains two networks simultaneously and improves self-teaching by updating the parameters of each network with the small-loss instances D+ selected by the peer network. (6) SIGUA (Han et al., 2020): it trains one network with the selected small-loss instances D+ and high-loss instances D− via gradient descent and gradient ascent, respectively.
G.3 EXPERIMENTAL RESULTS
Comparison with the Baselines.
RDIA-LS is proposed to combat noisy labels for deep learning. In order to evaluate how RDIA-LS improves the robustness of deep models, we perform experiments on MNIST+LeNet, CIFAR10+CNN and CIFAR100+CNN with different noise ratios, and on the real-world noisy dataset Clothing1M+ResNet18. The average test accuracies are reported in Table 6, Table 7, Table 8 and Table 9. We have the following observations. (1) RDIA-LS achieves the highest test accuracy in all the cases. When the noise ratio is 0.2, the improvement of RDIA-LS is relatively small. This is reasonable, as the performance gain of RDIA-LS obtained from utilizing noisy data is restricted by the low noise ratio. When the noise ratio exceeds 0.4, RDIA-LS significantly outperforms the existing loss-based approaches, achieving up to 5% relative improvement in test accuracy. This indicates that RDIA-LS can still effectively reuse harmful training instances to improve the model's robustness under high noise ratios. (2) RDIA-LS consistently performs better than S-model, F-correction, and SIGUA, which implies that using R to relabel noisy training samples identified by training loss is more effective than modeling the noise transition matrix or performing gradient ascent with identified noisy training instances. (3) RDIA-LS outperforms all the baselines on the real-world noisy dataset Clothing1M, which demonstrates the effectiveness of applying RDIA-LS in practice.
Comparison with RDIA.
Table 10 reports the running time of harmful/noisy sample identification in RDIA and RDIA-LS. We exclude the results of LS on logistic regression, since training loss is only used to filter out noisy samples when training deep models. From the table, we can see that using influence functions to identify harmful samples for logistic regression is efficient. However, when training deep models with millions of parameters, using training loss to filter out noisy samples is much more efficient.
RDIA-LS is an extension of RDIA to combat noisy samples with deep models. The aforementioned experimental results show that RDIA-LS is effective and efficient at handling training data with corrupted labels for deep learning. However, it is worth noting that RDIA-LS relies on the small-loss trick, i.e., the assumption that samples with larger training loss are more likely to carry corrupted labels. Consequently, RDIA-LS is only suitable for training deep models against corrupted labels and could fail in situations where the small-loss trick does not hold, whereas RDIA has no such constraint.
H ADDITIONAL DISCUSSION ON DATA RELABELING
Existing relabeling approaches (Goldberger & Ben-Reuven, 2017; Jiang et al., 2018; Lee et al., 2018) are proposed to combat noisy labels with DNNs. They focus on estimating the noise transition matrix to convert the corrupted labels to clean labels. However, current relabeling methods suffer from two limitations. First, they aim to find the true labels of the training samples, which means they can only deal with label noise. Second, they employ additional structures to correct the labels, which depend on the specific model structures. For example, Goldberger et al. (Goldberger & Ben-Reuven, 2017) added an additional softmax layer to represent the noise transition matrix, and CleanNet (Lee et al., 2018) used an auto-encoder to update the corrupted labels. In contrast, we aim to develop a relabeling function based on influence analysis that changes the labels of harmful samples towards better model performance. We do not require output labels to be one-hot vectors, since our objective is not to recover the true labels of training samples. Besides, we extend our approach to RDIA-LS to effectively combat noisy samples for training DNNs, which outperforms the existing data relabeling approaches (Goldberger & Ben-Reuven, 2017; Jiang et al., 2018).
DUTI (Zhang et al., 2018) is an effective data relabeling approach which can debug and correct wrong labels in the training set. It uses a bi-level optimization scheme to recommend the most influential training samples for cleaning and to suggest possibly cleaned labels. The relabeling function proposed in DUTI is different from our approach. Specifically, the relabeling function in DUTI is trained by a bi-level optimization using the gradient of the validation loss, while our proposed relabeling function does not involve the validation loss or its gradients. | 1. What is the focus and contribution of the paper regarding label noise and generalization?
2. What are the strengths of the proposed approach, particularly in its application and experimental results?
3. Do you have any concerns about the technical novelty of the paper compared to prior works?
4. How does the reviewer assess the clarity, quality, and reproducibility of the paper's content? | Summary Of The Paper
Review | Summary Of The Paper
The paper uses influence functions from robust statistics to perform two tasks in order to mitigate label noise and generalize better: (i) Identify harmful instances via IF; (ii) Relabel the harmful instances to have better generalization.
Review
The paper uses influence functions from robust statistics to first identify harmful instances and then relabel them based on a novel relabeling function using influence approximation. Influence estimations have been widely used to identify harmful instances or understand the impact of training samples. This paper goes one step further and uses this analysis to relabel the instances in order to achieve better generalization and lower test error. While the technical novelty is limited, as the proposed formulations are extensions of existing influence functions, the application is interesting. The authors use their relabeling strategy on multiple datasets and models (including deep models).
Pros:
(1) The paper does a focused job of using influence function for identifying harmful examples and fixing them by relabeling. While in the recent times there are papers on influence functions to identify harmful instances, this paper does a very good comprehensive and focused study compared to others.
(2) The paper is well-written and easy to follow. Related work is well laid out and well covered.
(3) Experimental section is complete with experiments on deep models and the proposed RDIA-LS. The authors acknowledge the limitations of RDIA on deep models due to erroneous influence estimates for deep models and provide a workaround for it in the Appendix. This is an improvement from the earlier versions of the paper (from a previous conference). I would definitely like to highlight this section in the main paper as it’s important for all practical purposes.
Cons:
(1) The technique of using influence function for identifying harmful instances is not new and well known and applied in the recent times. Hence I feel the technical novelty is not solid and limited in some aspects. However saying that, applications of existing influence functions are not straightforward and hence that’s one point to be noted.
In general, the paper is well-organized, presents a good study on influence-based relabeling, and has sufficient empirical studies to back up its proposed formulation.
ICLR | Title
Fast and Accurate Text Classification: Skimming, Rereading and Early Stopping
Abstract
Recent advances in recurrent neural nets (RNNs) have shown much promise in many applications in natural language processing. For most of these tasks, such as sentiment analysis of customer reviews, a recurrent neural net model parses the entire review before forming a decision. We argue that reading the entire input is not always necessary in practice, since a lot of reviews are often easy to classify, i.e., a decision can be formed after reading some crucial sentences or words in the provided text. In this paper, we present an approach of fast reading for text classification. Inspired by several well-known human reading techniques, our approach implements an intelligent recurrent agent which evaluates the importance of the current snippet in order to decide whether to make a prediction, or to skip some texts, or to re-read part of the sentence. Our agent uses an RNN module to encode information from the past and the current tokens, and applies a policy module to form decisions. With an end-to-end training algorithm based on policy gradient, we train and test our agent on several text classification datasets and achieve both higher efficiency and better accuracy compared to previous approaches.
1 INTRODUCTION
Recurrent neural nets (RNNs), including GRU nets (Chung et al., 2014) and LSTM nets (Hochreiter & Schmidhuber, 1997), have been increasingly applied to many problems in natural language processing. Most of the problems can be divided into two categories: sequence to sequence (seq2seq) tasks (Sutskever et al., 2014) (e.g., language modeling (Bengio et al., 2003; Mikolov et al., 2010), machine translation (Cho et al., 2014; Bahdanau et al., 2014; Kalchbrenner & Blunsom, 2013), conversational/dialogue modeling (Serban et al., 2016), question answering (Hermann et al., 2015; Weston et al., 2015; Lee et al., 2016), and document summarization (Rush et al., 2015; Nallapati et al., 2016)); and the classification tasks (e.g., part-of-speech tagging (Santos & Zadrozny, 2014), chunking, named entity recognition (Collobert et al., 2011), sentiment analysis (Socher et al., 2011), and document classification (Kim, 2014; Sebastiani, 2002)). To solve these problems, models often need to read every token or word of the text from the beginning to the end, which is necessary for most seq2seq problems. However, for classification problems, we do not have to treat each individual word equally, since certain words or chunks are more relevant to the classification task at hand. For instance, for sentiment analysis it is sufficient to read the first half of a review like “this movie is amazing” or “it is the best I have ever seen,” to provide an answer even without reading the rest of the review. In other cases, we may want to skip or skim some text without carefully checking it. For example, sentences such as “it’s worth to try” are usually more important than irrelevant text such as “we got here while it’s still raining outside” or “I visited on Saturday.” On the other hand, sometimes, we want to re-read some sentences to figure out the actual hidden message of the text.
All of these techniques enable us to achieve fast and accurate reading. Similarly, we expect RNN models to intelligently determine the importance or the relevance of the current sentence in order to decide whether to make a prediction, whether to skip some texts, or whether to re-read the current sentence.
In this paper, we aim to augment existing RNN models by introducing efficient partial reading for classification, while maintaining a higher or comparable accuracy compared to reading the full text. To do so, we introduce a recurrent agent which uses an RNN module to encode information from the past and the current tokens, and applies a policy module to decide what token to read next (e.g., rereading the current token, reading the next one, or skipping the next few tokens) or whether the model should stop reading and form a decision. To encourage fast and accurate reading, we incorporate both classification accuracy and the computational cost as a reward function to score classification or other actions made by the agent during reading. We expect that our agent will be able to achieve fast reading for classification with both high computational efficiency and good classification performance. To train this model, we develop an end-to-end approach based on the policy gradient method which backpropagates the reward signal into both the policy module (also including the classification policy) and the recurrent encoder.
We evaluate our approach on four different sentiment analysis and document topic classification datasets. By comparing to the standard RNN models and a recent LSTM-skip model which implements a skip action (Yu et al., 2017), we find that our approach achieves both higher efficiency and better accuracy.
2 METHODS
2.1 MODEL OVERVIEW
Given an input sequence x_{1:T} with length T, our model aims to predict a single label y for the entire sequence, such as the topic or the sentiment of a document. We develop a technique for skimming, re-reading, early stopping and prediction, with the goal of (i) skipping irrelevant information and reinforcing the important parts, and (ii) enabling fast and accurate text classification. Specifically, the model reads the current token/chunk x_{i_t} at time step t, encodes the data x_{i_t} and the previous information h_{t−1} into a feature h_t, and then either decides the next token to read by skimming/skipping, or stops to form a final prediction (see Figure 1). Such a model can be fully defined on top of an RNN structure and trained in an end-to-end fashion via back-propagation of a well-defined reward signal. Both skimming and re-reading actions can be defined similarly by first choosing a step size
k ∈ {0, 1, 2, · · · , K} and then setting i_{t+1} = i_t + k. When k = 0, the model rereads the current token; when k = 1, the model moves to the next token sequentially; when k > 1, the model skips the next k − 1 tokens. If the current action is to stop, or the next token to read is after the last token of the input sequence, the model stops and outputs a label. All of these actions are defined by a policy module Π which takes the recurrent feature h_t as input and either outputs a stop signal and a label, or generates a step size k and moves to the next token x_{i_{t+1}} with i_{t+1} = i_t + k.
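To make these action semantics concrete, a minimal sketch of the reading loop is given below; encoder and policy_step are placeholder callables standing in for the RNN encoder and the policy module Π described above (the names are ours), so this illustrates the control flow rather than the actual implementation.

```python
def read_and_classify(tokens, encoder, policy_step, h0=None):
    """Skim/reread/early-stop loop: k = 0 rereads, k = 1 moves on, k > 1 skips k - 1 tokens."""
    h, i = h0, 0
    while i < len(tokens):
        h = encoder(tokens[i], h)           # encode the current token/chunk into h_t
        stop, label, k = policy_step(h)     # sample from pi_S, pi_C and pi_N given h_t
        if stop:
            return label                    # early stopping: output the label now
        i += k                              # k = 0: reread; k = 1: next token; k > 1: skip ahead
    _, label, _ = policy_step(h)            # read past the last token: stop and predict
    return label
```

With a stochastic policy, rereading (k = 0) only delays progress, so the loop terminates in expectation; a cap on consecutive rereads could be added if needed.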
2.2 MODEL SPECIFICATION AND TRAINING
The design of the policy module Π plays a critical role in our framework. It should (i) read as much significant text as possible to ensure a confident classification output and (ii) be computationally efficient, e.g., avoiding reading to the end of the text if sufficient information has already been obtained and skipping irrelevant or unimportant parts of the text. More specifically, at each step, the policy module Π should decide whether the information collected is convincing enough to stop reading and make a prediction. Otherwise, it needs to evaluate the importance of the semantic unit or token just read to decide which token to read in the next step. Formulating this process as a sequential decision process, at each time step t the policy module takes the output h_t of an encoder, which summarizes the text read so far and the current token x_{i_t}, and outputs a probability distribution π_t over actions. It is worth noting that, to save computation, the actions are determined only by the latent representation h_t. At each time step t, a sequence of actions is generated by first sampling a stopping decision in the form of a binary variable s from a Bernoulli distribution π_S(·|h_t). If s = 1, the model stops and draws a label ŷ from a conditional multinomial distribution specified by a classifier π_C(·|h_t, s = 1); otherwise, the model draws a step size k ∈ {0, . . . , K} from another conditional multinomial distribution π_N(·|h_t, s = 0) to jump to the token x_{i_{t+1}} with i_{t+1} = i_t + k. Thus the probability of a sequence of actions that reads the text X_{i_1:i_t} = {x_{i_1}, x_{i_2}, ..., x_{i_t}}, stops at time t, and outputs a label ŷ can be written as the joint distribution
Π(X_{i_1:i_t}, ŷ) = π_S(s = 1|h_t) π_C(ŷ|h_t, s = 1) ∏_{j=1}^{t−1} π_S(s = 0|h_j) π_N(k_j = i_{j+1} − i_j | h_j, s = 0),
or simply as
Π(X_{i_1:i_t}, ŷ) = π_S(1|h_t) π_C(ŷ|h_t, 1) ∏_{j=1}^{t−1} π_S(0|h_j) π_N(k_j|h_j, 0). (1)
Hereby, k_j = i_{j+1} − i_j is the step size sampled at time j, which ensures that the model moves from token x_{i_j} to x_{i_{j+1}}, and h_j = RNN(x_{i_j}, h_{j−1}) is computed by the RNN module.
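Since the policy-gradient update derived below works with log-probabilities, it is convenient to note the logarithm of Eq. (1), which is simply a sum of per-step log-probabilities:

log Π(X_{i_1:i_t}, ŷ) = log π_S(1|h_t) + log π_C(ŷ|h_t, 1) + ∑_{j=1}^{t−1} [ log π_S(0|h_j) + log π_N(k_j|h_j, 0) ].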
To encourage fast and accurate text reading, we want to minimize the difference between the true label and the predicted one while ensuring a low computational cost, which is measured by the length of the text actually read. Hence, as the reward for the last output action, we use −L(ŷ, y), where L is a loss function that measures the discrepancy between the predicted label ŷ and the true label y. For all other actions we use a negative computational cost −F. Hereby, F is the normalized FLOP count used at each time step, which is approximately constant. Note that the FLOP count for the last step, F_t, differs, since it also includes the cost of the classification. Overall, the reward signal is defined as:
r_j =
  −L(ŷ, y) − α F_t,   if j = t is the final time step;
  −α F,   otherwise,
where α is a trade-off parameter between accuracy and efficiency.
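As a minimal sketch (the function and argument names are ours, and the per-step FLOP estimates are assumed to be supplied externally), this reward can be computed as:

```python
def step_reward(j, t, alpha, flops_step, flops_final, cls_loss=None):
    """Reward r_j: loss plus FLOP penalty at the final step, a FLOP penalty otherwise."""
    if j == t:                                    # final time step: stop and classify
        return -cls_loss - alpha * flops_final    # -L(y_hat, y) - alpha * F_t
    return -alpha * flops_step                    # -alpha * F for intermediate steps
```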
Assume that the entire policy Π_θ is parameterized by θ = {θ_{π_S}, θ_{π_C}, θ_{π_N}, θ_{RNN}}, where θ_{RNN} subsumes the parameters of the encoder. Our final goal is to find the optimal θ which maximizes the expected return defined by:
J(θ) = E_{(x,y)∼D} ∑_t E_{(X_{i_1:i_t}, ŷ)∼Π} ∑_{j=1}^{t} γ^{j−1} r_j, (2)
where the first summation is used for integrating over all possible sequences with different lengths to ensure the normalization of the distribution Π, and γ ∈ (0, 1) is a discount factor. It is not hard to
see that J is infeasible to compute by enumerating all possibilities in the summation and expectation. Fortunately, we can apply the policy gradient algorithm (Williams, 1992) to optimize this objective by estimating the gradient using Monte Carlo rollout samples, without doing expensive integration or enumeration. The REINFORCE policy gradient of the objective on data (x, y) can be derived as follows:
∇̂_θ J = ∇_θ [ log π_S(1|h_t) + log π_C(ŷ|h_t, 1) + ∑_{j=1}^{t−1} ( log π_S(0|h_j) + log π_N(k_j|h_j, 0) ) ] ∑_{j=1}^{t} γ^{j−1} r_j.
Considering that the length of the rollout sequence can differ significantly, the space for policy exploration is very large, thus making the variance of the gradient estimation very high. To remedy this, we also implement the advantage actor-critic algorithm (Konda & Tsitsiklis, 2000), which couples partial future return with each action and estimates a value function as the baseline for variance reduction. We find this procedure to provide better performance than the vanilla REINFORCE algorithm.
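As an illustration in our own notation (not the authors' code), the sketch below turns the per-step log-probabilities and rewards of one rollout into a surrogate loss whose gradient matches a baselined policy gradient. It uses per-step returns-to-go, the usual lower-variance form paired with an actor-critic baseline, whereas the derivation above multiplies the summed log-probability by the total discounted return; log_probs, rewards and values are assumed to be 1-D tensors collected during the rollout.

```python
import torch

def policy_gradient_loss(log_probs, rewards, values=None, gamma=0.99):
    """Surrogate loss for one rollout: REINFORCE, optionally with a value baseline."""
    returns, running = [], 0.0
    for r in reversed(rewards.tolist()):           # discounted return-to-go per step
        running = r + gamma * running
        returns.append(running)
    returns = torch.tensor(list(reversed(returns)), dtype=log_probs.dtype)

    if values is not None:                          # advantage actor-critic baseline
        advantages = returns - values.detach()
        value_loss = torch.mean((values - returns) ** 2)
    else:                                           # vanilla REINFORCE
        advantages = returns
        value_loss = torch.zeros(())

    policy_loss = -(log_probs * advantages).sum()
    return policy_loss + 0.5 * value_loss
```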
It is worth noting that this policy gradient method eventually is able to backpropagate both classification accuracy and computational cost signals to every module in our model, including the stopping/skipping distributions, the label distribution and even the recurrent encoder, thus providing an end-to-end solution to text classification problems.
Overall, our model aims to accelerate text classification while still achieving a high accuracy. The hyperparameter α controls the trade-off between accuracy and time cost. If we set α to a relatively large value, our model skips tokens, stops reading and outputs a label more aggressively. If α is small, our model tends to (re)read more tokens. In fact, the penalty on the computational cost can be seen as a Lagrangian term, with α acting as the multiplier that constrains the average computational cost. Therefore, there is a mapping between α and the amortized computational budget allocated to each sample. Given a budget, we can tune α to obtain the model with the best classification accuracy whose amortized cost stays within the budget. This is desirable for many cost-sensitive applications, such as those on mobile devices.
3 EXPERIMENTS
In this section, we illustrate our approach using two representative text classification tasks: sentiment analysis and topic classification. To perform a solid demonstration on re-reading and skimming, we conduct experiments on three different syntactic levels. We will first introduce the results on the word level before discussing character and sentence level performance.
General Experimental Settings: In our experiments, we use the IMDB and Yelp datasets for sentiment analysis, and AG news and DBpedia for topic classification. To evaluate each classifier, we use predictive accuracy as the performance metric and the average per-example floating point operations (FLOPs) as the computational cost metric. We also take the FLOPs of the policy module into account, even though they are much lower than those of the classifier. The cost of the policy module is about 1 to 2 million FLOPs per sentence, which is much smaller than the total FLOPs needed for the recurrent module and the classifier.
Hyper-parameters: We use the Adam (Kingma & Ba, 2014) optimizer with a learning rate of 0.001 in all experiments. For the recurrent network structure, we use a convolution layer with 128 kernels of size 5 and feed its output to an LSTM with a hidden size of 128. For the π_S and π_N policy networks, we use three-hidden-layer MLPs with 128 hidden units per layer. For the π_C and value networks, we use single-layer MLPs with 128 hidden units. For all experiments, the maximal step size K is set to 3.
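The following PyTorch-style sketch illustrates one way to wire these components together; the module and argument names, the vocabulary size and the number of classes are our own placeholders, and details such as how a chunk is pooled are simplified relative to the paper.

```python
import torch
import torch.nn as nn

def mlp(in_dim, out_dim, hidden_layers, width=128):
    """MLP with the given number of hidden layers of `width` units each."""
    layers, d = [], in_dim
    for _ in range(hidden_layers):
        layers += [nn.Linear(d, width), nn.ReLU()]
        d = width
    layers.append(nn.Linear(d, out_dim))
    return nn.Sequential(*layers)

class SkimReader(nn.Module):
    """Conv + LSTM encoder with stop (pi_S), step-size (pi_N), classifier (pi_C) and value heads."""
    def __init__(self, vocab_size, num_classes, emb_dim=128, hidden=128, max_step=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.conv = nn.Conv1d(emb_dim, 128, kernel_size=5, padding=2)
        self.lstm = nn.LSTMCell(128, hidden)
        self.stop_head = mlp(hidden, 2, hidden_layers=3)             # pi_S: stop / continue
        self.step_head = mlp(hidden, max_step + 1, hidden_layers=3)  # pi_N over k in {0, ..., K}
        self.cls_head = mlp(hidden, num_classes, hidden_layers=1)    # pi_C: label distribution
        self.value_head = mlp(hidden, 1, hidden_layers=1)            # baseline value function

    def encode_chunk(self, chunk_ids, state=None):
        """Encode one chunk of token ids (batch, chunk_len) into the next LSTM state."""
        x = self.embed(chunk_ids).transpose(1, 2)     # (batch, emb_dim, chunk_len)
        x = torch.relu(self.conv(x)).mean(dim=2)      # pool the chunk into one feature vector
        return self.lstm(x, state)                    # returns (h_t, c_t)
```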
3.1 RESULTS
We first evaluate our method on the IMDB movie dataset (Maas et al., 2011). We randomly split it into 20,000 training, 5,000 validation and 25,000 test samples. The average length in the dataset is 240 words. We adopt the same preprocessing method as Yu et al. (2017), either padding or
truncating each sentence to 400 words. We use a chunk-size of 20 words, i.e., at each step, the classifier reads 20 words. When the action is rereading or skipping, it rereads or skips several chunks of 20 words.
To demonstrate the effectiveness of re-reading and skimming, we design three baseline models: (1) The early stopping model, which has only a stopping module to decide when to terminate reading the paragraph; the classifier and policy module are jointly trained on the entire training corpus. (2) The partial reading model, which is a classifier with the same architecture trained on the truncated sentences determined by the stopping model (the same one as in the early stopping model). Thus, although the partial reading model has the same computational budget as the early stopping model, the prediction performance may differ. (3) The whole reading model, which uses the whole corpus as training data.
Figure 2 shows our comparison on the IMDB dataset, where the blue line indicates our proposed model, while the green and red ones denote the early-stopping model and the partial reading model, respectively. The x-axis denotes the FLOP count (in millions) and the y-axis indicates the accuracy. Here the FLOP count is determined by the choice of the hyper-parameter α. As α increases, we obtain a curve indicating the trade-off between accuracy and energy cost. From this plot, we observe that both the blue and green lines outperform the red line significantly. In addition, rereading and skipping further improve the performance of the model with only the early stopping mechanism. This observation implies that training the classifier jointly with the policy model improves both computational efficiency and accuracy.
Besides the word-level evaluation, we also conduct experiments on a smaller-scale syntactic unit: the character level. In detail, we perform topic classification on two large-scale text datasets (Zhang et al., 2015): the AG news dataset contains four topics, 120,000 training articles, 10,000 validation articles and 7,600 test articles. The DBpedia dataset contains 14 topics, 560,000 training entities, 10,000 validation entities and 70,000 test entities. The results are summarized in Figure 3. We observe that our proposed model outperforms the partial reading baseline by a significant margin.
Furthermore, we evaluate our proposed model on a larger syntactic level: sentence level. We use Yelp review sentiment analysis dataset for this experiment. The Yelp dataset includes 500,000 training reviews, 20,000 validation reviews and 40,000 testing reviews. To evaluate on the larger semantic unit, we treat each sentence as a token, which will be read sequentially by the RNN encoder. The performance is provided in Figure 4. We observe that our proposed model achieves superior performance while being significantly faster.
We summarize the obtained performance improvements in Table 1. On four different datasets and for three different syntactic levels we observe significant speedups when using the proposed techniques for skimming, rereading and early stopping, while maintaining the accuracy. A partial reading model
which has the same computational cost achieves results that are less accurate, which illustrates the benefits of a flexible model. In addition, our model achieves about 0.5-1 percent accuracy improvement compared to the full-reading model.
Finally, we compare our model to a recently published baseline (Yu et al., 2017), which only implements the skipping actions with k ∈ {1, 2, ..., K} without rereading, and simply performs early stopping when k = 0. We implemented their algorithm for a fair comparison. Results in Table 2 show that our model is much more efficient than their LSTM-skip model at the same level of accuracy, which is marginally better than the full-reading baseline. These results demonstrate that our proposed rereading and skimming mechanisms are effective on a variety of text classification tasks, including sentiment analysis and topic classification. They are also effective on different levels of semantics: the character level, the word level, and even the sentence level. With the help of our mechanisms, we achieve both higher accuracy and faster speed.
3.2 ABLATION ANALYSIS
In this section, we conducted an ablation study to demonstrate the effectiveness of each action mechanism in our method: skimming, rereading and early-stopping. Our experiment was performed
on the word-level IMDB dataset, and the result is presented in Figure 5. The blue curve denotes the performance of the model with all actions (skimming, rereading and early-stopping) enabled. The green one denotes the performance of the model with only the early-stopping actions. Between these two curves, the red curve represents a model with rereading and early-stopping action, and the yellow line represents a model with skimming and early-stopping actions. Note that the performance of the green curve is the worst, indicating that rereading and skimming mechanisms are necessary. Furthermore, the blue curve is better than all other ones, indicating that combining skimming and rereading together can further improve the performance of the policy model.
3.3 CASE STUDIES
To obtain a more detailed understanding of our model, we first show the actions taken by our model on a sentiment analysis example (Figure 6), on which the LSTM full-reading model failed to give the right classification. We show the degree of positiveness given by the LSTM model encoded in color, from green representing positiveness to brown representing negativeness.
The paragraph starts with a strongly positive sentence about a dinner, followed by a few sentences giving a confusing description of the dinner. Many trivial or even negative words show up in this explanation. As a result, the output of the full-reading model gradually changes from positive to negative and finally results in a negative signal. Importantly, after our model reads the first two sentences, the policy module decides that it is confident enough to make a decision, yielding the correct answer.
Next we illustrate how the rereading and skimming actions are useful to identify important information in the text. As shown in Figure 7, our model first reads the key word “stake” and is confident that the document is about money. Then it skims a few irrelevant tokens and reads about “buying stake in Biotechnology” in the following two tokens. The phrase “5 percent stake” shows up twice. Our model considers it important, so it re-reads this token. At this point, the model basically knows this text is about business with reasonable confidence. Then it skips ahead to read about the “collaboration deal” and stops to make a confident prediction.
4 RELATED WORK
The idea of improving time efficiency with adaptive computation has been studied extensively throughout the years (Weiss & Taskar, 2013). For example, the adaptive computation time algorithm (Graves, 2016) on recurrent neural networks proposed to utilize an early stopping action to save
computational cost. Spatially adaptive computation time (Figurnov et al., 2016) was proposed for image classification and object detection tasks. Compared to their work, our model gains power by exploiting the combinatorial complexity of its action space.
Attention mechanisms applied to text data are also related. ReasonNet (Shen et al., 2017) trains a policy module to determine whether to stop before accessing the full text on question-answering tasks. Similarly, the model of Dulac-Arnold et al. (2011) performs early stopping on text classification tasks. Compared with these related works, our proposed skimming and rereading mechanisms are novel. In addition, Choi et al. (2016) and Lei et al. (2016) propose to select the relevant sentences which are critical for question answering and sentiment analysis, respectively. Their methods utilize prediction accuracy as the reward signal to train the policy module. In our work, however, the policy module is trained considering both accuracy and computational cost explicitly.
Other ways to reduce the inference computational cost for new examples have been considered. Bengio et al. (2015) proposes a scheme to selectively activate parts of the network. Bolukbasi et al. (2017) presents two schemes to adaptively utilize the network during inference: Given each data point, they first select a network and then select some components of that network.
One closely related work is Yu et al. (2017). The authors train their policy network end-to-end with reinforcement learning. In contrast to their work, our model implements a human-like rereading mechanism and a separate early-stopping mechanism, leading to further improved efficiency and accuracy. Furthermore, we rely on few hyper-parameters and only use a simple reward structure. Finally, we obtain better performance through a reward design that explicitly incorporates the negative computational cost and a value network that reduces the variance of the gradient estimates.
5 CONCLUSIONS
We develop an end-to-end trainable approach for skimming, rereading and early stopping applicable to classification tasks. By mimicking human fast reading, we introduce a policy module to decide what token to read next (e.g., rereading the current token, reading the next one, or skipping the next few tokens) or whether the model should stop reading and form a decision. To encourage fast and accurate reading, we incorporate both the classification accuracy and the computational cost into a reward function that scores the classification or other actions made by the agent during reading. An end-to-end training algorithm based on the policy gradient method backpropagates the reward signal into both the policy module (including the classification policy) and the recurrent encoder. We evaluate the proposed approach on four different datasets and demonstrate improvements in both accuracy and computational performance.
6 APPENDIX
6.A COMPARISON OF DIFFERENT CHUNK SIZE
To illustrate that our model’s performance is robust to the choice of chunk size, we investigate the model performance with a variety of chunk sizes on the IMDB dataset. The result is shown in Figure 8. Here the red curve denotes the performance of the partial reading baseline, and the other three curves denote the performance of our full-action model with chunk sizes 8, 20 and 40, respectively. It is clear that our model outperforms the baselines significantly with different choices of chunk size.
In addition, we found that if the chunk size is too small, there are more decision steps inside each sentence, making the policy optimization more difficult. For instance, the performance with chunk size 8 seems worse than with the two larger chunk sizes. We believe this issue may be overcome by applying more advanced policy optimization algorithms such as proximal policy optimization (Schulman et al., 2017). On the other hand, if the chunk size is too large, there are fewer decision steps, making the model not flexible enough. Among the three choices, we found that a chunk size of 20 works best in our experiments. | 1. What is the main contribution of the paper, and how does it aim to improve text classification tasks?
2. What are the strengths and weaknesses of the proposed approach, particularly regarding its ability to skip certain tokens and stop earlier?
3. How does the reviewer assess the significance of the work compared to prior research, such as Yu et al. (2017)?
4. What are the limitations of the paper, including the assumption that the model can handle the earlier parts of the text better?
5. How does the reviewer evaluate the effectiveness of the experiments and the trade-off between savings and accuracy?
6. Are there any questions or concerns regarding the choice of dataset, the use of smaller chunks, and the advantage actor-critic method?
7. How does the reviewer assess the clarity and quality of the writing in the paper? | Review | Review
This paper proposes to augment RNNs for text classification with a mechanism that decides whether the RNN should re-read a token, skip a number of tokens, or stop and output the prediction. The motivation is that one can stop reading before the end of the text and/or skip some words and still arrive to the same answer but faster.
The idea is intriguing, even though not entirely novel. Apart from the Yu et al. (2017) cited, there is older work trying to save computational time in NLP, e.g.:
Dynamic Feature Selection for Dependency Parsing.
He He, Hal Daumé III and Jason Eisner.
Empirical Methods in Natural Language Processing (EMNLP), 2013
that decides whether to extract a feature or not.
However, what is not clear to me what is achieved here. In the example shown in Figure 5 it seems like what happens is that by not reading the whole text the model avoids passages that might be confusing it. This could improve predictive accuracy (as it seems to do), as long as the model can handle better the earlier part of the text. But this is just an assumption, which is not guaranteed in any way. It could be that the earlier parts of the text are hard for the model. In a way, it feels more like we are addressing a limitation of RNN models in understanding text.
Pros:
- The idea is intersting and if evaluated thoroughly it could be quite influential.
Cons:
- the introduction states that there are two kinds of NLP problems, sequence2sequence and sequence2scalar. I found this rather confusing since text classification falls in the latter presumably, but the output is a label. Similarly, PoS tagging has a linear chain as its output, can't see why it is sequence2scalar. I think there is a confusion between the methods used for a task, and the task itself. Being able to apply a sequence-based model to a task, doesn't make it sequential necessarily.
- the comparison in terms of FLOPs is a good idea. But wouldn't the relative savings depend on the architecture used for the RNN and the RL agent? E.g. it could be that the RL network is more complicated and using it costs more than what it saves in the RNN operations.
- While table 2 reports the savings vs the full reading model, we don't know how much worse the model got for these savings.
- Having a trade-off between savings and accuracy is a good idea too. I would have liked to see an experiment showing how many FLOPs we can save for the same performance, which should be achievable by adjusting the alpha parameter.
- The experiments are conducted on previously published datasets. It would be good to have some previously published results on them to get a sense of how good the RNN model used is.
- Why not use smaller chunks? 20 words or one sentence at the time is rather coarse. If anything, it should help the model proposed achieve greater savings. How much does the choice of chunk matter?
- it is stated in the conclusion that the advantage actor-critic used is beneficial, however no experimental comparison is shown. Was it used for the Yu et al. baseline too?
- It is stated that model hardly relies on any hyperparameters; in comparison to what? It is better to quantify such statements, |
ICLR | Title
Fast and Accurate Text Classification: Skimming, Rereading and Early Stopping
Abstract
Recent advances in recurrent neural nets (RNNs) have shown much promise in many applications in natural language processing. For most of these tasks, such as sentiment analysis of customer reviews, a recurrent neural net model parses the entire review before forming a decision. We argue that reading the entire input is not always necessary in practice, since a lot of reviews are often easy to classify, i.e., a decision can be formed after reading some crucial sentences or words in the provided text. In this paper, we present an approach of fast reading for text classification. Inspired by several well-known human reading techniques, our approach implements an intelligent recurrent agent which evaluates the importance of the current snippet in order to decide whether to make a prediction, or to skip some texts, or to re-read part of the sentence. Our agent uses an RNN module to encode information from the past and the current tokens, and applies a policy module to form decisions. With an end-to-end training algorithm based on policy gradient, we train and test our agent on several text classification datasets and achieve both higher efficiency and better accuracy compared to previous approaches.
1 INTRODUCTION
Recurrent neural nets (RNNs), including GRU nets (Chung et al., 2014) and LSTM nets (Hochreiter & Schmidhuber, 1997), have been increasingly applied to many problems in natural language processing. Most of the problems can be divided into two categories: sequence to sequence (seq2seq) tasks (Sutskever et al., 2014) (e.g., language modeling (Bengio et al., 2003; Mikolov et al., 2010), machine translation (Cho et al., 2014; Bahdanau et al., 2014; Kalchbrenner & Blunsom, 2013), conversational/dialogue modeling (Serban et al., 2016), question answering (Hermann et al., 2015; Weston et al., 2015; Lee et al., 2016), and document summarization (Rush et al., 2015; Nallapati et al., 2016)); and the classification tasks (e.g., part-of-speech tagging (Santos & Zadrozny, 2014), chunking, named entity recognition (Collobert et al., 2011), sentimental analysis (Socher et al., 2011), and document classification (Kim, 2014; Sebastiani, 2002)). To solve these problems, models often need to read every token or word of the text from beginning to the end, which is necessary for most seq2seq problems. However, for classification problems, we do not have to treat each individual word equally, since certain words or chunks are more relevant to the classification task at hand. For instance, for sentiment analysis it is sufficient to read the first half of a review like “this movie is amazing” or “it is the best I have ever seen,” to provide an answer even without reading the rest of the review. In other cases, we may want to skip or skim some text without carefully checking it. For example, sentences such as “it’s worth to try” are usually more important than irrelevant text such as “we got here while it’s still raining outside” or “I visited on Saturday.” On the other hand, sometimes, we want to re-read some sentences to figure out the actual hidden message of the text.
All of these techniques enable us to achieve fast and accurate reading. Similarly, we expect RNN models to intelligently determine the importance or the relevance of the current sentence in order to decide whether to make a prediction, whether to skip some texts, or whether to re-read the current sentence.
In this paper, we aim to augment existing RNN models by introducing efficient partial reading for classification, while maintaining a higher or comparable accuracy compared to reading the full text. To do so, we introduce a recurrent agent which uses an RNN module to encode information from the past and the current tokens, and applies a policy module to decide what token to read next (e.g., rereading the current token, reading the next one, or skipping the next few tokens) or whether the model should stop reading and form a decision. To encourage fast and accurate reading, we incorporate both classification accuracy and the computational cost as a reward function to score classification or other actions made by the agent during reading. We expect that our agent will be able to achieve fast reading for classification with both high computational efficiency and good classification performance. To train this model, we develop an end-to-end approach based on the policy gradient method which backpropagates the reward signal into both the policy module (also including the classification policy) and the recurrent encoder.
We evaluate our approach on four different sentiment analysis and document topic classification datasets. By comparing to the standard RNN models and a recent LSTM-skip model which implements a skip action (Yu et al., 2017), we find that our approach achieves both higher efficiency and better accuracy.
2 METHODS
2.1 MODEL OVERVIEW
Given an input sequence x1:T with length T , our model aims to predict a single label y for the entire sequence, such as the topic or the sentiment of a document. We develop a technique for skimming, re-reading, early stopping and prediction, with the goal of (i) skipping irrelevant information and reinforcing the important parts, and (ii) to enable fast and accurate text classification. Specifically, the model will read the current token/chunk xit at time step t, encode the data xit and previous information ht−1 into a feature ht, and then decide the next token to read by skimming/skipping or to stop to form a final prediction (see Figure 1). Such a model can be fully defined on top of a RNN structure and trained in an end-to-end fashion via back-propagation of a well defined reward signal. Both skimming and re-reading actions can be defined similarly by first choosing a step size
k ∈ {0, 1, 2, · · · ,K} and then setting it+1 = it + k. When k = 0, the model rereads the current token; when k = 1, the model moves to the next token sequentially; when k > 1, the model skips the next k − 1 tokens. If the current action is to stop or the next token to read is after the last token of the input sequence text, the model will stop and output a label. All of these actions are defined by a policy module Π which takes the recurrent feature ht as input and outputs a stop signal and a label or generates a step size k and moves to the next token xit+1=it+k.
2.2 MODEL SPECIFICATION AND TRAINING
The design of the policy module Π plays an critical role in our framework. It should (i) read as much significant text as possible to ensure a confident classification output and (ii) be computationally efficient, e.g., avoiding reading to the end of the text if sufficient information is already obtained and skipping irrelevant or unimportant part of the text. More specifically, for each step, the policy module Π should decide whether the information collected is convincing enough to stop reading and make a prediction. Otherwise it will need to evaluate the importance of the current semantic unit or token just read to decide which token to be read in the next step. By formulating this process as a sequential decision process, at each time step t, the policy module takes the output ht of an encoder, which summarizes the text read before and the current token xit , and outputs a probability distribution πt defined over actions. It is worth noting that to save computation, the actions are determined only by the latent representation ht. At each time step t, a sequence of actions are generated by first sampling a stopping decision in the form of a binary variable s from a Bernoulli distribution πS(·|ht). If s = 1, the model stops and draws a label ŷ from a conditional multinomial distribution specified by a classifier πC(·|ht, s = 1); otherwise, the model draws a step size k ∈ {0, . . . ,K} from another conditional multinomial distribution πN (·|ht, s = 0) to jump to the token xit+1=it+k. Thus the probability of a sequence of actions that reads text Xi1:it = {xi1 , xi2 , ..., xit}, stops at time t, and outputs a label ŷ can be written as the joint distribution
Π(Xi1:it , ŷ) = πS(s = 1|ht)πC(ŷ|ht, s = 1) t−1∏ j=1 πS(s = 0|hj)πN (kj = ij+1 − ij |hj , s = 0),
or simply as
Π(Xi1:it , ŷ) = πS(1|ht)πC(ŷ|ht, 1) t−1∏ j=1 πS(0|hj)πN (kj |hj , 0). (1)
Hereby, kj = ij+1− ij is the step size sampled at time j which ensures the model moves from token xij to xij+1 , and hj = RNN(xij , hj−1) is computed by the RNN module.
To encourage fast and accurate text reading, we want to minimize the difference between true label and predicted while ensuring a low computational cost, which is measured by the length of the assessed text. Hence, as the reward for the last output action, we use −L (ŷ, y), where L is a loss function that measures the accuracy between predicted label ŷ and true label y. For other actions we use a negative computational cost −F . Hereby, F is the normalized FLOP count used at each time step which is approximately constant. Note that the FLOP count for the last step, Ft, differs, since it also includes the cost of the classification. Overall, the reward signal is defined as:
rj = { −L (ŷ, y)− αFt if j = t is the final time step −αF otherwise ,
where α is a trade-off parameter between accuracy and efficiency.
Assume that the entire policy Πθ is parameterized by θ = {θπS , θπC , θN , θRNN}, where θRNN subsumes the parameters for the encoder. Our final goal is to find the optimal θ which maximize the expected return defined by:
J(θ) = E(x,y)∼D ∑ t E(Xi1:it ,ŷ)∼Π t∑ j=1 γj−1rj , (2) where the first summation is used for integrating all possible sequences with different lengths to ensure the normalization of the distribution Π, and γ ∈ (0, 1) is a discount factor. It is not hard to
see that J is infeasible to compute by enumerating all possibilities in the summation and expectation. Fortunately, we can apply the policy gradient algorithm (Williams, 1992) to optimize this objective by estimating the gradient using Monte Carlo rollout samples, without doing expensive integration or enumeration. The REINFORCE policy gradient of the objective on data (x, y) can be derived as follows:
∇̂θJ = ∇θ[log πS(1|ht) + log πC(ŷ|ht, 1) + t−1∑ j=1 (log πS(0|hj) + log πN (kj |hj , 0))] t∑ j=1 γj−1rj .
Considering that the length of the rollout sequence can differ significantly, the space for policy exploration is very large, thus making the variance of the gradient estimation very high. To remedy this, we also implement the advantage actor-critic algorithm (Konda & Tsitsiklis, 2000), which couples partial future return with each action and estimates a value function as the baseline for variance reduction. We find this procedure to provide better performance than the vanilla REINFORCE algorithm.
It is worth noting that this policy gradient method eventually is able to backpropagate both classification accuracy and computational cost signals to every module in our model, including the stopping/skipping distributions, the label distribution and even the recurrent encoder, thus providing an end-to-end solution to text classification problems.
Overall, our model aims to accelerate text classification while still achieving a high accuracy. The hyperparameter α is used to control the trade-off between accuracy and time cost. If we set α to be a relatively large value, our model will be more boldly to skip tokens, stop reading and output a label. If α is small, our model would like to (re)read more tokens. Actually, the reward for penalizing the computational cost can be seen as a Lagrangian multiplier which is used to constrain the average cost of the computation. Therefore, there is a mapping between α and the amortized computational budget allocated for each sample. Given a budget, we can tune α to provide a model with best classification accuracy with the amortized cost within the budget. This is desirable for many cost-sensitive applications, such as those on mobile devices.
3 EXPERIMENTS
In this section, we illustrate our approach using two representative text classification tasks: sentiment analysis and topic classification. To perform a solid demonstration on re-reading and skimming, we conduct experiments on three different syntactic levels. We will first introduce the results on the word level before discussing character and sentence level performance.
General Experimental Settings: In our experiments, we use the IMDB and Yelp dataset for sentiment analysis, and the AG news and DBpedia for topic classification. To evaluate each classifier, we use predictive accuracy as the performance metric and average per-data floating point operations (FLOPs) as the computational cost metric. We also take the FLOPs of the policy module into account, even though they are much lower than the classifier. The energy cost for the policy module is about 1 to 2 million FLOPs per sentence, which is much smaller than the total FLOPs needed for the recurrent module and the classifier.
Hyper-parameters: We use the Adam (Kingma & Ba, 2014) optimizer with a learning rate of 0.001 in all experiments. For the recurrent network structure, we use a convolution layer with 128 kernels of size 5 and stack it as input to an LSTM with a hidden size of 128. For πS and πN policy network, we use a three hidden-layer MLP with 128 hidden units per layer. For πC and value network, we use a single-layer MLP with 128 hidden units. For all experiments, the maximal step size K is set to 3.
3.1 RESULTS
We first evaluate our method on the IMDB movie dataset (Maas et al., 2011). We randomly split it into 20,000 training, 5,000 validation and 25,000 test samples. The average length in the dataset is 240 words. We adopt the same preprocessing method as Yu et al. (2017), either padding or
truncating each sentence to 400 words. We use a chunk-size of 20 words, i.e., at each step, the classifier reads 20 words. When the action is rereading or skipping, it rereads or skips several chunks of 20 words.
To demonstrate the effectiveness of re-reading and skimming, we design three baseline models: (1) The early stopping model, which has only a stopping module to decide when to terminate reading the paragraph, the classifier and policy module are jointly trained on the entire training corpus; (2) The partial reading model, which is a classifier with same architecture trained on the truncated sentences decided by the stopping model (same as the one in the early stopping model. Thus, although the partial reading model has the same computational budget as the early stopping model, the prediction performance may differ; (3) The whole reading model, which tries to use the whole corpus as training data.
Figure 2 shows our comparison on the IMDB dataset, where the blue line indicates our proposed model while green and red one denote early-stopping model and partial reading model, respectively. The x-axis denotes the FLOP count (in millions) and the y-axis indicates the accuracy. Here the FLOP count is determined by the choice of the hyper-parameter α. As α increases, we obtain a curve indicating the trade-off between accuracy and energy cost. From this plot, we observe that both blue line and green line outperform the red line significantly. In addition, rereading and skipping further improve the performance of the model with only the early stopping mechanism. This observation implies that training the classifier jointly with the policy model improves both computational efficiency and accuracy.
Besides the word-level evaluation, we also conduct experiments on a smaller scale syntactic unit: character-level. In detail, we perform topic classification on two large-scale text datasets (Zhang et al., 2015): the AG news dataset contains four topics, 120,000 training news, 10,000 validation news, 7,600 testing news. The DBpedia dataset contains 14 topics, 560,000 training entities, 10,000 validation entities and 70,000 testing entities. The results are summarized in Figure 3. We observe that our proposed model outperforms the partial reading baseline by a significant margin.
Furthermore, we evaluate our proposed model on a larger syntactic level: sentence level. We use Yelp review sentiment analysis dataset for this experiment. The Yelp dataset includes 500,000 training reviews, 20,000 validation reviews and 40,000 testing reviews. To evaluate on the larger semantic unit, we treat each sentence as a token, which will be read sequentially by the RNN encoder. The performance is provided in Figure 4. We observe that our proposed model achieves superior performance while being significantly faster.
We summarize the obtained performance improvements in Table 1. On four different datasets and for three different syntactic levels we observe significant speedups when using the proposed techniques for skimming, rereading and early stopping, while maintaining the accuracy. A partial reading model
which has the same computational cost achieves results that are less accurate, which illustrates the benefits of a flexible model. In addition, our model achieves about 0.5-1 percent accuracy improvement compared to the full-reading model.
Finally, we compare our model to a recently published baseline (Yu et al., 2017), which only implements skipping actions with k ∈ {1, 2, ...,K}, without rereading, and simply stops early when k = 0. We implemented their algorithm for a fair comparison. Results in Table 2 show that our model is much more efficient than their LSTM-skip model at the same level of accuracy, which is marginally better than the full-reading baseline. These results demonstrate that our proposed rereading and skimming mechanisms are effective on a variety of text classification tasks, including sentiment analysis and topic classification, and on different levels of semantics: character, word, or even sentence level. With the help of these mechanisms, we achieve both higher accuracy and faster speed.
3.2 ABLATION ANALYSIS
In this section, we conduct an ablation study to demonstrate the effectiveness of each action mechanism in our method: skimming, rereading and early-stopping. The experiment is performed
on the word-level IMDB dataset, and the result is presented in Figure 5. The blue curve denotes the performance of the model with all actions (skimming, rereading and early-stopping) enabled. The green curve denotes the performance of the model with only the early-stopping action. Between these two, the red curve represents a model with rereading and early-stopping actions, and the yellow curve represents a model with skimming and early-stopping actions. Note that the performance of the green curve is the worst, indicating that the rereading and skimming mechanisms are necessary. Furthermore, the blue curve is better than all the others, indicating that combining skimming and rereading can further improve the performance of the policy model.
3.3 CASE STUDIES
To obtain a more detailed understanding of our model, we first show the actions taken by our model on a sentiment analysis example (Figure 6), on which the LSTM full-reading model failed to give the right classification. We show the degree of positiveness given by the LSTM model, encoded in color from green (positive) to brown (negative).
The paragraph starts with a strongly positive sentence about a dinner, followed by a few sentences giving a confusing description of the dinner. Many trivial or even negative words show up in this explanation. As a result, the output of the full-reading model gradually changes from positive to negative and finally results in a negative signal. Importantly, after our model reads the first two sentences, the policy module decides that it is confident enough to make a decision, yielding the correct answer.
Next we illustrate how the rereading and skimming actions are useful for identifying important information in the text. As shown in Figure 7, our model first reads a key word “stake” and is confident that the document is about money. Then it skims a few irrelevant tokens and reads about “buying stake in Biotechnology” in the following two tokens. The phrase “5 percent stake” shows up twice. Our model considers it to be important, so it rereads this token. At this point, the model basically knows this text is about business with reasonable confidence. Then it skips to read about a “collaboration deal” and stops to make a confident prediction.
4 RELATED WORK
The idea of improving time efficiency with adaptive computation has been studied extensively over the years (Weiss & Taskar, 2013). For example, the adaptive computation time algorithm (Graves, 2016) for recurrent neural networks uses an early stopping action to save
computational cost. Spatially adaptive computation time (Figurnov et al., 2016) was proposed for image classification and object detection tasks. Compared to their work, our model is more expressive because it exploits the combinatorial complexity of its actions.
Attention mechanisms applied to text data are also related. ReasonNet (Shen et al., 2017) trains a policy module to determine whether to stop before accessing the full text on question-answering tasks. Similarly, the model of Dulac-Arnold et al. (2011) performs early stopping on text classification tasks. Compared with these related works, our proposed model’s skimming and rereading mechanisms are innovative. In addition, Choi et al. (2016) and Lei et al. (2016) propose to select the relevant sentences which are critical for question answering and sentiment analysis, respectively. Their methods use prediction accuracy as the reward signal to train the policy module. In our work, however, the policy module is trained considering both accuracy and computational cost explicitly.
Other ways to reduce the inference computational cost for new examples have been considered. Bengio et al. (2015) proposes a scheme to selectively activate parts of the network. Bolukbasi et al. (2017) presents two schemes to adaptively utilize the network during inference: Given each data point, they first select a network and then select some components of that network.
One closely related work is Yu et al. (2017). The authors train their policy network end-to-end with reinforcement learning. In contrast to their work, our model implements a human-like rereading mechanism and a separate early-stopping mechanism, leading to further improved efficiency and accuracy. Furthermore, we rely on few hyper-parameters and use a simple reward structure. Finally, we achieve better performance through a reward design that incorporates the negative energy cost explicitly and by implementing a value network to reduce the variance.
5 CONCLUSIONS
We develop an end-to-end trainable approach for skimming, rereading and early stopping applicable to classification tasks. By mimicking human fast reading, we introduce a policy module to decide what token to read next (e.g., rereading the current token, reading the next one, or skipping the next few tokens) or whether the model should stop reading and form a decision. To encourage fast and accurate reading, we incorporate both classification accuracy and the computational cost as a reward function to score classification or other actions made by the agent during reading. An end-to-end training algorithm based on the policy gradient method backpropagates the reward signal into both the policy module (also including the classification policy) and the recurrent encoder. We demonstrate the efficacy of the proposed approach on four different datasets, with improvements in both accuracy and computational performance.
6 APPENDIX
6.A COMPARISON OF DIFFERENT CHUNK SIZE
To illustrate that our model’s performance is robust to the choice of chunk size, we investigate the model performance with a variety of chunk sizes on the IMDB dataset. The result is shown in Figure 8. Here the red curve denotes the performance of the partial reading baseline, and the other three curves denote the performance of our full-action model with chunk sizes of 8, 20 and 40, respectively. It is clear that our model outperforms the baseline significantly with different choices of chunk size.
In addition, we found that if the chunk size is too small, there are more decision steps inside each sentence, making the policy optimization more difficult. For instance, the performance with chunk size 8 seems worse than with the two larger chunk sizes. We believe this issue may be overcome by applying more advanced policy optimization algorithms such as proximal policy optimization (Schulman et al., 2017). On the other hand, if the chunk size is too large, there are fewer decision steps, making the model not flexible enough. Among all three choices, we found that a chunk size of 20 works best in our experiments. | 1. What is the main contribution of the paper regarding text classification and sequential decision-making?
2. How does the proposed method differ from prior works, specifically Yu et al. (2017), in terms of early stopping and rereading abilities?
3. What are the strengths and weaknesses of the proposed framework, particularly in terms of its ability to reduce the decision budget while maintaining accuracy?
4. How does the model perform on sentiment analysis tasks compared to state-of-the-art methods?
5. What is the significance of the unified framework proposed by the authors, and how does it relate to previous works in this area?
6. Can you provide more information or clarification on the following points:
* The decision to stop being taken before considering the label probability distribution
* The interest of rereading a word/sentence
* The difference between the early stopping model and partial reading model
* The impact of rereading on performance
* The number of times the algorithm chooses to reread
* Experiments comparing early-stopping VS early-stopping + skipping + rereading
* Comparison with Beermind system and associated publication
* Positioning of the paper's approach relative to the literature
* Comparison with state-of-the-art models on considered task | Review | Review
The authors propose a sequential algorithm that tackles text classification while introducing the ability to stop reading when the decision to make is confident enough. This sequential framework (reinforcement learning with a budget constraint) is applied to document classification tasks. The authors propose a unified framework enabling the recurrent network to reread or skip some parts of a document. Then, the authors describe the process to ensure both a good classification ability & a reduced budget.
Experiments are conducted on the IMDB dataset and the authors demonstrate the interest of selecting the relevant parts of the document to make the decision. They improve both the accuracy & the decision budget.
In the architecture, fig 1, it is strange to see that the decision to stop is taken before considering the label probability distribution. This choice is probably made to fit with classical sequential decision algorithms, assuming that the confidence level can be extracted from the latent representation... However, it should be discussed.
The interest of rereading a word/sentence is not clear to me: we simply choose to overweight the recent past wrt the more distant past. Can it be seen as a way to overcome a weakness in the information extraction?
At the end of page 4, the difference between the early stopping model & the partial reading model is not clear to me. How can the partial reading model be outperformed by the early-stopping approach? They operate on the same data, with the same computational cost (i.e. with the same algorithm?)
At the end of page 6, authors claim that their advantage over Yu et al. 2017 comes from their rereading & early stopping abilities:
- given the length of the reviews may the K-skip ability of Yu et al. 2017 be seen as an early stopping approach?
- are the authors confident about the implementation of the Yu et al. (2017) strategy?
- Regarding the re-reading ability: the experimental section is very poor and we wonder:
-- how is the performance impacted by rereading?
-- how many times does the algorithm choose to reread?
-- experiments on early-stopping VS early-stopping + skipping + rereading are interesting... We want to see the impact of the other aspects of the contribution.
On the sentiment analysis task, how does the model behave wrt the state of the art?
Given the chosen tasks, this work should be compared to the beermind system:
http://deepx.ucsd.edu/#/home/beermind
and the associated publication
http://arxiv.org/pdf/1511.03683.pdf
But the authors should also refer to previous work on their topic:
https://arxiv.org/pdf/1107.1322
The above mentioned reference is really close to their work.
This article describes an interesting approach but its main weakness resides in the lack of positioning wrt the literature and the lack of comparison with state-of-the-art models on the considered tasks. |
ICLR | Title
Fast and Accurate Text Classification: Skimming, Rereading and Early Stopping
Abstract
Recent advances in recurrent neural nets (RNNs) have shown much promise in many applications in natural language processing. For most of these tasks, such as sentiment analysis of customer reviews, a recurrent neural net model parses the entire review before forming a decision. We argue that reading the entire input is not always necessary in practice, since a lot of reviews are often easy to classify, i.e., a decision can be formed after reading some crucial sentences or words in the provided text. In this paper, we present an approach of fast reading for text classification. Inspired by several well-known human reading techniques, our approach implements an intelligent recurrent agent which evaluates the importance of the current snippet in order to decide whether to make a prediction, or to skip some texts, or to re-read part of the sentence. Our agent uses an RNN module to encode information from the past and the current tokens, and applies a policy module to form decisions. With an end-to-end training algorithm based on policy gradient, we train and test our agent on several text classification datasets and achieve both higher efficiency and better accuracy compared to previous approaches.
1 INTRODUCTION
Recurrent neural nets (RNNs), including GRU nets (Chung et al., 2014) and LSTM nets (Hochreiter & Schmidhuber, 1997), have been increasingly applied to many problems in natural language processing. Most of the problems can be divided into two categories: sequence to sequence (seq2seq) tasks (Sutskever et al., 2014) (e.g., language modeling (Bengio et al., 2003; Mikolov et al., 2010), machine translation (Cho et al., 2014; Bahdanau et al., 2014; Kalchbrenner & Blunsom, 2013), conversational/dialogue modeling (Serban et al., 2016), question answering (Hermann et al., 2015; Weston et al., 2015; Lee et al., 2016), and document summarization (Rush et al., 2015; Nallapati et al., 2016)); and the classification tasks (e.g., part-of-speech tagging (Santos & Zadrozny, 2014), chunking, named entity recognition (Collobert et al., 2011), sentiment analysis (Socher et al., 2011), and document classification (Kim, 2014; Sebastiani, 2002)). To solve these problems, models often need to read every token or word of the text from beginning to end, which is necessary for most seq2seq problems. However, for classification problems, we do not have to treat each individual word equally, since certain words or chunks are more relevant to the classification task at hand. For instance, for sentiment analysis it is sufficient to read the first half of a review like “this movie is amazing” or “it is the best I have ever seen,” to provide an answer even without reading the rest of the review. In other cases, we may want to skip or skim some text without carefully checking it. For example, sentences such as “it’s worth to try” are usually more important than irrelevant text such as “we got here while it’s still raining outside” or “I visited on Saturday.” On the other hand, sometimes, we want to re-read some sentences to figure out the actual hidden message of the text.
All of these techniques enable us to achieve fast and accurate reading. Similarly, we expect RNN models to intelligently determine the importance or the relevance of the current sentence in order to decide whether to make a prediction, whether to skip some texts, or whether to re-read the current sentence.
In this paper, we aim to augment existing RNN models by introducing efficient partial reading for classification, while maintaining a higher or comparable accuracy compared to reading the full text. To do so, we introduce a recurrent agent which uses an RNN module to encode information from the past and the current tokens, and applies a policy module to decide what token to read next (e.g., rereading the current token, reading the next one, or skipping the next few tokens) or whether the model should stop reading and form a decision. To encourage fast and accurate reading, we incorporate both classification accuracy and the computational cost as a reward function to score classification or other actions made by the agent during reading. We expect that our agent will be able to achieve fast reading for classification with both high computational efficiency and good classification performance. To train this model, we develop an end-to-end approach based on the policy gradient method which backpropagates the reward signal into both the policy module (also including the classification policy) and the recurrent encoder.
We evaluate our approach on four different sentiment analysis and document topic classification datasets. By comparing to the standard RNN models and a recent LSTM-skip model which implements a skip action (Yu et al., 2017), we find that our approach achieves both higher efficiency and better accuracy.
2 METHODS
2.1 MODEL OVERVIEW
Given an input sequence $x_{1:T}$ of length T, our model aims to predict a single label y for the entire sequence, such as the topic or the sentiment of a document. We develop a technique for skimming, re-reading, early stopping and prediction, with the goal of (i) skipping irrelevant information and reinforcing the important parts, and (ii) enabling fast and accurate text classification. Specifically, the model reads the current token/chunk $x_{i_t}$ at time step t, encodes the data $x_{i_t}$ and the previous information $h_{t-1}$ into a feature $h_t$, and then decides the next token to read by skimming/skipping, or stops to form a final prediction (see Figure 1). Such a model can be fully defined on top of an RNN structure and trained in an end-to-end fashion via back-propagation of a well-defined reward signal. Both skimming and re-reading actions can be defined similarly by first choosing a step size $k \in \{0, 1, 2, \dots, K\}$ and then setting $i_{t+1} = i_t + k$. When $k = 0$, the model rereads the current token; when $k = 1$, the model moves to the next token sequentially; when $k > 1$, the model skips the next $k - 1$ tokens. If the current action is to stop, or the next token to read is after the last token of the input sequence, the model stops and outputs a label. All of these actions are determined by a policy module Π which takes the recurrent feature $h_t$ as input and either outputs a stop signal together with a label, or generates a step size k and moves to the next token $x_{i_{t+1}}$ with $i_{t+1} = i_t + k$.
2.2 MODEL SPECIFICATION AND TRAINING
The design of the policy module Π plays a critical role in our framework. It should (i) read as much significant text as possible to ensure a confident classification output and (ii) be computationally efficient, e.g., avoid reading to the end of the text if sufficient information has already been obtained, and skip irrelevant or unimportant parts of the text. More specifically, at each step the policy module Π should decide whether the information collected is convincing enough to stop reading and make a prediction. Otherwise, it needs to evaluate the importance of the current semantic unit or token just read to decide which token to read in the next step. Formulating this process as a sequential decision process, at each time step t the policy module takes the output $h_t$ of an encoder, which summarizes the text read so far and the current token $x_{i_t}$, and outputs a probability distribution $\pi_t$ defined over actions. It is worth noting that, to save computation, the actions are determined only by the latent representation $h_t$. At each time step t, a sequence of actions is generated by first sampling a stopping decision in the form of a binary variable s from a Bernoulli distribution $\pi_S(\cdot|h_t)$. If $s = 1$, the model stops and draws a label $\hat{y}$ from a conditional multinomial distribution specified by a classifier $\pi_C(\cdot|h_t, s = 1)$; otherwise, the model draws a step size $k \in \{0, \dots, K\}$ from another conditional multinomial distribution $\pi_N(\cdot|h_t, s = 0)$ to jump to the token $x_{i_{t+1}}$ with $i_{t+1} = i_t + k$. Thus the probability of a sequence of actions that reads the text $X_{i_1:i_t} = \{x_{i_1}, x_{i_2}, \dots, x_{i_t}\}$, stops at time t, and outputs a label $\hat{y}$ can be written as the joint distribution
$$\Pi(X_{i_1:i_t}, \hat{y}) = \pi_S(s{=}1|h_t)\,\pi_C(\hat{y}|h_t, s{=}1) \prod_{j=1}^{t-1} \pi_S(s{=}0|h_j)\,\pi_N(k_j = i_{j+1} - i_j \mid h_j, s{=}0),$$
or simply as
$$\Pi(X_{i_1:i_t}, \hat{y}) = \pi_S(1|h_t)\,\pi_C(\hat{y}|h_t, 1) \prod_{j=1}^{t-1} \pi_S(0|h_j)\,\pi_N(k_j|h_j, 0). \qquad (1)$$
Hereby, $k_j = i_{j+1} - i_j$ is the step size sampled at time j, which ensures the model moves from token $x_{i_j}$ to $x_{i_{j+1}}$, and $h_j = \mathrm{RNN}(x_{i_j}, h_{j-1})$ is computed by the RNN module.
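To make the sampling procedure concrete, the sketch below implements a single decision of the policy module in plain NumPy. The weight matrices, the two-class output and the random encoder state are illustrative stand-ins (the paper uses learned MLP heads on top of the recurrent encoder), so this is a structural sketch rather than the actual trained model.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

H, K, C = 128, 3, 2                 # hidden size, max step size, number of classes
W_s = rng.normal(size=H)            # stopping head, Bernoulli pi_S
W_c = rng.normal(size=(C, H))       # classification head pi_C
W_n = rng.normal(size=(K + 1, H))   # step-size head pi_N over k in {0, ..., K}

def policy_step(h_t):
    """One decision of the policy module given the encoder state h_t."""
    if rng.random() < sigmoid(W_s @ h_t):            # s = 1: stop and classify
        y_hat = rng.choice(C, p=softmax(W_c @ h_t))
        return ("stop", int(y_hat))
    k = rng.choice(K + 1, p=softmax(W_n @ h_t))      # s = 0: choose a step size
    return ("move", int(k))                          # k = 0 rereads, k > 1 skips

print(policy_step(rng.normal(size=H)))
```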
To encourage fast and accurate text reading, we want to minimize the difference between the true and predicted labels while ensuring a low computational cost, which is measured by the length of the assessed text. Hence, as the reward for the last output action, we use $-\mathcal{L}(\hat{y}, y)$, where $\mathcal{L}$ is a loss function that measures the discrepancy between the predicted label $\hat{y}$ and the true label y. For the other actions we use a negative computational cost $-F$. Hereby, F is the normalized FLOP count used at each time step, which is approximately constant. Note that the FLOP count for the last step, $F_t$, differs, since it also includes the cost of the classification. Overall, the reward signal is defined as:
$$r_j = \begin{cases} -\mathcal{L}(\hat{y}, y) - \alpha F_t & \text{if } j = t \text{ is the final time step} \\ -\alpha F & \text{otherwise,} \end{cases}$$
where α is a trade-off parameter between accuracy and efficiency.
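The reward assignment can be written in a few lines; the sketch below uses made-up FLOP counts and an arbitrary α purely for illustration, with the classification loss contributing only at the final step.

```python
def step_rewards(num_steps, final_loss, flops_per_step, flops_last_step, alpha):
    """Rewards r_j: -alpha*F for intermediate steps, -(loss + alpha*F_t) at the final step."""
    rewards = [-alpha * flops_per_step] * (num_steps - 1)
    rewards.append(-final_loss - alpha * flops_last_step)
    return rewards

# Example with hypothetical normalized FLOP counts and trade-off parameter alpha.
print(step_rewards(num_steps=5, final_loss=0.3,
                   flops_per_step=1.0, flops_last_step=1.2, alpha=0.1))
```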
Assume that the entire policy $\Pi_\theta$ is parameterized by $\theta = \{\theta_{\pi_S}, \theta_{\pi_C}, \theta_{\pi_N}, \theta_{\mathrm{RNN}}\}$, where $\theta_{\mathrm{RNN}}$ subsumes the parameters of the encoder. Our final goal is to find the optimal θ which maximizes the expected return defined by:
$$J(\theta) = \mathbb{E}_{(x,y)\sim D} \sum_t \mathbb{E}_{(X_{i_1:i_t}, \hat{y})\sim\Pi} \sum_{j=1}^{t} \gamma^{j-1} r_j, \qquad (2)$$
where the first summation is used to integrate over all possible sequences with different lengths to ensure the normalization of the distribution Π, and $\gamma \in (0, 1)$ is a discount factor. It is not hard to
see that J is infeasible to compute by enumerating all possibilities in the summation and expectation. Fortunately, we can apply the policy gradient algorithm (Williams, 1992) to optimize this objective by estimating the gradient using Monte Carlo rollout samples, without doing expensive integration or enumeration. The REINFORCE policy gradient of the objective on data (x, y) can be derived as follows:
$$\hat{\nabla}_\theta J = \nabla_\theta\Big[\log \pi_S(1|h_t) + \log \pi_C(\hat{y}|h_t, 1) + \sum_{j=1}^{t-1}\big(\log \pi_S(0|h_j) + \log \pi_N(k_j|h_j, 0)\big)\Big] \sum_{j=1}^{t} \gamma^{j-1} r_j.$$
Considering that the length of the rollout sequence can differ significantly, the space for policy exploration is very large, thus making the variance of the gradient estimation very high. To remedy this, we also implement the advantage actor-critic algorithm (Konda & Tsitsiklis, 2000), which couples partial future return with each action and estimates a value function as the baseline for variance reduction. We find this procedure to provide better performance than the vanilla REINFORCE algorithm.
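As an illustration of the actor-critic variant, the sketch below computes discounted returns and advantages for one rollout; the log-probabilities, rewards and critic values are made-up numbers, and in practice the surrogate objective would be differentiated by an automatic-differentiation framework rather than by hand.

```python
import numpy as np

def discounted_returns(rewards, gamma=0.99):
    """R_j = sum_{i >= j} gamma^(i - j) * r_i, computed backwards over the rollout."""
    running, out = 0.0, []
    for r in reversed(rewards):
        running = r + gamma * running
        out.append(running)
    return np.array(out[::-1])

log_probs = np.array([-0.7, -1.1, -0.4, -0.9])   # log-probs of the sampled actions
rewards   = np.array([-0.1, -0.1, -0.1, -1.3])   # per-step rewards from the cost/loss signal
values    = np.array([-1.0, -0.9, -0.8, -1.2])   # critic baselines V(h_j)

advantages = discounted_returns(rewards) - values
# REINFORCE-with-baseline surrogate: its gradient w.r.t. the policy parameters is the
# variance-reduced policy gradient; the critic is trained to regress the returns.
surrogate = np.sum(log_probs * advantages)
print(advantages, surrogate)
```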
It is worth noting that this policy gradient method eventually is able to backpropagate both classification accuracy and computational cost signals to every module in our model, including the stopping/skipping distributions, the label distribution and even the recurrent encoder, thus providing an end-to-end solution to text classification problems.
Overall, our model aims to accelerate text classification while still achieving a high accuracy. The hyperparameter α controls the trade-off between accuracy and time cost. If we set α to a relatively large value, our model will more boldly skip tokens, stop reading and output a label. If α is small, our model will (re)read more tokens. In fact, the reward penalizing the computational cost can be seen as a Lagrange multiplier used to constrain the average cost of the computation. Therefore, there is a mapping between α and the amortized computational budget allocated to each sample. Given a budget, we can tune α to obtain the model with the best classification accuracy whose amortized cost stays within the budget. This is desirable for many cost-sensitive applications, such as those on mobile devices.
3 EXPERIMENTS
In this section, we illustrate our approach using two representative text classification tasks: sentiment analysis and topic classification. To perform a solid demonstration on re-reading and skimming, we conduct experiments on three different syntactic levels. We will first introduce the results on the word level before discussing character and sentence level performance.
General Experimental Settings: In our experiments, we use the IMDB and Yelp dataset for sentiment analysis, and the AG news and DBpedia for topic classification. To evaluate each classifier, we use predictive accuracy as the performance metric and average per-data floating point operations (FLOPs) as the computational cost metric. We also take the FLOPs of the policy module into account, even though they are much lower than the classifier. The energy cost for the policy module is about 1 to 2 million FLOPs per sentence, which is much smaller than the total FLOPs needed for the recurrent module and the classifier.
Hyper-parameters: We use the Adam (Kingma & Ba, 2014) optimizer with a learning rate of 0.001 in all experiments. For the recurrent network structure, we use a convolution layer with 128 kernels of size 5 and stack it as input to an LSTM with a hidden size of 128. For the πS and πN policy networks, we use a three-hidden-layer MLP with 128 hidden units per layer. For the πC and value networks, we use a single-layer MLP with 128 hidden units. For all experiments, the maximal step size K is set to 3.
3.1 RESULTS
We first evaluate our method on the IMDB movie dataset (Maas et al., 2011). We randomly split it into 20,000 training, 5,000 validation and 25,000 test samples. The average length in the dataset is 240 words. We adopt the same preprocessing method as Yu et al. (2017), either padding or
truncating each sentence to 400 words. We use a chunk-size of 20 words, i.e., at each step, the classifier reads 20 words. When the action is rereading or skipping, it rereads or skips several chunks of 20 words.
To demonstrate the effectiveness of re-reading and skimming, we design three baseline models: (1) The early stopping model, which has only a stopping module to decide when to terminate reading the paragraph; its classifier and policy module are jointly trained on the entire training corpus. (2) The partial reading model, which is a classifier with the same architecture trained on the truncated texts selected by the stopping model (the same stopping model as in the early stopping model). Thus, although the partial reading model has the same computational budget as the early stopping model, the prediction performance may differ. (3) The whole reading model, which uses the whole text of each training example as training data.
Figure 2 shows our comparison on the IMDB dataset, where the blue line indicates our proposed model while the green and red lines denote the early-stopping model and the partial reading model, respectively. The x-axis denotes the FLOP count (in millions) and the y-axis indicates the accuracy. Here the FLOP count is determined by the choice of the hyper-parameter α. As α increases, we obtain a curve indicating the trade-off between accuracy and energy cost. From this plot, we observe that both the blue and green lines outperform the red line significantly. In addition, rereading and skipping further improve the performance over the model with only the early stopping mechanism. This observation implies that training the classifier jointly with the policy model improves both computational efficiency and accuracy.
Besides the word-level evaluation, we also conduct experiments on a smaller-scale syntactic unit: the character level. In detail, we perform topic classification on two large-scale text datasets (Zhang et al., 2015): the AG news dataset contains four topics, with 120,000 training, 10,000 validation and 7,600 test news articles; the DBpedia dataset contains 14 topics, with 560,000 training, 10,000 validation and 70,000 test entities. The results are summarized in Figure 3. We observe that our proposed model outperforms the partial reading baseline by a significant margin.
Furthermore, we evaluate our proposed model on a larger syntactic level: the sentence level. We use the Yelp review sentiment analysis dataset for this experiment. The Yelp dataset includes 500,000 training reviews, 20,000 validation reviews and 40,000 test reviews. To evaluate on this larger semantic unit, we treat each sentence as a token, which is read sequentially by the RNN encoder. The performance is provided in Figure 4. We observe that our proposed model achieves superior performance while being significantly faster.
We summarize the obtained performance improvements in Table 1. On four different datasets and for three different syntactic levels we observe significant speedups when using the proposed techniques for skimming, rereading and early stopping, while maintaining the accuracy. A partial reading model
which has the same computational cost achieves results that are less accurate, which illustrates the benefits of a flexible model. In addition, our model achieves about 0.5-1 percent accuracy improvement compared to the full-reading model.
Finally, we compare our model to a recently published baseline (Yu et al., 2017), which only implements skipping actions with k ∈ {1, 2, ...,K}, without rereading, and simply stops early when k = 0. We implemented their algorithm for a fair comparison. Results in Table 2 show that our model is much more efficient than their LSTM-skip model at the same level of accuracy, which is marginally better than the full-reading baseline. These results demonstrate that our proposed rereading and skimming mechanisms are effective on a variety of text classification tasks, including sentiment analysis and topic classification, and on different levels of semantics: character, word, or even sentence level. With the help of these mechanisms, we achieve both higher accuracy and faster speed.
3.2 ABLATION ANALYSIS
In this section, we conduct an ablation study to demonstrate the effectiveness of each action mechanism in our method: skimming, rereading and early-stopping. The experiment is performed
on the word-level IMDB dataset, and the result is presented in Figure 5. The blue curve denotes the performance of the model with all actions (skimming, rereading and early-stopping) enabled. The green curve denotes the performance of the model with only the early-stopping action. Between these two, the red curve represents a model with rereading and early-stopping actions, and the yellow curve represents a model with skimming and early-stopping actions. Note that the performance of the green curve is the worst, indicating that the rereading and skimming mechanisms are necessary. Furthermore, the blue curve is better than all the others, indicating that combining skimming and rereading can further improve the performance of the policy model.
3.3 CASE STUDIES
To obtain a more detailed understanding of our model, we first show the actions taken by our model on a sentiment analysis example (Figure 6), on which the LSTM full-reading model failed to give the right classification. We show the degree of positiveness given by the LSTM model, encoded in color from green (positive) to brown (negative).
The paragraph starts with a strongly positive sentence about a dinner, followed by a few sentences giving a confusing description of the dinner. Many trivial or even negative words show up in this explanation. As a result, the output of the full-reading model gradually changes from positive to negative and finally results in a negative signal. Importantly, after our model reads the first two sentences, the policy module decides that it is confident enough to make a decision, yielding the correct answer.
Next we illustrate how the rereading and skimming actions are useful for identifying important information in the text. As shown in Figure 7, our model first reads a key word “stake” and is confident that the document is about money. Then it skims a few irrelevant tokens and reads about “buying stake in Biotechnology” in the following two tokens. The phrase “5 percent stake” shows up twice. Our model considers it to be important, so it rereads this token. At this point, the model basically knows this text is about business with reasonable confidence. Then it skips to read about a “collaboration deal” and stops to make a confident prediction.
4 RELATED WORK
The idea of improving time efficiency with adaptive computation has been studied extensively over the years (Weiss & Taskar, 2013). For example, the adaptive computation time algorithm (Graves, 2016) for recurrent neural networks uses an early stopping action to save
computational cost. Spatially adaptive computation time (Figurnov et al., 2016) was proposed for image classification and object detection tasks. Compared to their work, our model is more expressive because it exploits the combinatorial complexity of its actions.
Attention mechanisms applied to text data are also related. ReasonNet (Shen et al., 2017) trains a policy module to determine whether to stop before accessing the full text on question-answering tasks. Similarly, the model of Dulac-Arnold et al. (2011) performs early stopping on text classification tasks. Compared with these related works, our proposed model’s skimming and rereading mechanisms are innovative. In addition, Choi et al. (2016) and Lei et al. (2016) propose to select the relevant sentences which are critical for question answering and sentiment analysis, respectively. Their methods use prediction accuracy as the reward signal to train the policy module. In our work, however, the policy module is trained considering both accuracy and computational cost explicitly.
Other ways to reduce the inference computational cost for new examples have been considered. Bengio et al. (2015) proposes a scheme to selectively activate parts of the network. Bolukbasi et al. (2017) presents two schemes to adaptively utilize the network during inference: Given each data point, they first select a network and then select some components of that network.
One closely related work is Yu et al. (2017). The authors train their policy network end-to-end with reinforcement learning. In contrast to their work, our model implements a human-like rereading mechanism and a separate early-stopping mechanism, leading to further improved efficiency and accuracy. Furthermore, we rely on few hyper-parameters and use a simple reward structure. Finally, we achieve better performance through a reward design that incorporates the negative energy cost explicitly and by implementing a value network to reduce the variance.
5 CONCLUSIONS
We develop an end-to-end trainable approach for skimming, rereading and early stopping applicable to classification tasks. By mimicking human fast reading, we introduce a policy module to decide what token to read next (e.g., rereading the current token, reading the next one, or skipping the next few tokens) or whether the model should stop reading and form a decision. To encourage fast and accurate reading, we incorporate both classification accuracy and the computational cost as a reward function to score classification or other actions made by the agent during reading. An end-to-end training algorithm based on the policy gradient method backpropagates the reward signal into both the policy module (also including the classification policy) and the recurrent encoder. We demonstrate the efficacy of the proposed approach on four different datasets, with improvements in both accuracy and computational performance.
6 APPENDIX
6.A COMPARISON OF DIFFERENT CHUNK SIZE
To illustrate that our model’s performance is robust to the choice of chunk size, we investigate the model performance with a variety of chunk sizes on the IMDB dataset. The result is shown in Figure 8. Here the red curve denotes the performance of the partial reading baseline, and the other three curves denote the performance of our full-action model with chunk sizes of 8, 20 and 40, respectively. It is clear that our model outperforms the baseline significantly with different choices of chunk size.
In addition, we found that if the chunk size is too small, there are more decision steps inside each sentence, making the policy optimization more difficult. For instance, the performance with chunk size 8 seems worse than with the two larger chunk sizes. We believe this issue may be overcome by applying more advanced policy optimization algorithms such as proximal policy optimization (Schulman et al., 2017). On the other hand, if the chunk size is too large, there are fewer decision steps, making the model not flexible enough. Among all three choices, we found that a chunk size of 20 works best in our experiments. | 1. What is the main contribution of the paper in text classification?
2. How does the proposed model differ from previous works in terms of its mechanism?
3. Can you explain how the policy module works and what are its rewards based on?
4. What are the advantages of training the entire architecture end-to-end?
5. How does the approach perform compared to a baseline that reads the entire text?
6. Are there any limitations or potential drawbacks of the proposed method?
7. Can this approach be applied to other tasks beyond text classification? | Review | Review
The paper presents a model for fast reading for text classification with mechanisms that allow the model to reread, skip words, or classify early before reading the entire review. The model contains a policy module that makes decisions on whether to reread, skim or stop, and is rewarded based on both classification accuracy and computational cost. The entire architecture is trained end-to-end with backpropagation, Monte Carlo rollouts and a baseline for variance reduction.
The results show that the architecture is able to classify accurately on all syntactic levels, faster than a baseline that reads the entire text. The approach is simple and seems to work well and could be applied to other tasks where inference time is important. |
ICLR | Title
Learning Neural PDE Solvers with Convergence Guarantees
Abstract
Partial differential equations (PDEs) are widely used across the physical and computational sciences. Decades of research and engineering went into designing fast iterative solution methods. Existing solvers are general purpose, but may be sub-optimal for specific classes of problems. In contrast to existing hand-crafted solutions, we propose an approach to learn a fast iterative solver tailored to a specific domain. We achieve this goal by learning to modify the updates of an existing solver using a deep neural network. Crucially, our approach is proven to preserve strong correctness and convergence guarantees. After training on a single geometry, our model generalizes to a wide variety of geometries and boundary conditions, and achieves 2-3 times speedup compared to state-of-the-art solvers.
1 INTRODUCTION
Partial differential equations (PDEs) are ubiquitous tools for modeling physical phenomena, such as heat, electrostatics, and quantum mechanics. Traditionally, PDEs are solved with hand-crafted approaches that iteratively update and improve a candidate solution until convergence. Decades of research and engineering went into designing update rules with fast convergence properties.
The performance of existing solvers varies greatly across application domains, with no method uniformly dominating the others. Generic solvers are typically effective, but could be far from optimal for specific domains. In addition, high-performing update rules could be too complex to design by hand. In recent years, we have seen that for many classical problems, complex updates learned from data or experience can outperform hand-crafted ones. For example, for Markov chain Monte Carlo, learned proposal distributions lead to orders of magnitude speedups compared to hand-designed ones (Song et al., 2017; Levy et al., 2017). Other domains that benefited significantly include learned optimizers (Andrychowicz et al., 2016) and learned data structures (Kraska et al., 2018). Our goal is to bring similar benefits to PDE solvers.
Hand-designed solvers are relatively simple to analyze and are guaranteed to be correct in a large class of problems. The main challenge is how to provide the same guarantees with a potentially much more complex learned solver. To achieve this goal, we build our learned iterator on top of an existing standard iterative solver to inherit its desirable properties. The iterative solver updates the solution at each step, and we learn a parameterized function to modify this update. This function class is chosen so that for any choice of parameters, the fixed point of the original iterator is preserved. This guarantees correctness, and training can be performed to enhance convergence speed. Because of this design, we only train on a single problem instance; our model correctly generalizes to a variety of different geometries and boundary conditions with no observable loss of performance. As a result, our approach provides: (i) theoretical guarantees of convergence to the correct stationary solution, (ii) faster convergence than existing solvers, and (iii) generalizes to geometries and boundary conditions very different from the ones seen at training time. This is in stark contrast with
existing deep learning approaches for PDE solving (Tang et al., 2017; Farimani et al., 2017) that are limited to specific geometries and boundary conditions, and offer no guarantee of correctness.
Our approach applies to any PDE with existing linear iterative solvers. As an example application, we solve the 2D Poisson equations. Our method achieves a 2-3× speedup on number of multiplyadd operations when compared to standard iterative solvers, even on domains that are significantly different from our training set. Moreover, compared with state-of-the-art solvers implemented in FEniCS (Logg et al., 2012), our method achieves faster performance in terms of wall clock CPU time. Our method is also simple as opposed to deeply optimized solvers such as our baseline in FEniCS (minimal residual method + algebraic multigrid preconditioner). Finally, since we utilize standard convolutional networks which can be easily parallelized on GPU, our approach leads to an additional 30× speedup when run on GPU.
2 BACKGROUND
In this section, we give a brief introduction of linear PDEs and iterative solvers. We refer readers to LeVeque (2007) for a thorough review.
2.1 LINEAR PDES
Linear PDE solvers find functions that satisfy a (possibly infinite) set of linear differential equations. More formally, let F = {u : $\mathbb{R}^k \to \mathbb{R}$} be the space of candidate functions, and A : F → F be a linear operator; the goal is to find a function u ∈ F that satisfies a linear equation Au = f, where f is another function $\mathbb{R}^k \to \mathbb{R}$ given by our problem. Many PDEs fall into this framework. For example, heat diffusion satisfies $\nabla^2 u = f$ (Poisson equation), where $\nabla^2 = \frac{\partial^2}{\partial x_1^2} + \cdots + \frac{\partial^2}{\partial x_k^2}$ is the linear Laplace operator; u maps spatial coordinates (e.g. in $\mathbb{R}^3$) to its temperature, and f maps spatial coordinates to the heat in/out flow. Solving this equation lets us know the stationary temperature given a specified heat in/out flow.
Usually the equation Au = f does not uniquely determine u. For example, u = constant for any constant is a solution to the equation ∇2u = 0. To ensure a unique solution we provide additional equations, called “boundary conditions”. Several boundary conditions arise very naturally in physical problems. A very common one is the Dirichlet boundary condition, where we pick some subset G ⊂ Rk and fix the values of the function on G to some fixed value b,
$$u(x) = b(x) \quad \text{for all } x \in G,$$
where the function b is usually clear from the underlying physical problem. As in previous literature, we refer to G as the geometry of the problem, and b as the boundary value. We refer to the pair (G, b) as the boundary condition. In this paper, we only consider linear PDEs and boundary conditions that have unique solutions.
2.2 FINITE DIFFERENCE METHOD
Most real-world PDEs do not admit an analytic solution and must be solved numerically. The first step is to discretize the solution space F from $\mathbb{R}^k \to \mathbb{R}$ into $D^k \to \mathbb{R}$, where D is a discrete subset of $\mathbb{R}$. When the space is compact, it is discretized into an $n \times n \times \cdots \times n$ ($k$ times) uniform Cartesian grid with mesh width h. Any function in F is approximated by its value on the $n^k$ grid points. We denote the discretized function as a vector u in $\mathbb{R}^{n^k}$. In this paper, we focus on 2D problems ($k = 2$), but the strategy applies to any dimension.
We discretize all three terms in the equation Au = f and the boundary condition (G, b). The PDE solution u is discretized such that $u_{i,j} = u(x_i, y_j)$ corresponds to the value of u at grid point $(x_i, y_j)$. We can similarly discretize f and b. In linear PDEs, the linear operator A is a linear combination of partial derivative operators; for example, for the Poisson equation $A = \nabla^2 = \sum_i \partial^2/\partial x_i^2$. Therefore we can first discretize each partial derivative, then linearly combine the discretized partial derivatives to obtain a discretized A.

Finite difference is a method that approximates partial derivatives in a discretized space, and as the mesh width $h \to 0$, the approximation approaches the true derivative. For example, $\frac{\partial^2}{\partial x^2} u$ can be discretized in 2D as $\frac{\partial^2}{\partial x^2} u \approx \frac{1}{h^2}(u_{i-1,j} - 2u_{i,j} + u_{i+1,j})$; the Laplace operator in 2D can be correspondingly approximated as:

$$\nabla^2 u = \frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2} \approx \frac{1}{h^2}\left(u_{i-1,j} + u_{i+1,j} + u_{i,j-1} + u_{i,j+1} - 4u_{i,j}\right) \qquad (1)$$
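A minimal NumPy sketch of the 5-point stencil in Eq. (1) is given below; the grid size and the quadratic test function are arbitrary choices used only to check the discretization (the stencil is exact for quadratics).

```python
import numpy as np

def laplacian_5pt(u, h):
    """Finite-difference Laplacian of Eq. (1), evaluated on interior grid points."""
    lap = np.zeros_like(u)
    lap[1:-1, 1:-1] = (u[:-2, 1:-1] + u[2:, 1:-1] +
                       u[1:-1, :-2] + u[1:-1, 2:] -
                       4.0 * u[1:-1, 1:-1]) / h**2
    return lap

# Sanity check on u(x, y) = x^2 + y^2, whose true Laplacian is 4 everywhere.
n, h = 64, 1.0 / 63
x = np.linspace(0.0, 1.0, n)
X, Y = np.meshgrid(x, x, indexing="ij")
u = X**2 + Y**2
print(np.allclose(laplacian_5pt(u, h)[1:-1, 1:-1], 4.0))  # True
```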
After discretization, we can rewrite Au = f as a linear matrix equation
$$Au = f \qquad (2)$$
where $u, f \in \mathbb{R}^{n^2}$, and A is a matrix in $\mathbb{R}^{n^2 \times n^2}$ (these are $n^2$-dimensional because we focus on 2D problems). In many PDEs such as the Poisson and Helmholtz equations, A is sparse, banded, and symmetric.
2.3 BOUNDARY CONDITION
We also need to include the boundary condition u(x) = b(x) for all x ∈ G. If a discretized point $(x_i, y_j)$ belongs to G, we need to fix the value of $u_{i,j}$ to $b_{i,j}$. To achieve this, we first define $e \in \{0, 1\}^{n^2}$ to be a vector of 0's and 1's, in which 0 indicates that the corresponding point belongs to G. Then, we define a “reset” matrix G = diag(e), a diagonal matrix $\mathbb{R}^{n^2} \to \mathbb{R}^{n^2}$, such that
$$(Gu)_{i,j} = \begin{cases} u_{i,j} & (x_i, y_j) \notin G \\ 0 & (x_i, y_j) \in G \end{cases} \qquad (3)$$
Intuitively G ”masks” every point in G to 0. Similarly, I − G can mask every point not in G to 0. Note that the boundary values are fixed and do not need to satisfy Au = f . Thus, the solution u to the PDE under geometry G should satisfy:
G(Au) = Gf
(I −G)u = (I −G)b (4)
The first equation ensures that the interior points (points not in G) satisfy Au = f , and the second ensures that the boundary condition is satisfied.
To summarize, (A, G, f, b, n) is our PDE problem, and we first discretize the problem on an $n \times n$ grid to obtain (A, G, f, b, n). Our objective is to obtain a solution u that satisfies Eq. (4), i.e. Au = f for the interior points and the boundary condition $u_{i,j} = b_{i,j}$ for all $(x_i, y_j) \in G$.
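On a grid, the diagonal “reset” matrix G of Eq. (3) never needs to be formed explicitly: it acts as an elementwise mask. The sketch below illustrates this with a hypothetical geometry (the outer square boundary) and checks the boundary part of Eq. (4).

```python
import numpy as np

n = 8
e = np.ones((n, n))                                    # e = 1 off the geometry, 0 on it
e[0, :] = e[-1, :] = e[:, 0] = e[:, -1] = 0.0          # here G is the outer boundary
b = np.zeros((n, n)); b[0, :] = 1.0                    # boundary values

def reset(u, e, b):
    """The reset operation G u + (I - G) b, with G = diag(e) acting pointwise."""
    return e * u + (1.0 - e) * b

u = reset(np.random.default_rng(0).normal(size=(n, n)), e, b)
boundary = ~e.astype(bool)
print(np.allclose(u[boundary], b[boundary]))           # True: (I - G)u = (I - G)b holds
```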
2.4 ITERATIVE SOLVERS
A linear iterative solver is defined as a function that inputs the current proposed solution $u \in \mathbb{R}^{n^2}$ and outputs an updated solution u′. Formally it is a function $\Psi : \mathbb{R}^{n^2} \to \mathbb{R}^{n^2}$ that can be expressed as
$$u' = \Psi(u) = Tu + c \qquad (5)$$
where T is a constant update matrix and c is a constant vector. For each iterator Ψ there may be special vectors $u^* \in \mathbb{R}^{n^2}$ that satisfy $u^* = \Psi(u^*)$. These vectors are called fixed points.
The iterative solver Ψ should map any initial $u_0 \in \mathbb{R}^{n^2}$ to a correct solution of the PDE problem. This is formalized in the following definition. Definition 1 (Valid Iterator). An iterator Ψ is valid w.r.t. a PDE problem (A, G, f, b, n) if it satisfies:
a) Convergence: There is a unique fixed point $u^*$ such that Ψ converges to $u^*$ from any initialization: $\forall u_0 \in \mathbb{R}^{n^2}$, $\lim_{k\to\infty} \Psi^k(u_0) = u^*$.
b) Fixed Point: The fixed point $u^*$ is the solution to the linear system Au = f under the boundary condition (G, b).
Convergence: Condition (a) in Definition 1 is satisfied if the matrix T is convergent, i.e. $T^k \to 0$ as $k \to \infty$. It has been proven that T is convergent if and only if the spectral radius ρ(T) < 1 (Olver, 2008):
Theorem 1. (Olver, 2008, Prop 7.25) For a linear iterator Ψ(u) = Tu + c, Ψ converges to a unique stable fixed point from any initialization if and only if the spectral radius ρ(T ) < 1.
Proof. See Appendix A.
It is important to note that Condition (a) only depends on T and not the constant c.
Fixed Point: Condition (b) in Definition 1 contains two requirements: satisfying Au = f, and the boundary condition (G, b). To satisfy Au = f, a standard approach is to design Ψ by matrix splitting: split the matrix A into A = M − N and rewrite Au = f as Mu = Nu + f (LeVeque, 2007). This naturally suggests the iterative update
$$u' = M^{-1}Nu + M^{-1}f \qquad (6)$$
Because Eq. (6) is a rewrite of Au = f, stationary points $u^*$ of Eq. (6) satisfy $Au^* = f$. Clearly, the choices of M and N are arbitrary but crucial. From Theorem 1, we must choose M such that the update converges. In addition, $M^{-1}$ must be easy to compute (e.g., diagonal).
Finally, we also need to satisfy the boundary condition (I − G)u = (I − G)b in Eq. (4). After each update in Eq. (6), the boundary condition could be violated. We use the “reset” operator defined in Eq. (3) to reset the values of $u_{i,j}$ to $b_{i,j}$ via $Gu + (I - G)b$. The final update rule becomes
$$u' = G(M^{-1}Nu + M^{-1}f) + (I - G)b \qquad (7)$$
Despite the added complexity, it is still a linear update rule of the form u′ = Tu + c in Eq. (5): we have $T = GM^{-1}N$ and $c = GM^{-1}f + (I - G)b$. As long as M is a full rank diagonal matrix, fixed points of this equation satisfy Eq. (4). In other words, such a fixed point is a solution of the PDE problem (A, G, f, b, n).
Proposition 1. If M is a full rank diagonal matrix, and $u^* \in \mathbb{R}^{n^2}$ satisfies Eq. (7), then $u^*$ satisfies Eq. (4).
2.4.1 JACOBI METHOD
A simple but effective way to choose M is the Jacobi method, which sets M = I (a full rank diagonal matrix, as required by Proposition 1). For Poisson equations, this update rule has the following form,
$$\hat{u}_{i,j} = \frac{1}{4}\left(u_{i-1,j} + u_{i+1,j} + u_{i,j-1} + u_{i,j+1}\right) + \frac{h^2}{4} f_{i,j} \qquad (8)$$
$$u' = G\hat{u} + (I - G)b \qquad (9)$$
For Poisson equations and any geometry G, the update matrix T = G(I − A) has spectral radius ρ(T ) < 1 (see Appendix B). In addition, by Proposition 1 any fixed point of the update rule Eq.(8,9) must satisfy Eq. (4). Both convergence and fixed point conditions from Definition 1 are satisfied: Jacobi iterator Eq.(8,9) is valid for any Poisson PDE problem.
In addition, each step of the Jacobi update can be implemented as a neural network layer: Eq. (8) can be efficiently implemented by convolving u with the kernel
$$\begin{pmatrix} 0 & 1/4 & 0 \\ 1/4 & 0 & 1/4 \\ 0 & 1/4 & 0 \end{pmatrix}$$
and adding $h^2 f / 4$. The “reset” step in Eq. (9) can also be implemented by multiplying u with G and adding the boundary values $(I - G)b$.
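Put together, one Jacobi step is a fixed 3x3 convolution followed by the reset; the sketch below runs it on a small Laplace problem (f = 0) with a hypothetical geometry (the outer boundary, with the top edge held at 1). The grid size and iteration count are illustrative.

```python
import numpy as np
from scipy.signal import convolve2d

kernel = np.array([[0.0, 0.25, 0.0],
                   [0.25, 0.0, 0.25],
                   [0.0, 0.25, 0.0]])

def jacobi_step(u, f, e, b, h):
    """One Jacobi iteration, Eqs. (8)-(9): convolve, add h^2 f / 4, then reset."""
    u_hat = convolve2d(u, kernel, mode="same", boundary="fill", fillvalue=0.0)
    u_hat = u_hat + (h**2) * f / 4.0
    return e * u_hat + (1.0 - e) * b

n, h = 8, 1.0 / 7
e = np.ones((n, n)); e[0, :] = e[-1, :] = e[:, 0] = e[:, -1] = 0.0
b = np.zeros((n, n)); b[0, :] = 1.0        # top edge fixed at 1, rest of the boundary at 0
f = np.zeros((n, n))
u = np.zeros((n, n))
for _ in range(500):
    u = jacobi_step(u, f, e, b, h)
print(u.round(3))                          # the discrete harmonic interpolant of b
```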
2.4.2 MULTIGRID METHOD
The Jacobi method has very slow convergence rate (LeVeque, 2007). This is evident from the update rule, where the value at each grid point is only influenced by its immediate neighbors. To propagate information from one grid point to another, we need as many iterations as their distance on the grid. The key insight of the Multigrid method is to perform Jacobi updates on a downsampled (coarser) grid and then upsample the results. A common structure is the V-cycle (Briggs et al., 2000). In each
V-cycle, there are k downsampling layers followed by k upsampling layers, and multiple Jacobi updates are performed at each resolution. The downsampling and upsampling operations are also called restriction and prolongation, and are often implemented using weighted restriction and linear interpolation, respectively. The advantage of the multigrid method is clear: on a grid downsampled by a factor of 2, with mesh width 2h, information propagation is twice as fast, and each iteration requires only 1/4 of the operations compared to the original grid with mesh width h.
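The sketch below shows the structure of a single two-grid correction (the innermost level of a V-cycle) for a zero-boundary Poisson problem, with damped Jacobi as the smoother, full-weighting restriction and bilinear prolongation. The grid size, damping factor and sweep counts are arbitrary illustrative choices, not the configuration used in the paper.

```python
import numpy as np
from scipy.signal import convolve2d

def apply_A(u, h):
    """A u with A = -(discrete Laplacian) and zero Dirichlet boundary."""
    Au = np.zeros_like(u)
    Au[1:-1, 1:-1] = (4 * u[1:-1, 1:-1] - u[:-2, 1:-1] - u[2:, 1:-1]
                      - u[1:-1, :-2] - u[1:-1, 2:]) / h**2
    return Au

def smooth(u, f, h, sweeps, omega=0.8):
    """Damped Jacobi sweeps (the smoother used at every multigrid level)."""
    for _ in range(sweeps):
        u = u + omega * (h**2 / 4.0) * (f - apply_A(u, h))
        u[0, :] = u[-1, :] = u[:, 0] = u[:, -1] = 0.0
    return u

FW = np.array([[1., 2., 1.], [2., 4., 2.], [1., 2., 1.]]) / 16.0

def restrict(r):
    """Full-weighting restriction from a (2m+1)^2 grid to an (m+1)^2 grid."""
    rc = convolve2d(r, FW, mode="same")[::2, ::2]
    rc[0, :] = rc[-1, :] = rc[:, 0] = rc[:, -1] = 0.0
    return rc

def prolong(ec):
    """Bilinear interpolation back to the fine grid."""
    m = ec.shape[0] - 1
    e = np.zeros((2 * m + 1, 2 * m + 1))
    e[::2, ::2] = ec
    e[1::2, ::2] = 0.5 * (ec[:-1, :] + ec[1:, :])
    e[::2, 1::2] = 0.5 * (ec[:, :-1] + ec[:, 1:])
    e[1::2, 1::2] = 0.25 * (ec[:-1, :-1] + ec[1:, :-1] + ec[:-1, 1:] + ec[1:, 1:])
    return e

def two_grid(u, f, h):
    """Pre-smooth, correct on the coarse grid, post-smooth."""
    u = smooth(u, f, h, sweeps=3)
    rc = restrict(f - apply_A(u, h))
    ec = smooth(np.zeros_like(rc), rc, 2 * h, sweeps=50)   # approximate coarse solve
    u = u + prolong(ec)
    return smooth(u, f, h, sweeps=3)

n, h = 17, 1.0 / 16
f = np.zeros((n, n)); f[n // 2, n // 2] = 1.0              # a point source
u = np.zeros((n, n))
for it in range(5):
    u = two_grid(u, f, h)
    print(it, np.linalg.norm((f - apply_A(u, h))[1:-1, 1:-1]))  # residual decreases
```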
3 LEARNING FAST AND PROVABLY CORRECT ITERATIVE PDE SOLVERS
A PDE problem consists of five components (A,G,f, b, n). One is often interested in solving the same PDE class A under varying f, discretization n, and boundary conditions (G, b). For example, solving the Poisson equation under different boundary conditions (e.g., corresponding to different mechanical systems governed by the same physics). In this paper, we fix A but vary G,f, b, n, and learn an iterator that solves a class of PDE problems governed by the same A. For a discretized PDE problem (A,G, f, b, n) and given a standard (hand designed) iterative solver Ψ, our goal is to improve upon Ψ and learn a solver Φ that has (1) correct fixed point and (2) fast convergence (on average) on the class of problems of interest. We will proceed to parameterize a family of Φ that satisfies (1) by design, and achieve (2) by optimization.
In practice, we can only train Φ on a small number of problems (A, fi, Gi, bi, ni). To be useful, Φ must deliver good performance on every choice of G, f, b, and different grid sizes n. We show, theoretically and empirically, that our iterator family has good generalization properties: even if we train on a single problem (A,G, f, b, n), the iterator performs well on very different choices of G, f, b, and grid size n. For example, we train our iterator on a 64× 64 square domain, and test on a 256× 256 L-shaped domain (see Figure 1).
3.1 FORMULATION
For a fixed PDE problem class A, let Ψ be a standard linear iterative solver known to be valid. We will use the more formal notation Ψ(u; G, f, b, n), as Ψ is a function of u but also depends on G, f, b, n. Our assumption is that for any choice of G, f, b, n (but fixed PDE class A), Ψ(u; G, f, b, n) is valid. We previously showed that the Jacobi iterator Eqs. (8)-(9) has this property for the Poisson PDE class.
We design our new family of iterators $\Phi_H : \mathbb{R}^{n^2} \to \mathbb{R}^{n^2}$ as
$$w = \Psi(u; G, f, b, n) - u, \qquad \Phi_H(u; G, f, b, n) = \Psi(u; G, f, b, n) + GHw \qquad (10)$$
where H is a learned linear operator (it satisfies H0 = 0). The term GHw can be interpreted as a correction term to Ψ(u;G, f, b, n). When there is no confusion, we neglect the dependence on G, f, b, n and denote as Ψ(u) and ΦH(u).
$\Phi_H$ should have similar computational complexity to Ψ. Therefore, we choose H to be a convolutional operator, which can be parameterized by a deep linear convolutional network. We will discuss the parameterization of H in detail in Section 3.4; we first prove some parameterization-independent properties.
The correct PDE solution is a fixed point of ΦH by the following lemma: Lemma 1. For any PDE problem (A,G, f, b, n) and choice of H , if u∗ is a fixed point of Ψ, it is a fixed point of ΦH in Eq. (10).
Proof. Based on the iterative rule in Eq. (10), if u∗ satisfies Ψ(u∗) = u∗ then w = Ψ(u∗)−u∗ = 0. Therefore, ΦH(u∗) = Ψ(u∗) +GH0 = u∗.
Moreover, the space of $\Phi_H$ subsumes the standard solver Ψ. If H = 0, then $\Phi_H = \Psi$. Furthermore, writing Ψ(u) = Tu + c, if H = T then, since GT = T (see Eq. (7)),
$$\Phi_H(u) = \Psi(u) + GT(\Psi(u) - u) = T\Psi(u) + c = \Psi^2(u) \qquad (11)$$
which is equal to two iterations of Ψ. Computing Ψ requires one convolution T, while computing $\Phi_H$ requires two convolutions: T and H. Therefore, if we choose H = T, then $\Phi_H$ computes two iterations of Ψ with two convolutions: it is at least as efficient as the standard solver Ψ.
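As a concrete sketch, the learned iterator of Eq. (10) can be written directly on top of the Jacobi step, with H realized as a convolution; here a single 3x3 kernel stands in for the deep linear convolutional network introduced in Section 3.4, and the geometry and grid size are hypothetical.

```python
import numpy as np
from scipy.signal import convolve2d

jacobi_kernel = np.array([[0.0, 0.25, 0.0],
                          [0.25, 0.0, 0.25],
                          [0.0, 0.25, 0.0]])

def psi(u, f, e, b, h):
    """The hand-designed iterator Psi (one Jacobi step, Eqs. (8)-(9))."""
    u_hat = convolve2d(u, jacobi_kernel, mode="same") + (h**2) * f / 4.0
    return e * u_hat + (1.0 - e) * b

def phi(u, f, e, b, h, H_kernel):
    """The learned iterator Phi_H of Eq. (10): Psi(u) + G H (Psi(u) - u)."""
    psi_u = psi(u, f, e, b, h)
    w = psi_u - u
    return psi_u + e * convolve2d(w, H_kernel, mode="same")   # multiplying by e applies G

n, h = 8, 1.0 / 7
e = np.ones((n, n)); e[0, :] = e[-1, :] = e[:, 0] = e[:, -1] = 0.0
b = np.zeros((n, n)); f = np.zeros((n, n))
u = np.random.default_rng(0).normal(size=(n, n))
print(np.allclose(phi(u, f, e, b, h, np.zeros((3, 3))),
                  psi(u, f, e, b, h)))     # True: H = 0 recovers the standard solver
```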
3.2 TRAINING AND GENERALIZATION
We train our iterator $\Phi_H(u; G, f, b, n)$ to converge quickly to the ground truth solution on a set $D = \{(G_l, f_l, b_l, n_l)\}_{l=1}^{M}$ of problem instances. For each instance, the ground truth solution $u^*$ is obtained from the existing solver Ψ. The learning objective is then
$$\min_H \sum_{(G_l, f_l, b_l, n_l) \in D} \mathbb{E}_{u_0 \sim \mathcal{N}(0, I)} \left\| \Phi_H^k(u_0; G_l, f_l, b_l, n_l) - u^* \right\|_2^2 \qquad (12)$$
Intuitively, we look for a matrix H such that the corresponding iterator $\Phi_H$ gets us as close as possible to the solution in k steps, starting from a random initialization $u_0$ sampled from a white Gaussian. In our experiments, k is chosen uniformly from [1, 20], similar to the procedure in Song et al. (2017). Smaller k is easier to learn, with fewer steps to back-propagate through, while larger k better approximates our test-time setting: we care about the final approximation accuracy after a given number of iteration steps. Combining smaller and larger k performs best in practice.
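A compact PyTorch sketch of this training setup is shown below. The toy problem (f = 0 and b = 0, so the exact solution is u* = 0), the 3-layer H, and all sizes are illustrative simplifications of the procedure described above, not the paper's actual configuration.

```python
import torch
import torch.nn.functional as F

def jacobi(u, f, e, b, h):
    """Psi as a fixed convolution (Eqs. (8)-(9)); tensors have shape (1, 1, n, n)."""
    k = torch.tensor([[[[0.0, 0.25, 0.0], [0.25, 0.0, 0.25], [0.0, 0.25, 0.0]]]])
    u_hat = F.conv2d(u, k, padding=1) + h**2 * f / 4.0
    return e * u_hat + (1.0 - e) * b

# H: a small deep linear convolutional network (no bias, no nonlinearity).
H = torch.nn.Sequential(*[torch.nn.Conv2d(1, 1, 3, padding=1, bias=False) for _ in range(3)])
opt = torch.optim.Adam(H.parameters(), lr=1e-3)

n, h = 16, 1.0 / 15
e = torch.ones(1, 1, n, n)
e[..., 0, :] = 0.0; e[..., -1, :] = 0.0; e[..., :, 0] = 0.0; e[..., :, -1] = 0.0
b = torch.zeros(1, 1, n, n); f = torch.zeros(1, 1, n, n)
u_star = torch.zeros(1, 1, n, n)   # with b = 0 and f = 0 the exact solution is zero

for step in range(100):
    u = torch.randn(1, 1, n, n)                      # u_0 ~ N(0, I)
    k_steps = int(torch.randint(1, 21, (1,)))        # k uniform in [1, 20]
    for _ in range(k_steps):                         # unroll Phi_H for k steps (Eq. (12))
        psi_u = jacobi(u, f, e, b, h)
        u = psi_u + e * H(psi_u - u)
    loss = ((u - u_star) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```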
We show in the following theorem that there is a convex open set of H that the learning algorithm can explore. To simplify the statement of the theorem, for any linear iterator Φ(u) = Tu + c we will refer to the spectral radius (norm) of Φ as the spectral radius (norm) of T . Theorem 2. For fixed G, f, b, n, the spectral norm of ΦH(u;G, f, b, n) is a convex function of H , and the set of H such that the spectral norm of ΦH(u;G, f, b, n) < 1 is a convex open set.
Proof. See Appendix A.
Therefore, to find an iterator with small spectral norm, the learning algorithm only has to explore a convex open set. Note that Theorem 2 holds for spectral norm, whereas validity requires small spectral radius in Theorem 1. Nonetheless, several important PDE problems (Poisson, Helmholtz, etc) are symmetric, so it is natural to use a symmetric iterator, which means that spectral norm is equal to spectral radius. In our experiments, we do not explicitly enforce symmetry, but we observe that the optimization finds symmetric iterators automatically.
For training, we use a single grid size n, a single geometry G, f = 0, and a restricted set of boundary conditions b. The geometry we use is a square domain shown in Figure 1a. Although we train on a single domain, the model has surprising generalization properties, which we show in the following: Proposition 2. For fixed A, G, n and fixed H, if for some f0, b0, ΦH(u; G, f0, b0, n) is valid for the PDE problem (A, G, f0, b0, n), then for all f and b, the iterator ΦH(u; G, f, b, n) is valid for the PDE problem (A, G, f, b, n).
Proof. See Appendix A.
The proposition states that we can freely generalize to different f and b. There is no guarantee that we can generalize to different G and n; generalization to different G and n has to be verified empirically. In our experiments, our learned iterator converges to the correct solution for a variety of grid sizes n and geometries G, even though it was only trained on one grid size and geometry.
Even when generalization fails, there is no risk of obtaining incorrect results: the iterator will simply fail to converge. This is because, according to Lemma 1, the fixed points of our new iterator are the same as those of the hand-designed iterator Ψ. Therefore, if our iterator is convergent, it is valid.
3.3 INTERPRETATION OF H
What is H trying to approximate? In this section we show that we are training our linear function GH to approximate T(I − T)⁻¹: if it were able to approximate T(I − T)⁻¹ perfectly, our iterator ΦH would converge to the correct solution in a single iteration.
Let the original update rule be Ψ(u) = Tu + c, and the unknown ground truth solution be u∗ satisfying u∗ = Tu∗ + c. Let r = u∗ − u be the current error, and e = u∗ −Ψ(u) be the new error after applying one step of Ψ. They are related by
e = u∗ −Ψ(u) = u∗ − (Tu+ c) = T (u∗ − u) = Tr (13)
In addition, let w = Ψ(u)− u be the update Ψ makes. This is related to the current error r by
w = Ψ(u)− u = Tu+ c− u+ (u∗ − Tu∗ − c) = T (u− u∗) + (u∗ − u) = (I − T )r (14)
From Eq. (10) we can observe that the linear operator GH takes as input Ψ’s update w, and tries to approximate the error e: GHw ≈ e. If the approximation were perfect: GHw = e, the iterator ΦH would converge in a single iteration. Therefore, we are trying to find some linear operator R, such that Rw = e. In fact, if we combine Eq. (13) and Eq. (14), we can observe that T (I − T )−1 is (uniquely) the linear operator we are looking for
T (I − T )−1w = e (15)
where (I − T)⁻¹ exists because ρ(T) < 1, so 1 is not an eigenvalue of T and I − T is invertible. Therefore, we would like our linear function GH to approximate T(I − T)⁻¹. Note that (I − T)⁻¹ is a dense matrix in general, meaning that it is impossible to exactly achieve GH = T(I − T)⁻¹ with a convolutional operator H. However, the better GH is able to approximate T(I − T)⁻¹, the faster our iterator converges to the solution u∗.
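The identity in Eq. (15) is easy to verify numerically. The toy check below (our own NumPy snippet, using a random contractive T rather than a PDE-derived one) confirms that applying T(I − T)⁻¹ to the update w recovers the error e.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 16
T = rng.standard_normal((n, n))
T *= 0.9 / np.max(np.abs(np.linalg.eigvals(T)))   # rescale so that rho(T) < 1
c = rng.standard_normal(n)
u_star = np.linalg.solve(np.eye(n) - T, c)        # fixed point: u* = T u* + c
u = rng.standard_normal(n)

e = T @ (u_star - u)                              # error after one step of Psi (Eq. 13)
w = (np.eye(n) - T) @ (u_star - u)                # update made by Psi (Eq. 14)
R = T @ np.linalg.inv(np.eye(n) - T)              # the ideal operator from Eq. (15)
assert np.allclose(R @ w, e)
```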
3.4 LINEAR DEEP NETWORKS
In our iterator design, H is a linear function parameterized by a linear deep network without nonlinearity or bias terms. Even though our objective in Eq. (12) is a non-linear (and non-convex) function of the parameters of the deep network, this is not an issue in practice. In particular, Arora et al. (2018) observe that when modeling linear functions, deep networks can be faster to optimize with gradient descent than a directly parameterized linear map, despite non-convexity.
Even though a linear deep network can only represent a linear function, it has several advantages. On an n×n grid, each convolution layer only requires O(n²) computation and has a constant number of parameters, while a general linear function requires O(n⁴) computation and has O(n⁴) parameters. Stacking d convolution layers allows us to parameterize complex linear functions with large receptive fields, while only requiring O(dn²) computation and O(d) parameters. We experiment on two types of linear deep networks:
Conv model. We model H as a network with 3 × 3 convolutional layers without non-linearity or bias. We will refer to a model with k layers as “Convk”, e.g. Conv3 has 3 convolutional layers.
U-Net model. The Conv models suffer from the same problem as Jacobi: the receptive field grows only by 1 for each additional layer. To resolve this problem, we design the deep network counterpart of the Multigrid method. Instead of manually designing the sub-sampling / super-sampling functions, we use a U-Net architecture (Ronneberger et al., 2015) to learn them from data. Because each layer reduces the grid size by half, and the i-th layer of the U-Net only operates on (2−in)-sized grids, the total computation is only increased by a factor of
1 + 1/4 + 1/16 + · · · < 4/3
compared to a two-layer convolution. The minimal overhead provides a very large improvement of convergence speed in our experiments. We will refer to Multigrid and U-Net models with k sub-sampling layers as Multigridk and U-Netk, e.g. U-Net2 is a model with 2 sub-sampling layers.
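Because the models above contain no nonlinearities, a stack of 3×3 convolutions collapses to a single equivalent kernel whose size reveals the receptive field. The short sketch below (our own, with random placeholder kernels rather than learned weights) illustrates this for a Conv3-style model.

```python
import numpy as np
from scipy.signal import convolve2d

def effective_kernel(kernels):
    """Collapse a stack of linear convolution layers into one equivalent kernel."""
    out = np.array([[1.0]])              # identity kernel
    for k in kernels:
        out = convolve2d(out, k)         # 'full' mode: each 3x3 layer grows the kernel by 2
    return out

layers = [np.random.randn(3, 3) for _ in range(3)]   # a Conv3-style stack
print(effective_kernel(layers).shape)                # (7, 7): receptive field grows by 1 per layer per side
```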
4 EXPERIMENTS
4.1 SETTING
We evaluate our method on the 2D Poisson equation with Dirichlet boundary conditions, ∇2u = f. There exist several iterative solvers for the Poisson equation, including Jacobi, Gauss-Seidel, conjugate-gradient, and multigrid methods. We select the Jacobi method as our standard solver Ψ.
To reemphasize, our goal is to train a model on simple domains where the ground truth solutions can be easily obtained, and then evaluate its performance on different geometries and boundary conditions. Therefore, for training, we select the simplest Laplace equation, ∇2u = 0, on a square domain with boundary conditions such that each side is a random fixed value. Figure 1a shows an
example of our training domain and its ground truth solution. This setting is also used in Farimani et al. (2017) and Sharma et al. (2018).
For testing, we use larger grid sizes than training. For example, we test on a 256×256 grid for a model trained on 64×64 grids. Moreover, we designed challenging geometries to test the generalization of our models. We test generalization in 4 different settings: (i) same geometry but larger grid, (ii) L-shape geometry, (iii) Cylinders geometry, and (iv) Poisson equation in the same geometry but with f ≠ 0. The two new geometries are designed because the models were trained on square domains and have never seen sharp or curved boundaries. Examples of the 4 settings are shown in Figure 1.
4.2 EVALUATION
As discussed in Section 2.4, the convergence rate of any linear iterator can be determined from the spectral radius ρ(T ), which provides guarantees on convergence and convergence rate. However, a fair comparison should also consider the computation cost of H . Thus, we evaluate the convergence rate by calculating the computation cost required for the error to drop below a certain threshold.
On GPU, the Jacobi iterator and our model can both be efficiently implemented as convolutional layers. Thus, we measure the computation cost by the number of convolutional layers. On CPU, each Jacobi iteration u′_{i,j} = (1/4)(u_{i−1,j} + u_{i+1,j} + u_{i,j−1} + u_{i,j+1}) has 4 multiply-add operations, while a 3×3 convolutional kernel requires 9 operations, so we measure the computation cost by the number of multiply-add operations. This metric is biased in favor of Jacobi because there is little practical reason to implement convolutions on CPU. Nonetheless, we report both metrics in our experiments.
4.3 CONV MODEL
Table 1 shows results of the Conv model. The model is trained on a 16 × 16 square domain, and tested on 64 × 64. For all settings, our models converge to the correct solution, and require less computation than Jacobi. The best model, Conv3, is ∼ 5× faster than Jacobi in terms of layers, and ∼ 2.5× faster in terms of multiply-add operations. As discussed in Section 3.2, if our iterator converges for a geometry, then it is guaranteed to converge to the correct solution for any f and boundary values b. The experiment results show that our model not only converges but also converges faster than the standard solver, even though it is only trained on a smaller square domain.
4.4 U-NET MODEL
For the U-Net models, we compare them against Multigrid models with the same number of subsampling and smoothing layers. Therefore, our models have the same number of convolutional layers, and roughly 9/4 times the number of operations compared to Multigrid. The model is trained on a 64×64 square domain, and tested on 256×256. The bottom part of Table 1 shows the results of the U-Net model. Similar to the results of the Conv models, our models outperform Multigrid in all settings. Note that U-Net2 has a larger advantage in computation cost over Multigrid2 than U-Net3 has over Multigrid3; this is because Multigrid2 is a relatively weaker baseline. U-Net3 still converges faster than U-Net2.
4.5 COMPARISON WITH FENICS
The FEniCS package (Logg et al., 2012) provides a collection of tools with high-level Python and C++ interfaces to solve differential equations. The open-source project is developed and maintained by a global community of scientists and software developers. Its extensive optimization over the years, including support for parallel computation, has led to its widespread adoption in industry and academia (Alnæs et al., 2015).
We measure the wall clock time of the FEniCS model and our model, run on the same hardware. The FEniCS model is set to be the minimal residual method with algebraic multigrid preconditioner, which we measure to be the fastest compared to other methods such as Jacobi or Incomplete LU factorization preconditioner. We ignore the time it takes to set up geometry and boundary conditions, and only consider the time the solver takes to solve the problem. We set the error threshold to be 1 percent of the initial error. For the square domain, we use a quadrilateral mesh. For the L-shape and cylinder domains, however, we let FEniCS generate the mesh automatically, while ensuring the number of mesh points to be similar.
Figure 2 shows that our model is comparable or faster than FEniCS in wall clock time. These experiments are all done on CPU. Our model efficiently runs on GPU, while the fast but complex methods in FEniCS do not have efficient GPU implementations available. On GPU, we measure an additional 30× speedup (on Tesla K80 GPU, compared with a 64-core CPU).
5 RELATED WORK
Recently, there have been several works on applying deep learning to solve the Poisson equation. However, to the best of our knowledge, previous works used deep networks to directly generate the solution; they have no correctness guarantees and are not generalizable to arbitrary grid sizes and boundary conditions. Most related to our work are (Farimani et al., 2017) and (Sharma et al., 2018), which learn deep networks to output the solution of the 2D Laplace equation (a special case where f = 0). (Farimani et al., 2017) trained a U-Net model that takes in the boundary condition as a 2D image and outputs the solution. The model is trained by L1 loss to the ground truth solution and an adversarial discriminator loss. (Sharma et al., 2018) also trained a U-net model but used a weakly-supervised loss. There are other related works that solved the Poisson equation in concrete physical problems. (Tang et al., 2017) solved for electric potential in 2D/3D space; (Tompson et al., 2017) solved for pressure fields for fluid simulation; (Zhang et al., 2018) solved particle simulation of a PN Junction.
There are other works that solve other types of PDEs. For example, many studies aimed to use deep learning to accelerate and approximate fluid dynamics, governed by the Euler equation or the Navier-Stokes equations (Guo et al., 2016; Yang et al., 2016; Chu & Thuerey, 2017; Kutz, 2017). (Eismann et al., 2018) use Bayesian optimization to design shapes with reduced drag coefficients in laminar fluid flow. Other applications include solving the Schrodinger equation (Mills et al., 2017), turbulence modeling (Singh et al., 2017), and the American options and Black-Scholes PDE (Sirignano & Spiliopoulos, 2018). Many of these PDEs are nonlinear and may not have a standard linear iterative solver; this is a limitation of our current method, since our model must be built on top of an existing linear solver to ensure correctness. We consider the extension to different PDEs as future work.
6 CONCLUSION
We presented a method to learn an iterative solver for PDEs that improves on an existing standard solver. The correct solution is theoretically guaranteed to be the fixed point of our iterator. We show that our model, trained on simple domains, can generalize to different grid sizes, geometries and boundary conditions. It converges correctly and achieves significant speedups compared to standard solvers, including highly optimized ones implemented in FEniCS.
A PROOFS
Theorem 1. For a linear iterator Ψ(u) = Tu+ c, Ψ converges to a unique stable fixed point from any initialization if and only if the spectral radius ρ(T ) < 1.
Proof. Suppose ρ(T) < 1. Then (I − T)⁻¹ exists, because 1 is not an eigenvalue of T, so I − T is invertible. Let u∗ = (I − T)⁻¹c; this u∗ is a stationary point of the iterator Ψ, i.e. u∗ = Tu∗ + c. For any initialization u0, let u_k = Ψ^k(u0). The error e_k = u∗ − u_k satisfies
Tek = (Tu∗ + c)− (Tuk + c) = u∗ − uk+1 = ek+1 ⇒ ek = T ke0 (16)
Since ρ(T ) < 1, we know T k → 0 as k → ∞ (LeVeque, 2007), which means the error ek → 0. Therefore, Ψ converges to u∗ from any u0.
Now suppose ρ(T) ≥ 1. Let λ1 be an eigenvalue of largest absolute value, so that ρ(T) = |λ1| ≥ 1, and let v1 be a corresponding eigenvector. We select the initialization u0 = u∗ + v1, so that e0 = v1. Because |λ1| ≥ 1, we have |λ1^k| ≥ 1, so T^k e0 = λ1^k v1 does not converge to 0 as k → ∞. However, under a different initialization û0 = u∗, we have ê0 = 0, so T^k ê0 = 0. Therefore the iteration cannot converge to the same fixed point from the two different initializations u0 and û0.
Proposition 1. If M is a full rank diagonal matrix, and u∗ ∈ R^(n²) satisfies Eq. (7), then u∗ satisfies Eq. (4).
Proof of Proposition 1. Let u∗ be a fixed point of Eq. (7) then
Gu∗ + (I −G)u∗ = G(M−1Nu∗ +M−1f) + (I −G)b
This is equivalent to (I −G)u∗ = (I −G)b
G(u∗ −M−1Nu∗ −M−1f) = 0 (17)
The latter equation is equivalent to GM−1(Au∗ − f) = 0. If M is a full rank diagonal matrix, this implies G(Au∗ − f) = 0, which is GAu∗ = Gf . Therefore, u∗ satisfies Eq.(4).
Theorem 2. For fixed G, f, b, n, the spectral norm of ΦH(u;G, f, b, n) is a convex function of H , and the set of H such that the spectral norm of ΦH(u;G, f, b, n) < 1 is a convex open set.
Proof. As before, denote Ψ(u) = Tu+ c. Observe that
ΦH(u;G, f, b, n) = Tu+ c+GH(Tu+ c− u) = (T +GHT −GH)u+GHc+ c (18)
The spectral norm ‖·‖2 is convex with respect to its argument, and (T + GHT − GH) is linear in H. Thus, ‖T + GHT − GH‖2 is convex in H as well, and the set of H with ‖T + GHT − GH‖2 < 1 is convex because it is a sub-level set of the convex function ‖T + GHT − GH‖2. To prove that it is open, observe that ‖·‖2 is continuous, so H ↦ ‖T + GHT − GH‖2 is a continuous map from H to the spectral norm of ΦH. The set of H such that ‖T + GHT − GH‖2 < 1 is the preimage of (−ε, 1) under this map for any ε > 0. As (−ε, 1) is open, its preimage must be open.
Proposition 2. For fixed A,G, n and fixed H , if for some f0, b0, ΦH(u;G, f0, b0, n) is valid for the PDE problem (A,G, f0, b0, n), then for all f and b, the iterator ΦH(u;G, f, b, n) is valid for the PDE problem (A,G, f, b, n).
Proof. From Theorem 1 and Lemma 1, our iterator is valid if and only if ρ(T + GHT − GH) < 1. The update matrix T + GHT − GH depends only on A and G, and is independent of the constant c in Eq. (18). Thus, the validity of the iterator is independent of f and b: if the iterator is valid for some f0 and b0, then it is valid for any choice of f and b.
B PROOF OF CONVERGENCE OF JACOBI METHOD
In Section 2.4.1, we show that for Poisson equation, the update matrix T = G(I − A). We now formally prove that ρ(G(I −A)) < 1 for any G. For any matrix T , the spectral radius is bounded by the spectral norm: ρ(T ) ≤ ‖T‖2, and the equality holds if T is symmetric. Since (I − A) is a symmetric matrix, ρ(I − A) = ‖I − A‖2. It has been proven that ρ(I − A) < 1 (Frankel, 1950). Moreover, ‖G‖2 = 1. Finally, matrix norms are sub-multiplicative, so
ρ(T ) ≤ ‖G(I −A)‖2 ≤ ‖G‖2‖I −A‖2 < 1 (19)
ρ(T) < 1 is true for any G. Thus, the standard Jacobi method is valid for the Poisson equation under any geometry. | 1. What is the focus and contribution of the paper regarding accelerating the finite difference method in solving PDEs?
2. What are the strengths of the proposed approach, particularly in its theoretical analysis and experimental results?
3. Do you have any concerns or suggestions regarding the use of nonlinear deep networks, the limitation to Poisson equations, and the confusion regarding Theorem 3?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Review | Review
This paper develops a method to accelerate the finite difference method in solving PDEs. Basically, the paper proposes a revised framework for fixed point iteration after discretization. The framework introduces a free linear operator --- the choice of the linear operator will influence the convergence rate. The paper uses a deep linear neural network to learn a good operator. Experimental results on Poisson equations show that the learned operator achieves significant speed-ups. The paper also gives theoretical analysis about the range of the valid linear operator (convex open set) and guarantees of the generalization for the learned operator.
This is, in general, a good paper. The work is solid and results promising. Solving PDEs is no doubt an important problem, having broad applications. It will be very meaningful if we can achieve the same accuracy using much less computational power. Here, I have a few questions.
1). Why didn’t you try the nonlinear deep network? Is it merely for computational efficiency? I expect that nonlinear networks might result in even better estimates of H and further reduce the number of fixed-point iterations, despite each operation of H will be more expensive. There might be some trade-off here. But I would like to see some empirical results and discussions.
2). The evaluation is only on Poisson equations, which are known to be easy. Have you tried other PDEs, such as Burger’s equations? I think your method will be more meaningful for those challenging PDEs, because they will require much more fine-grained grids to achieve a satisfactory accuracy and hence much more expensive. It will be great if your method can dramatically improve the efficiency for solving these equations.
3). I am a bit confused about the statement of Th 3 --- the last sentence “H is valid for all parameters f and b if the iterator \psi converges …” I think it should be “for one parameter”.
Miscellaneous:
1) Typo. In eq. (7)
2) Section 3.3, H(w) should be Hw (for consistency) |
ICLR | Title
Learning Neural PDE Solvers with Convergence Guarantees
Abstract
Partial differential equations (PDEs) are widely used across the physical and computational sciences. Decades of research and engineering went into designing fast iterative solution methods. Existing solvers are general purpose, but may be sub-optimal for specific classes of problems. In contrast to existing hand-crafted solutions, we propose an approach to learn a fast iterative solver tailored to a specific domain. We achieve this goal by learning to modify the updates of an existing solver using a deep neural network. Crucially, our approach is proven to preserve strong correctness and convergence guarantees. After training on a single geometry, our model generalizes to a wide variety of geometries and boundary conditions, and achieves 2-3 times speedup compared to state-of-the-art solvers.
1 INTRODUCTION
Partial differential equations (PDEs) are ubiquitous tools for modeling physical phenomena, such as heat, electrostatics, and quantum mechanics. Traditionally, PDEs are solved with hand-crafted approaches that iteratively update and improve a candidate solution until convergence. Decades of research and engineering went into designing update rules with fast convergence properties.
The performance of existing solvers varies greatly across application domains, with no method uniformly dominating the others. Generic solvers are typically effective, but could be far from optimal for specific domains. In addition, high performing update rules could be too complex to design by hand. In recent years, we have seen that for many classical problems, complex updates learned from data or experience can out-perform hand-crafted ones. For example, for Markov chain Monte Carlo, learned proposal distributions lead to orders of magnitude speedups compared to hand-designed ones (Song et al., 2017; Levy et al., 2017). Other domains that benefited significantly include learned optimizers (Andrychowicz et al., 2016) and learned data structures (Kraska et al., 2018). Our goal is to bring similar benefits to PDE solvers.
Hand-designed solvers are relatively simple to analyze and are guaranteed to be correct in a large class of problems. The main challenge is how to provide the same guarantees with a potentially much more complex learned solver. To achieve this goal, we build our learned iterator on top of an existing standard iterative solver to inherit its desirable properties. The iterative solver updates the solution at each step, and we learn a parameterized function to modify this update. This function class is chosen so that for any choice of parameters, the fixed point of the original iterator is preserved. This guarantees correctness, and training can be performed to enhance convergence speed. Because of this design, we only train on a single problem instance; our model correctly generalizes to a variety of different geometries and boundary conditions with no observable loss of performance. As a result, our approach provides: (i) theoretical guarantees of convergence to the correct stationary solution, (ii) faster convergence than existing solvers, and (iii) generalizes to geometries and boundary conditions very different from the ones seen at training time. This is in stark contrast with
existing deep learning approaches for PDE solving (Tang et al., 2017; Farimani et al., 2017) that are limited to specific geometries and boundary conditions, and offer no guarantee of correctness.
Our approach applies to any PDE with existing linear iterative solvers. As an example application, we solve the 2D Poisson equations. Our method achieves a 2-3× speedup on number of multiply-add operations when compared to standard iterative solvers, even on domains that are significantly different from our training set. Moreover, compared with state-of-the-art solvers implemented in FEniCS (Logg et al., 2012), our method achieves faster performance in terms of wall clock CPU time. Our method is also simple as opposed to deeply optimized solvers such as our baseline in FEniCS (minimal residual method + algebraic multigrid preconditioner). Finally, since we utilize standard convolutional networks which can be easily parallelized on GPU, our approach leads to an additional 30× speedup when run on GPU.
2 BACKGROUND
In this section, we give a brief introduction of linear PDEs and iterative solvers. We refer readers to LeVeque (2007) for a thorough review.
2.1 LINEAR PDES
Linear PDE solvers find functions that satisfy a (possibly infinite) set of linear differential equations. More formally, let F = {u : R^k → R} be the space of candidate functions, and A : F → F be a linear operator; the goal is to find a function u ∈ F that satisfies a linear equation Au = f, where f is another function R^k → R given by our problem. Many PDEs fall into this framework. For example, heat diffusion satisfies ∇²u = f (Poisson equation), where ∇² = ∂²/∂x₁² + ··· + ∂²/∂x_k² is the linear Laplace operator; u maps spatial coordinates (e.g. in R³) to temperature, and f maps spatial coordinates to the heat in/out flow. Solving this equation gives the stationary temperature under the specified heat in/out flow.
Usually the equation Au = f does not uniquely determine u. For example, u = constant for any constant is a solution to the equation ∇²u = 0. To ensure a unique solution we provide additional equations, called "boundary conditions". Several boundary conditions arise very naturally in physical problems. A very common one is the Dirichlet boundary condition, where we pick some subset G ⊂ R^k and fix the values of the function on G to some fixed value b,
u(x) = b(x), for all x ∈ G
where the function b is usually clear from the underlying physical problem. As in previous literature, we refer to G as the geometry of the problem, and b as the boundary value. We refer to the pair (G, b) as the boundary condition. In this paper, we only consider linear PDEs and boundary conditions that have unique solutions.
2.2 FINITE DIFFERENCE METHOD
Most real-world PDEs do not admit an analytic solution and must be solved numerically. The first step is to discretize the solution space F from R^k → R into D^k → R, where D is a discrete subset of R. When the space is compact, it is discretized into an n × n × ··· × n (k times) uniform Cartesian grid with mesh width h. Any function in F is approximated by its values on the n^k grid points. We denote the discretized function as a vector u in R^(n^k). In this paper, we focus on 2D problems (k = 2), but the strategy applies to any dimension.
We discretize all three terms in the equation Au = f and the boundary condition (G, b). The PDE solution u is discretized such that u_{i,j} = u(x_i, y_j) corresponds to the value of u at grid point (x_i, y_j). We can similarly discretize f and b. In linear PDEs, the linear operator A is a linear combination of partial derivative operators; for example, for the Poisson equation A = ∇² = Σ_i ∂²/∂x_i². Therefore we can first discretize each partial derivative, then linearly combine the discretized partial derivatives to obtain a discretized A. Finite difference is a method that approximates partial derivatives on a discretized space, and as the mesh width h → 0, the approximation approaches the true derivative. For example, ∂²u/∂x² can be discretized in 2D as ∂²u/∂x² ≈ (1/h²)(u_{i−1,j} − 2u_{i,j} + u_{i+1,j}), and the Laplace operator in 2D can be correspondingly approximated as:
∇²u = ∂²u/∂x² + ∂²u/∂y² ≈ (1/h²)(u_{i−1,j} + u_{i+1,j} + u_{i,j−1} + u_{i,j+1} − 4u_{i,j})    (1)
After discretization, we can rewrite Au = f as a linear matrix equation
Au = f (2)
where u, f ∈ R^(n²), and A is a matrix in R^(n²×n²) (these are n²-dimensional because we focus on 2D problems). In many PDEs such as the Poisson and Helmholtz equations, A is sparse, banded, and symmetric.
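For concreteness, the sketch below (our own NumPy code, with one possible sign/scaling convention rather than the paper's exact normalization) assembles the dense n² × n² matrix A for the 5-point discrete Laplacian of Eq. (1) on an n × n grid and checks that it is symmetric.

```python
import numpy as np

def poisson_matrix(n, h=1.0):
    """Dense n^2 x n^2 matrix of the 5-point discrete Laplacian (Eq. 1), row-major ordering."""
    N = n * n
    A = np.zeros((N, N))
    for i in range(n):
        for j in range(n):
            row = i * n + j
            A[row, row] = -4.0
            for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ii, jj = i + di, j + dj
                if 0 <= ii < n and 0 <= jj < n:
                    A[row, ii * n + jj] = 1.0
    return A / h**2

A = poisson_matrix(8)
assert np.allclose(A, A.T)   # symmetric, as noted above
```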
2.3 BOUNDARY CONDITION
We also need to include the boundary condition u(x) = b(x) for all x ∈ G. If a discretized point (x_i, y_j) belongs to G, we need to fix the value of u_{i,j} to b_{i,j}. To achieve this, we first define e ∈ {0, 1}^(n²) to be a vector of 0's and 1's, in which 0 indicates that the corresponding point belongs to G. Then, we define a "reset" matrix G = diag(e), a diagonal matrix R^(n²) → R^(n²) such that
(Gu)_{i,j} = u_{i,j} if (x_i, y_j) ∉ G, and (Gu)_{i,j} = 0 if (x_i, y_j) ∈ G    (3)
Intuitively G ”masks” every point in G to 0. Similarly, I − G can mask every point not in G to 0. Note that the boundary values are fixed and do not need to satisfy Au = f . Thus, the solution u to the PDE under geometry G should satisfy:
G(Au) = Gf
(I −G)u = (I −G)b (4)
The first equation ensures that the interior points (points not in G) satisfy Au = f , and the second ensures that the boundary condition is satisfied.
To summarize, (A,G,f, b, n) is our PDE problem, and we first discretize the problem on an n× n grid to obtain (A,G, f, b, n). Our objective is to obtain a solution u that satisfies Eq. (4), i.e. Au = f for the interior points and boundary condition ui,j = bi,j , ∀(xi, yj) ∈ G.
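To make the reset operator and Eq. (4) concrete, here is a small NumPy sketch (the names and the row-major flattening convention are our own) that builds the diagonal of G from a boolean boundary indicator and checks both conditions for a candidate solution.

```python
import numpy as np

def reset_diagonal(boundary):
    """Diagonal e of the reset matrix G (Eq. 3): 0 on geometry points, 1 elsewhere."""
    return (~boundary).astype(float).ravel()

def satisfies_eq4(u, A, f, boundary, b, tol=1e-8):
    """Check G(Au) = Gf on interior points and u = b on boundary points (Eq. 4)."""
    e = reset_diagonal(boundary)              # boundary: boolean n x n array, True on geometry points
    interior_residual = e * (A @ u - f)       # u, f, b: flattened length-n^2 vectors
    boundary_residual = (1.0 - e) * (u - b)
    return (np.abs(interior_residual).max() < tol and
            np.abs(boundary_residual).max() < tol)
```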
2.4 ITERATIVE SOLVERS
A linear iterative solver is defined as a function that takes as input the current proposed solution u ∈ R^(n²) and outputs an updated solution u′. Formally, it is a function Ψ : R^(n²) → R^(n²) that can be expressed as
u′ = Ψ(u) = Tu + c    (5)
where T is a constant update matrix and c is a constant vector. For each iterator Ψ there may be special vectors u∗ ∈ R^(n²) that satisfy u∗ = Ψ(u∗). These vectors are called fixed points.
The iterative solver Ψ should map any initial u0 ∈ R^(n²) to a correct solution of the PDE problem. This is formalized in the following definition. Definition 1 (Valid Iterator). An iterator Ψ is valid w.r.t. a PDE problem (A, G, f, b, n) if it satisfies:
a) Convergence: There is a unique fixed point u∗ such that Ψ converges to u∗ from any initialization: ∀u0 ∈ Rn2 , limk→∞Ψk(u0) = u∗.
b) Fixed Point: The fixed point u∗ is the solution to the linear system Au = f under boundary condition (G, b).
Convergence: Condition (a) in Definition 1 is satisfied if the matrix T is convergent, i.e. T k → 0 as k → ∞. It has been proven that T is convergent if and only if the spectral radius ρ(T ) < 1 (Olver, 2008):
Theorem 1. (Olver, 2008, Prop 7.25) For a linear iterator Ψ(u) = Tu + c, Ψ converges to a unique stable fixed point from any initialization if and only if the spectral radius ρ(T ) < 1.
Proof. See Appendix A.
It is important to note that Condition (a) only depends on T and not the constant c.
Fixed Point: Condition (b) in Definition 1 contains two requirements: satisfying Au = f, and satisfying the boundary condition (G, b). To satisfy Au = f, a standard approach is to design Ψ by matrix splitting: split the matrix A into A = M − N; rewrite Au = f as Mu = Nu + f (LeVeque, 2007). This naturally suggests the iterative update
u′ = M−1Nu+M−1f (6)
Because Eq. (6) is a rewrite of Au = f, stationary points u∗ of Eq. (6) satisfy Au∗ = f. Clearly, the choices of M and N are arbitrary but crucial. From Theorem 1, we must choose M such that the update converges. In addition, M⁻¹ must be easy to compute (e.g., diagonal).
Finally we also need to satisfy the boundary condition (I − G)u = (I − G)b in Eq.4. After each update in Eq. (6), the boundary condition could be violated. We use the “reset” operator defined in Eq. (3) to “reset” the values of ui,j to bi,j by Gu+ (I −G)b. The final update rule becomes
u′ = G(M−1Nu+M−1f) + (I −G)b (7)
Despite the added complexity, it is still a linear update rule of the form u′ = Tu + c in Eq. (5): we have T = GM⁻¹N and c = GM⁻¹f + (I − G)b. As long as M is a full rank diagonal matrix, fixed points of this equation satisfy Eq. (4). In other words, such a fixed point is a solution of the PDE problem (A, G, f, b, n).
Proposition 1. If M is a full rank diagonal matrix, and u∗ ∈ R^(n²) satisfies Eq. (7), then u∗ satisfies Eq. (4).
2.4.1 JACOBI METHOD
A simple but effective way to choose M is the Jacobi method, which sets M = I (a full rank diagonal matrix, as required by Proposition 1). For Poisson equations, this update rule has the following form,
û_{i,j} = (1/4)(u_{i−1,j} + u_{i+1,j} + u_{i,j−1} + u_{i,j+1}) + (h²/4) f_{i,j}    (8)
u′ = Gû + (I − G)b    (9)
For Poisson equations and any geometry G, the update matrix T = G(I − A) has spectral radius ρ(T ) < 1 (see Appendix B). In addition, by Proposition 1 any fixed point of the update rule Eq.(8,9) must satisfy Eq. (4). Both convergence and fixed point conditions from Definition 1 are satisfied: Jacobi iterator Eq.(8,9) is valid for any Poisson PDE problem.
In addition, each step of the Jacobi update can be implemented as a neural network layer: Eq. (8) can be efficiently implemented by convolving u with the kernel
[ 0    1/4  0
  1/4  0    1/4
  0    1/4  0 ]
and adding (h²/4) f. The "reset" step in Eq. (9) can also be implemented by multiplying u with G and adding the boundary values (I − G)b.
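A minimal NumPy version of this convolutional Jacobi step might look as follows (our own sketch, not the authors' code; the geometry mask G is an n × n array with 1 on interior points and 0 on boundary points, and zero padding stands in for the grid border).

```python
import numpy as np
from scipy.signal import convolve2d

KERNEL = np.array([[0.0, 0.25, 0.0],
                   [0.25, 0.0, 0.25],
                   [0.0, 0.25, 0.0]])

def jacobi_step(u, G, b, f, h):
    """One Jacobi sweep (Eq. 8) followed by the boundary reset (Eq. 9)."""
    u_hat = convolve2d(u, KERNEL, mode="same") + (h ** 2 / 4.0) * f
    return G * u_hat + (1.0 - G) * b
```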
2.4.2 MULTIGRID METHOD
The Jacobi method has very slow convergence rate (LeVeque, 2007). This is evident from the update rule, where the value at each grid point is only influenced by its immediate neighbors. To propagate information from one grid point to another, we need as many iterations as their distance on the grid. The key insight of the Multigrid method is to perform Jacobi updates on a downsampled (coarser) grid and then upsample the results. A common structure is the V-cycle (Briggs et al., 2000). In each
V-cycle, there are k downsampling layers followed by k upsampling layers, and multiple Jacobi updates are performed at each resolution. The downsampling and upsampling operations are also called restriction and prolongation, and are often implemented using weighted restriction and linear interpolation respectively. The advantage of the multigrid method is clear: on a downsampled grid (by a factor of 2) with mesh width 2h, information propagation is twice as fast, and each iteration requires only 1/4 operations compared to the original grid with mesh width h.
3 LEARNING FAST AND PROVABLY CORRECT ITERATIVE PDE SOLVERS
A PDE problem consists of five components (A,G,f, b, n). One is often interested in solving the same PDE class A under varying f, discretization n, and boundary conditions (G, b). For example, solving the Poisson equation under different boundary conditions (e.g., corresponding to different mechanical systems governed by the same physics). In this paper, we fix A but vary G,f, b, n, and learn an iterator that solves a class of PDE problems governed by the same A. For a discretized PDE problem (A,G, f, b, n) and given a standard (hand designed) iterative solver Ψ, our goal is to improve upon Ψ and learn a solver Φ that has (1) correct fixed point and (2) fast convergence (on average) on the class of problems of interest. We will proceed to parameterize a family of Φ that satisfies (1) by design, and achieve (2) by optimization.
In practice, we can only train Φ on a small number of problems (A, fi, Gi, bi, ni). To be useful, Φ must deliver good performance on every choice of G, f, b, and different grid sizes n. We show, theoretically and empirically, that our iterator family has good generalization properties: even if we train on a single problem (A,G, f, b, n), the iterator performs well on very different choices of G, f, b, and grid size n. For example, we train our iterator on a 64× 64 square domain, and test on a 256× 256 L-shaped domain (see Figure 1).
3.1 FORMULATION
For a fixed PDE problem class A, let Ψ be a standard linear iterative solver known to be valid. We will use the more formal notation Ψ(u; G, f, b, n), since Ψ is a function of u but also depends on G, f, b, n. Our assumption is that for any choice of G, f, b, n (but fixed PDE class A), Ψ(u; G, f, b, n) is valid. We previously showed that the Jacobi iterator in Eqs. (8, 9) has this property for the Poisson PDE class.
We design our new family of iterators ΦH : R^(n²) → R^(n²) as
w = Ψ(u; G, f, b, n) − u
ΦH(u; G, f, b, n) = Ψ(u; G, f, b, n) + GHw    (10)
where H is a learned linear operator (it satisfies H0 = 0). The term GHw can be interpreted as a correction term to Ψ(u; G, f, b, n). When there is no confusion, we suppress the dependence on G, f, b, n and simply write Ψ(u) and ΦH(u).
ΦH should have similar computational complexity to Ψ. Therefore, we choose H to be a convolutional operator, which can be parameterized by a deep linear convolutional network. We will discuss the parameterization of H in detail in Section 3.4; we first prove some parameterization-independent properties.
The correct PDE solution is a fixed point of ΦH by the following lemma: Lemma 1. For any PDE problem (A,G, f, b, n) and choice of H , if u∗ is a fixed point of Ψ, it is a fixed point of ΦH in Eq. (10).
Proof. Based on the iterative rule in Eq. (10), if u∗ satisfies Ψ(u∗) = u∗ then w = Ψ(u∗)−u∗ = 0. Therefore, ΦH(u∗) = Ψ(u∗) +GH0 = u∗.
Moreover, the space of ΦH subsumes the standard solver Ψ. If H = 0, then ΦH = Ψ. Furthermore, write Ψ(u) = Tu + c; if H = T, then since GT = T (see Eq. (7)),
ΦH(u) = Ψ(u) + GT(Ψ(u) − u) = TΨ(u) + c = Ψ²(u)    (11)
which is equal to two iterations of Ψ. Computing Ψ requires one convolution T, while computing ΦH requires two convolutions: T and H. Therefore, if we choose H = T, then ΦH computes two iterations of Ψ with two convolutions: it is at least as efficient as the standard solver Ψ.
3.2 TRAINING AND GENERALIZATION
We train our iterator ΦH(u;G, f, b, n) to converge quickly to the ground truth solution on a set D = {(Gl, fl, bl, nl)}Ml=1 of problem instances. For each instance, the ground truth solution u∗ is obtained from the existing solver Ψ. The learning objective is then
min_H  Σ_{(G_l, f_l, b_l, n_l) ∈ D}  E_{u_0 ∼ N(0,1)} ‖ΦH^k(u_0; G_l, f_l, b_l, n_l) − u∗‖₂²    (12)
Intuitively, we look for a matrix H such that the corresponding iterator ΦH gets us as close as possible to the solution in k steps, starting from a random initialization u0 sampled from a white Gaussian. In our experiments, k is chosen uniformly from [1, 20], similar to the procedure in (Song et al., 2017). Smaller k is easier to learn, with fewer steps to back-propagate through, while larger k better approximates our test-time setting: we care about the final approximation accuracy after a given number of iteration steps. Combining smaller and larger k performs best in practice.
We show in the following theorem that there is a convex open set of H that the learning algorithm can explore. To simplify the statement of the theorem, for any linear iterator Φ(u) = Tu + c we will refer to the spectral radius (norm) of Φ as the spectral radius (norm) of T . Theorem 2. For fixed G, f, b, n, the spectral norm of ΦH(u;G, f, b, n) is a convex function of H , and the set of H such that the spectral norm of ΦH(u;G, f, b, n) < 1 is a convex open set.
Proof. See Appendix A.
Therefore, to find an iterator with small spectral norm, the learning algorithm only has to explore a convex open set. Note that Theorem 2 holds for spectral norm, whereas validity requires small spectral radius in Theorem 1. Nonetheless, several important PDE problems (Poisson, Helmholtz, etc) are symmetric, so it is natural to use a symmetric iterator, which means that spectral norm is equal to spectral radius. In our experiments, we do not explicitly enforce symmetry, but we observe that the optimization finds symmetric iterators automatically.
For training, we use a single grid size n, a single geometry G, f = 0, and a restricted set of boundary conditions b. The geometry we use is a square domain shown in Figure 1a. Although we train on a single domain, the model has surprising generalization properties, which we show in the following: Proposition 2. For fixed A, G, n and fixed H, if for some f0, b0, ΦH(u; G, f0, b0, n) is valid for the PDE problem (A, G, f0, b0, n), then for all f and b, the iterator ΦH(u; G, f, b, n) is valid for the PDE problem (A, G, f, b, n).
Proof. See Appendix A.
The proposition states that we can freely generalize to different f and b. There is no guarantee that we can generalize to different G and n; generalization to different G and n has to be verified empirically. In our experiments, our learned iterator converges to the correct solution for a variety of grid sizes n and geometries G, even though it was only trained on one grid size and geometry.
Even when generalization fails, there is no risk of obtaining incorrect results: the iterator will simply fail to converge. This is because, according to Lemma 1, the fixed points of our new iterator are the same as those of the hand-designed iterator Ψ. Therefore, if our iterator is convergent, it is valid.
3.3 INTERPRETATION OF H
What is H trying to approximate? In this section we show that we are training our linear function GH to approximate T(I − T)⁻¹: if it were able to approximate T(I − T)⁻¹ perfectly, our iterator ΦH would converge to the correct solution in a single iteration.
Let the original update rule be Ψ(u) = Tu + c, and the unknown ground truth solution be u∗ satisfying u∗ = Tu∗ + c. Let r = u∗ − u be the current error, and e = u∗ −Ψ(u) be the new error after applying one step of Ψ. They are related by
e = u∗ −Ψ(u) = u∗ − (Tu+ c) = T (u∗ − u) = Tr (13)
In addition, let w = Ψ(u)− u be the update Ψ makes. This is related to the current error r by
w = Ψ(u)− u = Tu+ c− u+ (u∗ − Tu∗ − c) = T (u− u∗) + (u∗ − u) = (I − T )r (14)
From Eq. (10) we can observe that the linear operator GH takes as input Ψ’s update w, and tries to approximate the error e: GHw ≈ e. If the approximation were perfect: GHw = e, the iterator ΦH would converge in a single iteration. Therefore, we are trying to find some linear operator R, such that Rw = e. In fact, if we combine Eq. (13) and Eq. (14), we can observe that T (I − T )−1 is (uniquely) the linear operator we are looking for
T (I − T )−1w = e (15)
where (I − T)⁻¹ exists because ρ(T) < 1, so 1 is not an eigenvalue of T and I − T is invertible. Therefore, we would like our linear function GH to approximate T(I − T)⁻¹. Note that (I − T)⁻¹ is a dense matrix in general, meaning that it is impossible to exactly achieve GH = T(I − T)⁻¹ with a convolutional operator H. However, the better GH is able to approximate T(I − T)⁻¹, the faster our iterator converges to the solution u∗.
3.4 LINEAR DEEP NETWORKS
In our iterator design, H is a linear function parameterized by a linear deep network without nonlinearity or bias terms. Even though our objective in Eq. (12) is a non-linear (and non-convex) function of the parameters of the deep network, this is not an issue in practice. In particular, Arora et al. (2018) observe that when modeling linear functions, deep networks can be faster to optimize with gradient descent than a directly parameterized linear map, despite non-convexity.
Even though a linear deep network can only represent a linear function, it has several advantages. On an n×n grid, each convolution layer only requires O(n²) computation and has a constant number of parameters, while a general linear function requires O(n⁴) computation and has O(n⁴) parameters. Stacking d convolution layers allows us to parameterize complex linear functions with large receptive fields, while only requiring O(dn²) computation and O(d) parameters. We experiment on two types of linear deep networks:
Conv model. We model H as a network with 3 × 3 convolutional layers without non-linearity or bias. We will refer to a model with k layers as “Convk”, e.g. Conv3 has 3 convolutional layers.
U-Net model. The Conv models suffer from the same problem as Jacobi: the receptive field grows only by 1 for each additional layer. To resolve this problem, we design the deep network counterpart of the Multigrid method. Instead of manually designing the sub-sampling / super-sampling functions, we use a U-Net architecture (Ronneberger et al., 2015) to learn them from data. Because each layer reduces the grid size by half, and the i-th layer of the U-Net only operates on (2−in)-sized grids, the total computation is only increased by a factor of
1 + 1/4 + 1/16 + · · · < 4/3
compared to a two-layer convolution. The minimal overhead provides a very large improvement of convergence speed in our experiments. We will refer to Multigrid and U-Net models with k sub-sampling layers as Multigridk and U-Netk, e.g. U-Net2 is a model with 2 sub-sampling layers.
4 EXPERIMENTS
4.1 SETTING
We evaluate our method on the 2D Poisson equation with Dirichlet boundary conditions, ∇2u = f. There exist several iterative solvers for the Poisson equation, including Jacobi, Gauss-Seidel, conjugate-gradient, and multigrid methods. We select the Jacobi method as our standard solver Ψ.
To reemphasize, our goal is to train a model on simple domains where the ground truth solutions can be easily obtained, and then evaluate its performance on different geometries and boundary conditions. Therefore, for training, we select the simplest Laplace equation, ∇2u = 0, on a square domain with boundary conditions such that each side is a random fixed value. Figure 1a shows an
example of our training domain and its ground truth solution. This setting is also used in Farimani et al. (2017) and Sharma et al. (2018).
For testing, we use larger grid sizes than training. For example, we test on a 256×256 grid for a model trained on 64×64 grids. Moreover, we designed challenging geometries to test the generalization of our models. We test generalization in 4 different settings: (i) same geometry but larger grid, (ii) L-shape geometry, (iii) Cylinders geometry, and (iv) Poisson equation in the same geometry but with f ≠ 0. The two new geometries are designed because the models were trained on square domains and have never seen sharp or curved boundaries. Examples of the 4 settings are shown in Figure 1.
4.2 EVALUATION
As discussed in Section 2.4, the convergence rate of any linear iterator can be determined from the spectral radius ρ(T ), which provides guarantees on convergence and convergence rate. However, a fair comparison should also consider the computation cost of H . Thus, we evaluate the convergence rate by calculating the computation cost required for the error to drop below a certain threshold.
On GPU, the Jacobi iterator and our model can both be efficiently implemented as convolutional layers. Thus, we measure the computation cost by the number of convolutional layers. On CPU, each Jacobi iteration u′_{i,j} = (1/4)(u_{i−1,j} + u_{i+1,j} + u_{i,j−1} + u_{i,j+1}) has 4 multiply-add operations, while a 3×3 convolutional kernel requires 9 operations, so we measure the computation cost by the number of multiply-add operations. This metric is biased in favor of Jacobi because there is little practical reason to implement convolutions on CPU. Nonetheless, we report both metrics in our experiments.
4.3 CONV MODEL
Table 1 shows results of the Conv model. The model is trained on a 16 × 16 square domain, and tested on 64 × 64. For all settings, our models converge to the correct solution, and require less computation than Jacobi. The best model, Conv3, is ∼ 5× faster than Jacobi in terms of layers, and ∼ 2.5× faster in terms of multiply-add operations. As discussed in Section 3.2, if our iterator converges for a geometry, then it is guaranteed to converge to the correct solution for any f and boundary values b. The experiment results show that our model not only converges but also converges faster than the standard solver, even though it is only trained on a smaller square domain.
4.4 U-NET MODEL
For the U-Net models, we compare them against Multigrid models with the same number of subsampling and smoothing layers. Therefore, our models have the same number of convolutional layers, and roughly 9/4 times the number of operations compared to Multigrid. The model is trained on a 64×64 square domain, and tested on 256×256. The bottom part of Table 1 shows the results of the U-Net model. Similar to the results of the Conv models, our models outperform Multigrid in all settings. Note that U-Net2 has a larger advantage in computation cost over Multigrid2 than U-Net3 has over Multigrid3; this is because Multigrid2 is a relatively weaker baseline. U-Net3 still converges faster than U-Net2.
4.5 COMPARISON WITH FENICS
The FEniCS package (Logg et al., 2012) provides a collection of tools with high-level Python and C++ interfaces to solve differential equations. The open-source project is developed and maintained by a global community of scientists and software developers. Its extensive optimization over the years, including support for parallel computation, has led to its widespread adoption in industry and academia (Alnæs et al., 2015).
We measure the wall clock time of the FEniCS model and our model, run on the same hardware. The FEniCS model is set to be the minimal residual method with algebraic multigrid preconditioner, which we measure to be the fastest compared to other methods such as Jacobi or Incomplete LU factorization preconditioner. We ignore the time it takes to set up geometry and boundary conditions, and only consider the time the solver takes to solve the problem. We set the error threshold to be 1 percent of the initial error. For the square domain, we use a quadrilateral mesh. For the L-shape and cylinder domains, however, we let FEniCS generate the mesh automatically, while ensuring the number of mesh points to be similar.
Figure 2 shows that our model is comparable or faster than FEniCS in wall clock time. These experiments are all done on CPU. Our model efficiently runs on GPU, while the fast but complex methods in FEniCS do not have efficient GPU implementations available. On GPU, we measure an additional 30× speedup (on Tesla K80 GPU, compared with a 64-core CPU).
5 RELATED WORK
Recently, there have been several works on applying deep learning to solve the Poisson equation. However, to the best of our knowledge, previous works used deep networks to directly generate the solution; they have no correctness guarantees and are not generalizable to arbitrary grid sizes and boundary conditions. Most related to our work are (Farimani et al., 2017) and (Sharma et al., 2018), which learn deep networks to output the solution of the 2D Laplace equation (a special case where f = 0). (Farimani et al., 2017) trained a U-Net model that takes in the boundary condition as a 2D image and outputs the solution. The model is trained by L1 loss to the ground truth solution and an adversarial discriminator loss. (Sharma et al., 2018) also trained a U-net model but used a weakly-supervised loss. There are other related works that solved the Poisson equation in concrete physical problems. (Tang et al., 2017) solved for electric potential in 2D/3D space; (Tompson et al., 2017) solved for pressure fields for fluid simulation; (Zhang et al., 2018) solved particle simulation of a PN Junction.
There are other works that solve other types of PDEs. For example, many studies aimed to use deep learning to accelerate and approximate fluid dynamics, governed by the Euler equation or the Navier-Stokes equations (Guo et al., 2016; Yang et al., 2016; Chu & Thuerey, 2017; Kutz, 2017). (Eismann et al., 2018) use Bayesian optimization to design shapes with reduced drag coefficients in laminar fluid flow. Other applications include solving the Schrodinger equation (Mills et al., 2017), turbulence modeling (Singh et al., 2017), and the American options and Black-Scholes PDE (Sirignano & Spiliopoulos, 2018). Many of these PDEs are nonlinear and may not have a standard linear iterative solver; this is a limitation of our current method, since our model must be built on top of an existing linear solver to ensure correctness. We consider the extension to different PDEs as future work.
6 CONCLUSION
We presented a method to learn an iterative solver for PDEs that improves on an existing standard solver. The correct solution is theoretically guaranteed to be the fixed point of our iterator. We show that our model, trained on simple domains, can generalize to different grid sizes, geometries and boundary conditions. It converges correctly and achieves significant speedups compared to standard solvers, including highly optimized ones implemented in FEniCS.
A PROOFS
Theorem 1. For a linear iterator Ψ(u) = Tu+ c, Ψ converges to a unique stable fixed point from any initialization if and only if the spectral radius ρ(T ) < 1.
Proof. Suppose ρ(T) < 1. Then (I − T)⁻¹ exists, because 1 is not an eigenvalue of T, so I − T is invertible. Let u∗ = (I − T)⁻¹c; this u∗ is a stationary point of the iterator Ψ, i.e. u∗ = Tu∗ + c. For any initialization u0, let u_k = Ψ^k(u0). The error e_k = u∗ − u_k satisfies
Tek = (Tu∗ + c)− (Tuk + c) = u∗ − uk+1 = ek+1 ⇒ ek = T ke0 (16)
Since ρ(T ) < 1, we know T k → 0 as k → ∞ (LeVeque, 2007), which means the error ek → 0. Therefore, Ψ converges to u∗ from any u0.
Now suppose ρ(T) ≥ 1. Let λ1 be an eigenvalue of largest absolute value, so that ρ(T) = |λ1| ≥ 1, and let v1 be a corresponding eigenvector. We select the initialization u0 = u∗ + v1, so that e0 = v1. Because |λ1| ≥ 1, we have |λ1^k| ≥ 1, so T^k e0 = λ1^k v1 does not converge to 0 as k → ∞. However, under a different initialization û0 = u∗, we have ê0 = 0, so T^k ê0 = 0. Therefore the iteration cannot converge to the same fixed point from the two different initializations u0 and û0.
Proposition 1. If M is a full rank diagonal matrix, and u∗ ∈ R^(n²) satisfies Eq. (7), then u∗ satisfies Eq. (4).
Proof of Proposition 1. Let u∗ be a fixed point of Eq. (7) then
Gu∗ + (I −G)u∗ = G(M−1Nu∗ +M−1f) + (I −G)b
This is equivalent to (I −G)u∗ = (I −G)b
G(u∗ −M−1Nu∗ −M−1f) = 0 (17)
The latter equation is equivalent to GM−1(Au∗ − f) = 0. If M is a full rank diagonal matrix, this implies G(Au∗ − f) = 0, which is GAu∗ = Gf . Therefore, u∗ satisfies Eq.(4).
Theorem 2. For fixed G, f, b, n, the spectral norm of ΦH(u;G, f, b, n) is a convex function of H , and the set of H such that the spectral norm of ΦH(u;G, f, b, n) < 1 is a convex open set.
Proof. As before, denote Ψ(u) = Tu+ c. Observe that
ΦH(u;G, f, b, n) = Tu+ c+GH(Tu+ c− u) = (T +GHT −GH)u+GHc+ c (18)
The spectral norm ‖·‖2 is convex with respect to its argument, and (T + GHT − GH) is linear in H. Thus, ‖T + GHT − GH‖2 is convex in H as well, and the set of H with ‖T + GHT − GH‖2 < 1 is convex because it is a sub-level set of the convex function ‖T + GHT − GH‖2. To prove that it is open, observe that ‖·‖2 is continuous, so H ↦ ‖T + GHT − GH‖2 is a continuous map from H to the spectral norm of ΦH. The set of H such that ‖T + GHT − GH‖2 < 1 is the preimage of (−ε, 1) under this map for any ε > 0. As (−ε, 1) is open, its preimage must be open.
Proposition 2. For fixed A,G, n and fixed H , if for some f0, b0, ΦH(u;G, f0, b0, n) is valid for the PDE problem (A,G, f0, b0, n), then for all f and b, the iterator ΦH(u;G, f, b, n) is valid for the PDE problem (A,G, f, b, n).
Proof. From Theorem 1 and Lemma 1, our iterator is valid if and only if ρ(T + GHT − GH) < 1. The update matrix T + GHT − GH depends only on A and G, and is independent of the constant c in Eq. (18). Thus, the validity of the iterator is independent of f and b: if the iterator is valid for some f0 and b0, then it is valid for any choice of f and b.
B PROOF OF CONVERGENCE OF JACOBI METHOD
In Section 2.4.1, we show that for Poisson equation, the update matrix T = G(I − A). We now formally prove that ρ(G(I −A)) < 1 for any G. For any matrix T , the spectral radius is bounded by the spectral norm: ρ(T ) ≤ ‖T‖2, and the equality holds if T is symmetric. Since (I − A) is a symmetric matrix, ρ(I − A) = ‖I − A‖2. It has been proven that ρ(I − A) < 1 (Frankel, 1950). Moreover, ‖G‖2 = 1. Finally, matrix norms are sub-multiplicative, so
ρ(T ) ≤ ‖G(I −A)‖2 ≤ ‖G‖2‖I −A‖2 < 1 (19)
ρ(T) < 1 is true for any G. Thus, the standard Jacobi method is valid for the Poisson equation under any geometry. | 1. What is the focus of the paper, and how does it bridge distinct bodies of literature?
2. What are the strengths of the paper, particularly in its connection to modern machine learning ideas?
3. What are the weaknesses of the paper regarding prior work on data-driven methods for improving PDE solvers?
4. How could the authors improve their discussion of related work in this area?
5. What motivation could be provided for the overall setup of equation (6)?
6. Why did the authors not impose symmetry conditions on the convolutions in H, such as invariance to flips of the kernel?
7. Can multiple candidate solutions exist for valid iterators converging to a valid solution? How would one construct a method to find all possible solutions?
8. Why randomize the value of k in (9), and wouldn't learning a different H depending on the computation budget used downstream be more effective?
9. Would learning a different H_i for each step i of the iterative solver be a good idea?
10. Could the authors provide more information on enforcing b by clamping values at the end of each iteration, and are there alternatives that guarantee u satisfies b always? | Review | Review
==Summary==
This paper is well-executed and interesting. It does a good job of bridging the gap between distinct bodies of literature, and is very in touch with modern ML ideas.
I like this paper and advocate that it is accepted. However, I expect that it would have higher impact if it appeared in the numerical PDE community. I encourage you to consider this conference paper to be an early version of a more comprehensive piece of work to be released to that community.
My main critique is that the paper needs to do a better job of discussing prior work on data-driven methods for improving PDE solvers.
==Major comments==
* You need to spend considerably more space discussing the related work on using ML to improve PDE solvers. Most readers will be unfamiliar with this. You should explain what they do and how they are qualitatively different than your approach.
* You do a good job 3.3 of motivating for what H is doing. However, you could do a better job of motivating the overall setup of (6). Is this a common formulation? If so, where else is it used?
* I’m surprised that you didn’t impose some sort of symmetry conditions on the convolutions in H, such as that they are invariant to flips of the kernel. This is true, for example, for the linearized Poisson operator.
==Minor comments==
* Valid iterators converge to a valid solution. However, can’t there be multiple candidate solutions? How would you construct a method that would be able to find all possible solutions?
* In (9), why do you randomize the value of k? Wouldn’t you want to learn a different H depending on what computation budget you knew you were going to use downstream when you deploy the solver?
* In future work it may make sense to learn a different H_i for each step i of the iterative solver.
* When introducing iterative solvers, you leave it as an afterthought that b will be enforced by clamping values at the end of each iteration. This seems like a pretty important design decision. Are there alternatives that guarantee that u satisfies b always, rather than updating u in such a way that it violates G and then clamping it back? Along these lines, it might be useful to pose (2) with additional terms in the linear system to reflect G. |
ICLR | Title
Learning Neural PDE Solvers with Convergence Guarantees
Abstract
Partial differential equations (PDEs) are widely used across the physical and computational sciences. Decades of research and engineering went into designing fast iterative solution methods. Existing solvers are general purpose, but may be sub-optimal for specific classes of problems. In contrast to existing hand-crafted solutions, we propose an approach to learn a fast iterative solver tailored to a specific domain. We achieve this goal by learning to modify the updates of an existing solver using a deep neural network. Crucially, our approach is proven to preserve strong correctness and convergence guarantees. After training on a single geometry, our model generalizes to a wide variety of geometries and boundary conditions, and achieves 2-3 times speedup compared to state-of-the-art solvers.
1 INTRODUCTION
Partial differential equations (PDEs) are ubiquitous tools for modeling physical phenomena, such as heat, electrostatics, and quantum mechanics. Traditionally, PDEs are solved with hand-crafted approaches that iteratively update and improve a candidate solution until convergence. Decades of research and engineering went into designing update rules with fast convergence properties.
The performance of existing solvers varies greatly across application domains, with no method uniformly dominating the others. Generic solvers are typically effective, but could be far from optimal for specific domains. In addition, high-performing update rules could be too complex to design by hand. In recent years, we have seen that for many classical problems, complex updates learned from data or experience can out-perform hand-crafted ones. For example, for Markov chain Monte Carlo, learned proposal distributions lead to orders of magnitude speedups compared to hand-designed ones (Song et al., 2017; Levy et al., 2017). Other domains that benefited significantly include learned optimizers (Andrychowicz et al., 2016) and learned data structures (Kraska et al., 2018). Our goal is to bring similar benefits to PDE solvers.
Hand-designed solvers are relatively simple to analyze and are guaranteed to be correct in a large class of problems. The main challenge is how to provide the same guarantees with a potentially much more complex learned solver. To achieve this goal, we build our learned iterator on top of an existing standard iterative solver to inherit its desirable properties. The iterative solver updates the solution at each step, and we learn a parameterized function to modify this update. This function class is chosen so that for any choice of parameters, the fixed point of the original iterator is preserved. This guarantees correctness, and training can be performed to enhance convergence speed. Because of this design, we only train on a single problem instance; our model correctly generalizes to a variety of different geometries and boundary conditions with no observable loss of performance. As a result, our approach provides: (i) theoretical guarantees of convergence to the correct stationary solution, (ii) faster convergence than existing solvers, and (iii) generalizes to geometries and boundary conditions very different from the ones seen at training time. This is in stark contrast with
existing deep learning approaches for PDE solving (Tang et al., 2017; Farimani et al., 2017) that are limited to specific geometries and boundary conditions, and offer no guarantee of correctness.
Our approach applies to any PDE with existing linear iterative solvers. As an example application, we solve the 2D Poisson equations. Our method achieves a 2-3× speedup in the number of multiply-add operations when compared to standard iterative solvers, even on domains that are significantly different from our training set. Moreover, compared with state-of-the-art solvers implemented in FEniCS (Logg et al., 2012), our method achieves faster performance in terms of wall clock CPU time. Our method is also simple as opposed to deeply optimized solvers such as our baseline in FEniCS (minimal residual method + algebraic multigrid preconditioner). Finally, since we utilize standard convolutional networks which can be easily parallelized on GPU, our approach leads to an additional 30× speedup when run on GPU.
2 BACKGROUND
In this section, we give a brief introduction of linear PDEs and iterative solvers. We refer readers to LeVeque (2007) for a thorough review.
2.1 LINEAR PDES
Linear PDE solvers find functions that satisfy a (possibly infinite) set of linear differential equations. More formally, let F = {u : R^k → R} be the space of candidate functions, and A : F → F be a linear operator; the goal is to find a function u ∈ F that satisfies a linear equation Au = f, where f is another function R^k → R given by our problem. Many PDEs fall into this framework. For example, heat diffusion satisfies ∇²u = f (Poisson equation), where ∇² = ∂²/∂x₁² + · · · + ∂²/∂x_k² is the linear Laplace operator; u maps spatial coordinates (e.g. in R³) into its temperature, and f maps spatial coordinates into the heat in/out flow. Solving this equation lets us know the stationary temperature given specified heat in/out flow.
Usually the equation Au = f does not uniquely determine u. For example, u = constant for any constant is a solution to the equation ∇2u = 0. To ensure a unique solution we provide additional equations, called “boundary conditions”. Several boundary conditions arise very naturally in physical problems. A very common one is the Dirichlet boundary condition, where we pick some subset G ⊂ Rk and fix the values of the function on G to some fixed value b,
u(x) = b(x), for all x ∈ G where the function b is usually clear from the underlying physical problem. As in previous literature, we refer to G as the geometry of the problem, and b as the boundary value. We refer to the pair (G, b) as the boundary condition. In this paper, we only consider linear PDEs and boundary conditions that have unique solutions.
2.2 FINITE DIFFERENCE METHOD
Most real-world PDEs do not admit an analytic solution and must be solved numerically. The first step is to discretize the solution space F from R^k → R into D^k → R, where D is a discrete subset of R. When the space is compact, it is discretized into an n×n×n · · · (k many) uniform Cartesian grid with mesh width h. Any function in F is approximated by its value on the n^k grid points. We denote the discretized function as a vector u in R^(n^k). In this paper, we focus on 2D problems (k = 2), but the strategy applies to any dimension.
We discretize all three terms in the equation Au = f and boundary condition (G, b). The PDE solution u is discretized such that ui,j = u(xi, yj) corresponds to the value of u at grid point (xi, yj). We can similarly discretize f and b. In linear PDEs, the linear operator A is a linear combination of partial derivative operators. For example, for the Poisson equation A = ∇² = Σᵢ ∂²/∂xᵢ².
Therefore we can first discretize each partial derivative, then linearly combine the discretized partial derivatives to obtain a discretized A. Finite difference is a method that approximates partial derivatives in a discretized space, and as mesh width h → 0, the approximation approaches the true derivative. For example, ∂²u/∂x² can be discretized in 2D as ∂²u/∂x² ≈ (1/h²)(ui−1,j − 2ui,j + ui+1,j), and the Laplace operator in 2D can be correspondingly approximated as:
∇²u = ∂²u/∂x² + ∂²u/∂y² ≈ (1/h²)(ui−1,j + ui+1,j + ui,j−1 + ui,j+1 − 4ui,j) (1)
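To make the discretization concrete, below is a minimal NumPy sketch (ours, not part of the paper) of the 5-point stencil in Eq. (1); the grid size and mesh width are illustrative placeholders.

```python
import numpy as np

def apply_laplacian(u, h):
    """Apply the 5-point finite-difference Laplacian of Eq. (1) on the
    interior points of a 2D grid u; boundary rows/columns are left at zero."""
    lap = np.zeros_like(u)
    lap[1:-1, 1:-1] = (u[:-2, 1:-1] + u[2:, 1:-1]
                       + u[1:-1, :-2] + u[1:-1, 2:]
                       - 4.0 * u[1:-1, 1:-1]) / h ** 2
    return lap

n, h = 8, 1.0 / 7              # illustrative grid size and mesh width
u = np.random.randn(n, n)      # a discretized candidate function
lap_u = apply_laplacian(u, h)  # discrete approximation of the Laplacian of u
```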
After discretization, we can rewrite Au = f as a linear matrix equation
Au = f (2)
where u, f ∈ R^(n²), and A is a matrix in R^(n²×n²) (these are n²-dimensional because we focus on 2D problems). In many PDEs such as the Poisson and Helmholtz equation, A is sparse, banded, and symmetric.
2.3 BOUNDARY CONDITION
We also need to include the boundary condition u(x) = b(x) for all x ∈ G. If a discretized point (xi, yj) belongs to G, we need to fix the value of ui,j to bi,j . To achieve this, we first define e ∈ {0, 1}n2 to be a vector of 0’s and 1’s, in which 0 indicates that the corresponding point belongs to G. Then, we define a “reset” matrix G = diag(e), a diagonal matrix Rn2 → Rn2 such that
(Gu)i,j = ui,j if (xi, yj) ∉ G, and (Gu)i,j = 0 if (xi, yj) ∈ G (3)
Intuitively G ”masks” every point in G to 0. Similarly, I − G can mask every point not in G to 0. Note that the boundary values are fixed and do not need to satisfy Au = f . Thus, the solution u to the PDE under geometry G should satisfy:
G(Au) = Gf
(I −G)u = (I −G)b (4)
The first equation ensures that the interior points (points not in G) satisfy Au = f , and the second ensures that the boundary condition is satisfied.
To summarize, (A,G,f, b, n) is our PDE problem, and we first discretize the problem on an n× n grid to obtain (A,G, f, b, n). Our objective is to obtain a solution u that satisfies Eq. (4), i.e. Au = f for the interior points and boundary condition ui,j = bi,j , ∀(xi, yj) ∈ G.
2.4 ITERATIVE SOLVERS
A linear iterative solver is defined as a function that inputs the current proposed solution u ∈ R^(n²) and outputs an updated solution u′. Formally it is a function Ψ : R^(n²) → R^(n²) that can be expressed as u′ = Ψ(u) = Tu + c (5), where T is a constant update matrix and c is a constant vector. For each iterator Ψ there may be special vectors u∗ ∈ R^(n²) that satisfy u∗ = Ψ(u∗). These vectors are called fixed points.
The iterative solver Ψ should map any initial u0 ∈ R^(n²) to a correct solution of the PDE problem. This is formalized in the following definition. Definition 1 (Valid Iterator). An iterator Ψ is valid w.r.t. a PDE problem (A,G, f, b, n) if it satisfies:
a) Convergence: There is a unique fixed point u∗ such that Ψ converges to u∗ from any initialization: ∀u0 ∈ R^(n²), lim_{k→∞} Ψ^k(u0) = u∗.
b) Fixed Point: The fixed point u∗ is the solution to the linear system Au = f under boundary condition (G, b).
Convergence: Condition (a) in Definition 1 is satisfied if the matrix T is convergent, i.e. T k → 0 as k → ∞. It has been proven that T is convergent if and only if the spectral radius ρ(T ) < 1 (Olver, 2008):
Theorem 1. (Olver, 2008, Prop 7.25) For a linear iterator Ψ(u) = Tu + c, Ψ converges to a unique stable fixed point from any initialization if and only if the spectral radius ρ(T ) < 1.
Proof. See Appendix A.
It is important to note that Condition (a) only depends on T and not the constant c.
Fixed Point: Condition (b) in Definition 1 contains two requirements: satisfy Au = f , and the boundary condition (G, b). To satisfy Au = f, a standard approach is to design Ψ by matrix splitting: split the matrix A into A = M − N ; rewrite Au = f as Mu = Nu + f (LeVeque, 2007). This naturally suggests the iterative update
u′ = M−1Nu+M−1f (6)
Because Eq. (6) is a rewrite of Au = f , stationary points u∗ of Eq. (6) satisfy Au∗ = f . Clearly, the choices of M and N are arbitrary but crucial. From Theorem 1, we must choose M such that the update converges. In addition, M−1 must be easy to compute (e.g., diagonal).
Finally, we also need to satisfy the boundary condition (I − G)u = (I − G)b in Eq. (4). After each update in Eq. (6), the boundary condition could be violated. We use the “reset” operator defined in Eq. (3) to “reset” the values of ui,j to bi,j by Gu + (I − G)b. The final update rule becomes
u′ = G(M−1Nu+M−1f) + (I −G)b (7)
Despite the added complexity, it is still a linear update rule in the form of u′ = Tu + c in Eq. (5): we have T = GM−1N and c = GM−1f + (I − G)b. As long as M is a full rank diagonal matrix, fixed points of this equation satisfy Eq. (4). In other words, such a fixed point is a solution of the PDE problem (A,G, f, b, n).
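As an illustration of Eq. (7), the following is a hedged NumPy sketch of one matrix-splitting step with the boundary reset; the matrices A, M, N, G and the vectors f, b are small dense placeholders rather than the paper's discretized Poisson operators.

```python
import numpy as np

def splitting_step(u, M_inv, N, G, f, b):
    """One iteration of Eq. (7): u' = G(M^-1 N u + M^-1 f) + (I - G) b."""
    interior = M_inv @ (N @ u + f)                      # plain splitting update, Eq. (6)
    return G @ interior + (np.eye(len(u)) - G) @ b      # reset the boundary values

# Tiny illustrative system (not the discretized Poisson operator).
rng = np.random.default_rng(0)
size = 6
A = np.eye(size) + 0.1 * rng.standard_normal((size, size))
M = np.diag(np.diag(A))                                 # full-rank diagonal split A = M - N
N = M - A
M_inv = np.linalg.inv(M)
G = np.diag([0.0, 1.0, 1.0, 1.0, 1.0, 0.0])             # 0 marks boundary points
f = rng.standard_normal(size)
b = rng.standard_normal(size)
u = np.zeros(size)
for _ in range(100):                                    # iterate the update rule
    u = splitting_step(u, M_inv, N, G, f, b)
```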
Proposition 1. If M is a full rank diagonal matrix, and u∗ ∈ R^(n²) satisfies Eq. (7), then u∗ satisfies Eq. (4).
2.4.1 JACOBI METHOD
A simple but effective way to choose M is the Jacobi method, which sets M = I (a full rank diagonal matrix, as required by Proposition 1). For Poisson equations, this update rule has the following form,
ûi,j = (1/4)(ui−1,j + ui+1,j + ui,j−1 + ui,j+1) + (h²/4) fi,j (8)
u′ = Gû+ (1−G)b (9)
For Poisson equations and any geometry G, the update matrix T = G(I − A) has spectral radius ρ(T ) < 1 (see Appendix B). In addition, by Proposition 1 any fixed point of the update rule Eq.(8,9) must satisfy Eq. (4). Both convergence and fixed point conditions from Definition 1 are satisfied: Jacobi iterator Eq.(8,9) is valid for any Poisson PDE problem.
In addition, each step of the Jacobi update can be implemented as a neural network layer, i.e., Eq. (8) can be efficiently implemented by convolving u with the 3×3 kernel [[0, 1/4, 0], [1/4, 0, 1/4], [0, 1/4, 0]] and adding h²f/4. The “reset” step in Eq. (9) can also be implemented as multiplying u with G and adding the boundary values (I − G)b.
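The following is a minimal NumPy/SciPy sketch (ours, not the paper's implementation) of one Jacobi step written as a convolution followed by the reset of Eq. (9); the geometry mask, boundary values, and source term are illustrative.

```python
import numpy as np
from scipy.signal import convolve2d

JACOBI_KERNEL = np.array([[0.0, 0.25, 0.0],
                          [0.25, 0.0, 0.25],
                          [0.0, 0.25, 0.0]])

def jacobi_conv_step(u, interior_mask, b, f, h):
    """Eq. (8): convolve u with the averaging kernel and add h^2 f / 4;
    Eq. (9): reset the boundary points (where interior_mask == 0) to b."""
    u_hat = convolve2d(u, JACOBI_KERNEL, mode="same") + (h ** 2 / 4.0) * f
    return interior_mask * u_hat + (1.0 - interior_mask) * b

n, h = 16, 1.0 / 15
interior_mask = np.zeros((n, n))
interior_mask[1:-1, 1:-1] = 1.0          # 1 on interior points, 0 on the boundary
b = np.random.rand(n, n)                 # boundary values (used only where the mask is 0)
f = np.zeros((n, n))                     # Laplace equation: f = 0
u = np.zeros((n, n))
for _ in range(500):
    u = jacobi_conv_step(u, interior_mask, b, f, h)
```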
2.4.2 MULTIGRID METHOD
The Jacobi method has very slow convergence rate (LeVeque, 2007). This is evident from the update rule, where the value at each grid point is only influenced by its immediate neighbors. To propagate information from one grid point to another, we need as many iterations as their distance on the grid. The key insight of the Multigrid method is to perform Jacobi updates on a downsampled (coarser) grid and then upsample the results. A common structure is the V-cycle (Briggs et al., 2000). In each
V-cycle, there are k downsampling layers followed by k upsampling layers, and multiple Jacobi updates are performed at each resolution. The downsampling and upsampling operations are also called restriction and prolongation, and are often implemented using weighted restriction and linear interpolation respectively. The advantage of the multigrid method is clear: on a downsampled grid (by a factor of 2) with mesh width 2h, information propagation is twice as fast, and each iteration requires only 1/4 operations compared to the original grid with mesh width h.
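For intuition, here is a hedged NumPy sketch of simple restriction and prolongation operators of the kind used inside a V-cycle; block averaging and replication are stand-ins for the weighted restriction and linear interpolation mentioned above.

```python
import numpy as np

def restrict(u):
    """Coarsen an n x n grid to n/2 x n/2 by averaging 2 x 2 blocks
    (a simple stand-in for weighted restriction)."""
    return 0.25 * (u[0::2, 0::2] + u[1::2, 0::2] + u[0::2, 1::2] + u[1::2, 1::2])

def prolong(u_coarse):
    """Bring a coarse grid back to the fine grid by replicating each value
    (a crude stand-in for linear interpolation)."""
    return np.kron(u_coarse, np.ones((2, 2)))

u_fine = np.random.randn(16, 16)
u_coarse = restrict(u_fine)      # smoothing iterations would be applied here
u_back = prolong(u_coarse)       # the coarse-grid correction is interpolated back
```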
3 LEARNING FAST AND PROVABLY CORRECT ITERATIVE PDE SOLVERS
A PDE problem consists of five components (A,G,f, b, n). One is often interested in solving the same PDE class A under varying f, discretization n, and boundary conditions (G, b). For example, solving the Poisson equation under different boundary conditions (e.g., corresponding to different mechanical systems governed by the same physics). In this paper, we fix A but vary G,f, b, n, and learn an iterator that solves a class of PDE problems governed by the same A. For a discretized PDE problem (A,G, f, b, n) and given a standard (hand designed) iterative solver Ψ, our goal is to improve upon Ψ and learn a solver Φ that has (1) correct fixed point and (2) fast convergence (on average) on the class of problems of interest. We will proceed to parameterize a family of Φ that satisfies (1) by design, and achieve (2) by optimization.
In practice, we can only train Φ on a small number of problems (A, fi, Gi, bi, ni). To be useful, Φ must deliver good performance on every choice of G, f, b, and different grid sizes n. We show, theoretically and empirically, that our iterator family has good generalization properties: even if we train on a single problem (A,G, f, b, n), the iterator performs well on very different choices of G, f, b, and grid size n. For example, we train our iterator on a 64× 64 square domain, and test on a 256× 256 L-shaped domain (see Figure 1).
3.1 FORMULATION
For a fixed PDE problem class A, let Ψ be a standard linear iterative solver known to be valid. We will use more formal notation Ψ(u;G, f, b, n) as Ψ is a function of u, but also depends onG, f, b, n. Our assumption is that for any choice of G, f, b, n (but fixed PDE classA), Ψ(u;G, f, b, n) is valid. We previously showed that Jacobi iterator Eq.(8,9) have this property for the Poisson PDE class.
We design our new family of iterators ΦH : R^(n²) → R^(n²) as
w = Ψ(u;G, f, b, n) − u,
ΦH(u;G, f, b, n) = Ψ(u;G, f, b, n) + GHw (10)
where H is a learned linear operator (it satisfies H0 = 0). The term GHw can be interpreted as a correction term to Ψ(u;G, f, b, n). When there is no confusion, we neglect the dependence on G, f, b, n and denote as Ψ(u) and ΦH(u).
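A minimal PyTorch sketch of the iterator in Eq. (10) is given below; psi_step (one step of the hand-designed solver), the geometry mask, and the single-channel convolution used for H are illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn as nn

class LearnedIterator(nn.Module):
    """Phi_H(u) = Psi(u) + G * H(Psi(u) - u), Eq. (10), with H a linear 3x3 conv."""

    def __init__(self, psi_step):
        super().__init__()
        self.psi_step = psi_step    # one step of the hand-designed solver Psi
        self.H = nn.Conv2d(1, 1, kernel_size=3, padding=1, bias=False)

    def forward(self, u, interior_mask):
        psi_u = self.psi_step(u)                    # standard update Psi(u)
        w = psi_u - u                               # the update w made by Psi
        return psi_u + interior_mask * self.H(w)    # add the learned correction G H w
```

Here u is assumed to be a (batch, 1, n, n) tensor and interior_mask a broadcastable 0/1 mask playing the role of G.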
ΦH should have similar computation complexity as Ψ. Therefore, we chooseH to be a convolutional operator, which can be parameterized by a deep linear convolutional network. We will discuss the parameterization of H in detail in Section 3.4; we first prove some parameterization independent properties.
The correct PDE solution is a fixed point of ΦH by the following lemma: Lemma 1. For any PDE problem (A,G, f, b, n) and choice of H , if u∗ is a fixed point of Ψ, it is a fixed point of ΦH in Eq. (10).
Proof. Based on the iterative rule in Eq. (10), if u∗ satisfies Ψ(u∗) = u∗ then w = Ψ(u∗)−u∗ = 0. Therefore, ΦH(u∗) = Ψ(u∗) +GH0 = u∗.
Moreover, the space of ΦH subsumes the standard solver Ψ. If H = 0, then ΦH = Ψ. Furthermore, denote Ψ(u) = Tu + c; if H = T, then since GT = T (see Eq. (7)),
ΦH(u) = Ψ(u) + GT(Ψ(u) − u) = TΨ(u) + c = Ψ²(u) (11)
which is equal to two iterations of Ψ. Computing Ψ requires one convolution T, while computing ΦH requires two convolutions: T and H. Therefore, if we choose H = T, then ΦH computes two iterations of Ψ with two convolutions: it is at least as efficient as the standard solver Ψ.
3.2 TRAINING AND GENERALIZATION
We train our iterator ΦH(u;G, f, b, n) to converge quickly to the ground truth solution on a set D = {(Gl, fl, bl, nl)}Ml=1 of problem instances. For each instance, the ground truth solution u∗ is obtained from the existing solver Ψ. The learning objective is then
min_H ∑_{(Gl,fl,bl,nl)∈D} E_{u0∼N(0,1)} ‖Φ_H^k(u0; Gl, fl, bl, nl) − u∗‖_2^2 (12)
Intuitively, we look for a matrix H such that the corresponding iterator ΦH will get us as close as possible to the solution in k steps, starting from a random initialization u0 sampled from a white Gaussian. k in our experiments is uniformly chosen from [1, 20], similar to the procedure in (Song et al., 2017). Smaller k is easier to learn with less steps to back-propagate through, while larger k better approximates our test-time setting: we care about the final approximation accuracy after a given number of iteration steps. Combining smaller and larger k performs best in practice.
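A hedged sketch of one stochastic training step for objective (12) is shown below (PyTorch); the model is assumed to be a learned iterator such as the one sketched after Eq. (10), the random choice of k and the squared loss follow the text, and the remaining details (optimizer, loss reduction) are placeholders.

```python
import random
import torch

def train_step(model, optimizer, interior_mask, u_star, k_max=20):
    """One stochastic step for objective (12): unroll Phi_H for a random
    number of iterations k from a Gaussian start and regress onto u*."""
    k = random.randint(1, k_max)
    u = torch.randn_like(u_star)        # u0 ~ N(0, I)
    for _ in range(k):
        u = model(u, interior_mask)     # apply the learned iterator Phi_H
    loss = ((u - u_star) ** 2).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Typical usage (model as in the sketch after Eq. (10)):
# optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
```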
We show in the following theorem that there is a convex open set of H that the learning algorithm can explore. To simplify the statement of the theorem, for any linear iterator Φ(u) = Tu + c we will refer to the spectral radius (norm) of Φ as the spectral radius (norm) of T . Theorem 2. For fixed G, f, b, n, the spectral norm of ΦH(u;G, f, b, n) is a convex function of H , and the set of H such that the spectral norm of ΦH(u;G, f, b, n) < 1 is a convex open set.
Proof. See Appendix A.
Therefore, to find an iterator with small spectral norm, the learning algorithm only has to explore a convex open set. Note that Theorem 2 holds for spectral norm, whereas validity requires small spectral radius in Theorem 1. Nonetheless, several important PDE problems (Poisson, Helmholtz, etc) are symmetric, so it is natural to use a symmetric iterator, which means that spectral norm is equal to spectral radius. In our experiments, we do not explicitly enforce symmetry, but we observe that the optimization finds symmetric iterators automatically.
For training, we use a single grid size n, a single geometryG, f = 0, and a restricted set of boundary conditions b. The geometry we use is a square domain shown in Figure 1a. Although we train on a single domain, the model has surprising generalization properties, which we show in the following: Proposition 2. For fixed A,G, n and fixed H , if for some f0, b0, ΦH(u;G, f0, b0, n) is valid for the PDE problem (A,G, f0, b0, n), then for all f and b, the iterator ΦH(u;G, f, b, n) is valid for the PDE problem (A,G, f, b, n).
Proof. See Appendix A.
The proposition states that we freely generalize to different f and b. There is no guarantee that we can generalize to different G and n. Generalization to different G and n has to be empirically verified: in our experiments, our learned iterator converges to the correct solution for a variety of grid sizes n and geometries G, even though it was only trained on one grid size and geometry.
Even when generalization fails, there is no risk of obtaining incorrect results. The iterator will simply fail to converge. This is because, according to Lemma 1, the fixed points of our new iterator are the same as the fixed points of the hand-designed iterator Ψ. Therefore, if our iterator is convergent, it is valid.
3.3 INTERPRETATION OF H
What is H trying to approximate? In this section we show that we are training our linear function GH to approximate T (I − T )−1: if it were able to approximate T (I − T )−1 perfectly, our iterator ΦH will converge to the correct solution in a single iteration.
Let the original update rule be Ψ(u) = Tu + c, and the unknown ground truth solution be u∗ satisfying u∗ = Tu∗ + c. Let r = u∗ − u be the current error, and e = u∗ −Ψ(u) be the new error after applying one step of Ψ. They are related by
e = u∗ −Ψ(u) = u∗ − (Tu+ c) = T (u∗ − u) = Tr (13)
In addition, let w = Ψ(u)− u be the update Ψ makes. This is related to the current error r by
w = Ψ(u)− u = Tu+ c− u+ (u∗ − Tu∗ − c) = T (u− u∗) + (u∗ − u) = (I − T )r (14)
From Eq. (10) we can observe that the linear operator GH takes as input Ψ’s update w, and tries to approximate the error e: GHw ≈ e. If the approximation were perfect: GHw = e, the iterator ΦH would converge in a single iteration. Therefore, we are trying to find some linear operator R, such that Rw = e. In fact, if we combine Eq. (13) and Eq. (14), we can observe that T (I − T )−1 is (uniquely) the linear operator we are looking for
T (I − T )−1w = e (15)
where (I − T )−1 exists because ρ(T ) < 1, so 1 is not an eigenvalue of T and I − T is invertible. Therefore, we would like our linear function GH to approximate T (I − T )−1. Note that (I − T )−1 is a dense matrix in general, meaning that it is impossible to exactly achieve GH = T (I − T )−1 with a convolutional operator H. However, the better GH is able to approximate T (I − T )−1, the faster our iterator converges to the solution u∗.
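The identity in Eq. (15) can be checked numerically; the following NumPy sketch (with small random matrices, purely illustrative) verifies that using the ideal dense correction GH = T(I − T)⁻¹ makes the iterator reach the fixed point in one step.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
T = 0.3 * rng.standard_normal((n, n))        # illustrative update matrix
c = rng.standard_normal(n)
u_star = np.linalg.solve(np.eye(n) - T, c)   # fixed point of Psi(u) = Tu + c

R = T @ np.linalg.inv(np.eye(n) - T)         # the ideal dense correction T(I - T)^-1
u0 = rng.standard_normal(n)
psi_u0 = T @ u0 + c                          # one step of Psi
w = psi_u0 - u0                              # Psi's update, Eq. (14)
u1 = psi_u0 + R @ w                          # corrected iterate with GH = T(I - T)^-1
assert np.allclose(u1, u_star)               # convergence in a single step, Eq. (15)
```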
3.4 LINEAR DEEP NETWORKS
In our iterator design, H is a linear function parameterized by a linear deep network without nonlinearity or bias terms. Even though our objective in Eq. (12) is a non-linear function of the parameters of the deep network, this is not an issue in practice. In particular, Arora et al. (2018) observes that when modeling linear functions, deep networks can be faster to optimize with gradient descent compared to linear ones, despite non-convexity.
Even though a linear deep network can only represent a linear function, it has several advantages. On an n×n grid, each convolution layer only requires O(n2) computation and have a constant number of parameters, while a general linear function requires O(n4) computation and have O(n4) parameters. Stacking d convolution layers allows us to parameterize complex linear functions with large receptive fields, while only requiring O(dn2) computation and O(d) parameters. We experiment on two types of linear deep networks:
Conv model. We model H as a network with 3 × 3 convolutional layers without non-linearity or bias. We will refer to a model with k layers as “Convk”, e.g. Conv3 has 3 convolutional layers.
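A possible PyTorch parameterization of such a Conv-k model for H, assuming single-channel grids, is sketched below; the exact channel counts and initialization used in the paper are not specified here and are assumptions.

```python
import torch.nn as nn

def make_conv_H(depth):
    """A Conv-k parameterization of H: a stack of 3x3 convolutions with
    no bias and no nonlinearity, so the overall map stays linear."""
    layers = [nn.Conv2d(1, 1, kernel_size=3, padding=1, bias=False)
              for _ in range(depth)]
    return nn.Sequential(*layers)

H = make_conv_H(3)   # e.g. a "Conv3" model
```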
U-Net model. The Conv models suffer from the same problem as Jacobi: the receptive field grows only by 1 for each additional layer. To resolve this problem, we design the deep network counterpart of the Multigrid method. Instead of manually designing the sub-sampling / super-sampling functions, we use a U-Net architecture (Ronneberger et al., 2015) to learn them from data. Because each layer reduces the grid size by half, and the i-th layer of the U-Net only operates on (2^(−i) n)-sized grids, the total computation is only increased by a factor of 1 + 1/4 + 1/16 + · · · < 4/3 compared to a two-layer convolution. The minimal overhead provides a very large improvement of convergence speed in our experiments. We will refer to Multigrid and U-Net models with k sub-sampling layers as Multigridk and U-Netk, e.g. U-Net2 is a model with 2 sub-sampling layers.
4 EXPERIMENTS
4.1 SETTING
We evaluate our method on the 2D Poisson equation with Dirichlet boundary conditions, ∇2u = f. There exist several iterative solvers for the Poisson equation, including Jacobi, Gauss-Seidel, conjugate-gradient, and multigrid methods. We select the Jacobi method as our standard solver Ψ.
To reemphasize, our goal is to train a model on simple domains where the ground truth solutions can be easily obtained, and then evaluate its performance on different geometries and boundary conditions. Therefore, for training, we select the simplest Laplace equation, ∇2u = 0, on a square domain with boundary conditions such that each side is a random fixed value. Figure 1a shows an
example of our training domain and its ground truth solution. This setting is also used in Farimani et al. (2017) and Sharma et al. (2018).
For testing, we use larger grid sizes than training. For example, we test on 256×256 grid for a model trained on 64 × 64 grids. Moreover, we designed challenging geometries to test the generalization of our models. We test generalization on 4 different settings: (i) same geometry but larger grid, (ii) L-shape geometry, (iii) Cylinders geometry, and (iv) Poisson equation in same geometry, but f 6= 0. The two geometries are designed because the models were trained on square domains and have never seen sharp or curved boundaries. Examples of the 4 settings are shown in Figure 1.
4.2 EVALUATION
As discussed in Section 2.4, the convergence rate of any linear iterator can be determined from the spectral radius ρ(T ), which provides guarantees on convergence and convergence rate. However, a fair comparison should also consider the computation cost of H . Thus, we evaluate the convergence rate by calculating the computation cost required for the error to drop below a certain threshold.
On GPU, the Jacobi iterator and our model can both be efficiently implemented as convolutional layers. Thus, we measure the computation cost by the number of convolutional layers. On CPU, each Jacobi iteration u′i,j = 1 4 (ui−1,j + ui+1,j + ui,j−1 + ui,j+1) has 4 multiply-add operations, while a 3 × 3 convolutional kernel requires 9 operations, so we measure the computation cost by the number of multiply-add operations. This metric is biased in favor of Jacobi because there is little practical reason to implement convolutions on CPU. Nonetheless, we report both metrics in our experiments.
4.3 CONV MODEL
Table 1 shows results of the Conv model. The model is trained on a 16 × 16 square domain, and tested on 64 × 64. For all settings, our models converge to the correct solution, and require less computation than Jacobi. The best model, Conv3, is ∼ 5× faster than Jacobi in terms of layers, and ∼ 2.5× faster in terms of multiply-add operations. As discussed in Section 3.2, if our iterator converges for a geometry, then it is guaranteed to converge to the correct solution for any f and boundary values b. The experiment results show that our model not only converges but also converges faster than the standard solver, even though it is only trained on a smaller square domain.
4.4 U-NET MODEL
For the U-Net models, we compare them against Multigrid models with the same number of subsampling and smoothing layers. Therefore, our models have the same number of convolutional layers, and roughly 9/4 times the number of operations compared to Multigrid. The model is trained on a 64 × 64 square domain, and tested on 256 × 256. The bottom part of Table 1 shows the results of the U-Net model. Similar to the results of the Conv models, our models outperform Multigrid in all settings. Note that U-Net2 achieves a lower computation cost relative to Multigrid2 than U-Net3 does relative to Multigrid3. This is because Multigrid2 is a relatively weaker baseline. U-Net3 still converges faster than U-Net2.
4.5 COMPARISON WITH FENICS
The FEniCS package (Logg et al., 2012) provides a collection of tools with high-level Python and C++ interfaces to solve differential equations. The open-source project is developed and maintained by a global community of scientists and software developers. Its extensive optimization over the years, including the support for parallel computation, has led to its widespread adaption in industry and academia (Alnæs et al., 2015).
We measure the wall clock time of the FEniCS model and our model, run on the same hardware. The FEniCS model is set to be the minimal residual method with algebraic multigrid preconditioner, which we measure to be the fastest compared to other methods such as Jacobi or Incomplete LU factorization preconditioner. We ignore the time it takes to set up geometry and boundary conditions, and only consider the time the solver takes to solve the problem. We set the error threshold to be 1 percent of the initial error. For the square domain, we use a quadrilateral mesh. For the L-shape and cylinder domains, however, we let FEniCS generate the mesh automatically, while ensuring the number of mesh points to be similar.
Figure 2 shows that our model is comparable or faster than FEniCS in wall clock time. These experiments are all done on CPU. Our model efficiently runs on GPU, while the fast but complex methods in FEniCS do not have efficient GPU implementations available. On GPU, we measure an additional 30× speedup (on Tesla K80 GPU, compared with a 64-core CPU).
5 RELATED WORK
Recently, there have been several works on applying deep learning to solve the Poisson equation. However, to the best of our knowledge, previous works used deep networks to directly generate the solution; they have no correctness guarantees and are not generalizable to arbitrary grid sizes and boundary conditions. Most related to our work are (Farimani et al., 2017) and (Sharma et al., 2018), which learn deep networks to output the solution of the 2D Laplace equation (a special case where f = 0). (Farimani et al., 2017) trained a U-Net model that takes in the boundary condition as a 2D image and outputs the solution. The model is trained by L1 loss to the ground truth solution and an adversarial discriminator loss. (Sharma et al., 2018) also trained a U-net model but used a weakly-supervised loss. There are other related works that solved the Poisson equation in concrete physical problems. (Tang et al., 2017) solved for electric potential in 2D/3D space; (Tompson et al., 2017) solved for pressure fields for fluid simulation; (Zhang et al., 2018) solved particle simulation of a PN Junction.
There are other works that solve other types of PDEs. For example, many studies aimed to use deep learning to accelerate and approximate fluid dynamics, governed by the Euler equation or the Navier-Stokes equations (Guo et al., 2016; Yang et al., 2016; Chu & Thuerey, 2017; Kutz, 2017). (Eismann et al., 2018) use Bayesian optimization to design shapes with reduced drag coefficients in laminar fluid flow. Other applications include solving the Schrodinger equation (Mills et al., 2017), turbulence modeling (Singh et al., 2017), and the American options and Black Scholes PDE (Sirignano & Spiliopoulos, 2018). A lot of these PDEs are nonlinear and may not have a standard linear iterative solver, which is a limitation to our current method since our model must be built on top of an existing linear solver to ensure correctness. We consider the extension to different PDEs as future work.
6 CONCLUSION
We presented a method to learn an iterative solver for PDEs that improves on an existing standard solver. The correct solution is theoretically guaranteed to be the fixed point of our iterator. We show that our model, trained on simple domains, can generalize to different grid sizes, geometries and boundary conditions. It converges correctly and achieves significant speedups compared to standard solvers, including highly optimized ones implemented in FEniCS.
A PROOFS
Theorem 1. For a linear iterator Ψ(u) = Tu+ c, Ψ converges to a unique stable fixed point from any initialization if and only if the spectral radius ρ(T ) < 1.
Proof. Suppose ρ(T ) < 1; then (I − T )−1 must exist because 1 is not an eigenvalue of T. Let u∗ = (I − T )−1c; this u∗ is a stationary point of the iterator Ψ, i.e. u∗ = Tu∗ + c. For any initialization u0, let uk = Ψ^k(u0). The error ek = u∗ − uk satisfies
Tek = (Tu∗ + c)− (Tuk + c) = u∗ − uk+1 = ek+1 ⇒ ek = T ke0 (16)
Since ρ(T ) < 1, we know T k → 0 as k → ∞ (LeVeque, 2007), which means the error ek → 0. Therefore, Ψ converges to u∗ from any u0.
Now suppose ρ(T ) ≥ 1. Let λ1 be the largest absolute eigenvalue where ρ(T ) = |λ1| ≥ 1, and v1 be its corresponding eigenvector. We select initialization u0 = u∗ + v1, so e0 = v1. Because |λ1| ≥ 1, we have |λ1^k| ≥ 1, so T^k e0 = λ1^k v1 does not converge to 0 as k → ∞. However, we know that under a different initialization û0 = u∗, we have ê0 = 0, so T^k ê0 = 0. Therefore the iteration cannot converge to the same fixed point from different initializations u0 and û0.
Proposition 1 If M is a full rank diagonal matrix, and u∗ ∈ R^(n²) satisfies Eq. (7), then u∗ satisfies Eq. (4).
Proof of Proposition 1. Let u∗ be a fixed point of Eq. (7) then
Gu∗ + (I −G)u∗ = G(M−1Nu∗ +M−1f) + (I −G)b
This is equivalent to (I −G)u∗ = (I −G)b
G(u∗ −M−1Nu∗ −M−1f) = 0 (17)
The latter equation is equivalent to GM−1(Au∗ − f) = 0. If M is a full rank diagonal matrix, this implies G(Au∗ − f) = 0, which is GAu∗ = Gf . Therefore, u∗ satisfies Eq.(4).
Theorem 2. For fixed G, f, b, n, the spectral norm of ΦH(u;G, f, b, n) is a convex function of H , and the set of H such that the spectral norm of ΦH(u;G, f, b, n) < 1 is a convex open set.
Proof. As before, denote Ψ(u) = Tu+ c. Observe that
ΦH(u;G, f, b, n) = Tu+ c+GH(Tu+ c− u) = (T +GHT −GH)u+GHc+ c (18)
The spectral norm ‖·‖2 is convex with respect to its argument, and (T + GHT − GH) is linear in H. Thus, ‖T + GHT − GH‖2 is convex in H as well. Under the condition that ‖T + GHT − GH‖2 < 1, the set of H must be convex because it is a sub-level set of the convex function ‖T + GHT − GH‖2. To prove that it is open, observe that ‖·‖2 is a continuous function, so H 7→ ‖T + GHT − GH‖2, the spectral norm of ΦH, is a continuous map. The set of H such that ‖T + GHT − GH‖2 < 1 is the preimage of (−ε, 1) under this map for any ε > 0. As (−ε, 1) is open, its preimage must be open.
Proposition 2. For fixed A,G, n and fixed H , if for some f0, b0, ΦH(u;G, f0, b0, n) is valid for the PDE problem (A,G, f0, b0, n), then for all f and b, the iterator ΦH(u;G, f, b, n) is valid for the PDE problem (A,G, f, b, n).
Proof. From Theorem 1 and Lemma 1, our iterator is valid if and only if ρ(T + GHT − GH) < 1. The update matrix T + GHT − GH depends only on A and G, and is independent of the constant c in Eq. (18). Thus, the validity of the iterator is independent of f and b: if the iterator is valid for some f0 and b0, then it is valid for any choice of f and b.
B PROOF OF CONVERGENCE OF JACOBI METHOD
In Section 2.4.1, we show that for Poisson equation, the update matrix T = G(I − A). We now formally prove that ρ(G(I −A)) < 1 for any G. For any matrix T , the spectral radius is bounded by the spectral norm: ρ(T ) ≤ ‖T‖2, and the equality holds if T is symmetric. Since (I − A) is a symmetric matrix, ρ(I − A) = ‖I − A‖2. It has been proven that ρ(I − A) < 1 (Frankel, 1950). Moreover, ‖G‖2 = 1. Finally, matrix norms are sub-multiplicative, so
ρ(T ) ≤ ‖G(I −A)‖2 ≤ ‖G‖2‖I −A‖2 < 1 (19)
ρ(T ) < 1 is true for any G. Thus, the standard Jacobi method is valid for the Poisson equation under any geometry. | 1. What is the focus and contribution of the paper on improving problem-tailored PDE solvers?
2. What are the strengths of the proposed method, particularly in terms of its simplicity and performance improvement?
3. What are the weaknesses of the method regarding its reliance on linearity and limitation to one- or two-dimensional problems?
4. How does the reviewer assess the clarity and quality of the paper's content?
5. Are there any comparisons with other deep learning approaches to PDE solving in the experiments?
6. Can the linear iterator T be chosen optimally for a given PDE and boundary conditions? If so, how?
7. Why is the deep network parameterization needed, and how does it relate to the rank of H?
8. Is there any connection between the proposed accelerated update and second-order coordinate descent methods like Newton or quasi-Newton? | Review | Review
Summary:
The authors propose a method to learn and improve problem-tailored PDE solvers from existing ones. The linear updates of the target solver, specified by the problem's geometry and boundary conditions, are computed from the updates of a well-known solver through an optimized linear map. The obtained solver is guaranteed to converge to the correct solution and
achieves a considerable speed-up compared to solvers obtained from alternative state-of-the-art methods.
Strengths:
Solving PDEs is an important and hard problem and the proposed method seems to consistently outperform the state of the art. I ve liked the idea of learning a speed-up operator to improve the performance of a standard solver and adapt it to new boundary conditions or problem geometries. The approach is simple enough to allow a straightforward proof of correctness.
Weaknesses:
The method seems to rely strongly on the linearity of the solver and its deformation (to guarantee the correctness of the solution). The operator H is a matrix of finite dimensions and it is not completely clear to me what is the role of the multi-layer parameterization. Based on a grid approach, the idea applies only to one- or two-dimensional problems.
Questions:
- in the introduction, what does it mean that generic solvers are effective 'but could be far from optimal'? Does this refer to the convergence speed or to the correctness of the solution?
- other deep learning approaches to PDE solving are mentioned in the introduction. Is the proposed method compared to them somewhere in the experiments?
- given a PDE and some boundary conditions, is there any known method to choose the linear iterator T optimally? For example, since u* is the solution of a linear system, could one choose the updates to be the gradient descent updates of a least-squares objective such as || A u - f||^2?
- why is the deep network parameterization needed? Since no nonlinearities are present, isn t this equivalent to fix the rank of H?
- given the ` interpretation of H' sketched in Section 3.3, is there any relationship between the proposed accelerated update and the update of second-order coordinated descent methods (like Newton or quasi-Newton)? |
ICLR | Title
Generalization and Estimation Error Bounds for Model-based Neural Networks
Abstract
Model-based neural networks provide unparalleled performance for various tasks, such as sparse coding and compressed sensing problems. Due to the strong connection with the sensing model, these networks are interpretable and inherit prior structure of the problem. In practice, model-based neural networks exhibit higher generalization capability compared to ReLU neural networks. However, this phenomenon was not addressed theoretically. Here, we leverage complexity measures including the global and local Rademacher complexities, in order to provide upper bounds on the generalization and estimation errors of model-based networks. We show that the generalization abilities of model-based networks for sparse recovery outperform those of regular ReLU networks, and derive practical design rules that allow to construct model-based networks with guaranteed high generalization. We demonstrate through a series of experiments that our theoretical insights shed light on a few behaviours experienced in practice, including the fact that ISTA and ADMM networks exhibit higher generalization abilities (especially for small number of training samples), compared to ReLU networks.
1 INTRODUCTION
Model-based neural networks provide unprecedented performance gains for solving sparse coding problems, such as the learned iterative shrinkage and thresholding algorithm (ISTA) (Gregor & LeCun, 2010) and learned alternating direction method of multipliers (ADMM) (Boyd et al., 2011). In practice, these approaches outperform feed-forward neural networks with ReLU nonlinearities.
These neural networks are usually obtained from algorithm unrolling (or unfolding) techniques, which were first proposed by Gregor and LeCun (Gregor & LeCun, 2010), to connect iterative algorithms to neural network architectures. The trained networks can potentially shed light on the problem being solved. For ISTA networks, each layer represents an iteration of a gradient-descent procedure. As a result, the output of each layer is a valid reconstruction of the target vector, and we expect the reconstructions to improve with the network’s depth. These networks capture original problem structure, which translates in practice to a lower number of required training data (Monga et al., 2021). Moreover, the generalization abilities of model-based networks tend to improve over regular feed-forward neural networks (Behboodi et al., 2020; Schnoor et al., 2021).
Understanding the generalization of deep learning algorithms has become an important open question. The generalization error of machine learning models measures the ability of a class of estimators to generalize from training to unseen samples, and avoid overfitting the training (Jakubovitz et al., 2019). Surprisingly, various deep neural networks exhibit high generalization abilities, even for increasing networks’ complexities (Neyshabur et al., 2015b; Belkin et al., 2019). Classical machine learning
∗This project has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research, the innovation programme (grant agreement No. 101000967), and the Israel Science Foundation under Grant 536/22. Y. C. Eldar and M. R. D. Rodrigues are supported by The Weizmann-UK Making Connections Programme (Ref. 129589). M. R. D. Rodrigues is also supported by the Alan Turing Institute. The authors wish to thank Dr. Gholamali Aminian from the Alan Turing Institute, UK, for his contribution to the proofs’ correctness.
measures such as the Vapnik-Chervonenkis (VC) dimension (Vapnik & Chervonenkis, 1991) and Rademacher complexity (RC) (Bartlett & Mendelson, 2002) predict an increasing generalization error (GE) as the models’ complexity grows, and fail to explain the improved generalization observed in experiments. More advanced measures, which consider the training process and result in tighter bounds on the estimation error (EE), were proposed to investigate this gap, such as the local Rademacher complexity (LRC) (Bartlett et al., 2005). To date, the EE of model-based networks using these complexity measures has not been investigated to the best of our knowledge.
1.1 OUR CONTRIBUTIONS
In this work, we leverage existing complexity measures such as the RC and LRC, in order to bound the generalization and estimation errors of learned ISTA and learned ADMM networks.
• We provide new bounds on the GE of ISTA and ADMM networks, showing that the GE of model-based networks is lower than that of the common ReLU networks. The derivation of the theoretical guarantees combines existing proof techniques for computing the generalization error of multilayer networks with new methodology for bounding the RC of the soft-thresholding operator, that allows a better understanding of the generalization ability of model based networks.
• The obtained bounds translate to practical design rules for model-based networks which guarantee high generalization. In particular, we show that a nonincreasing GE as a function of the network’s depth is achievable, by limiting the weights’ norm in the network. This improves over existing bounds, which exhibit a logarithmic increase of the GE with depth (Schnoor et al., 2021). The GE bounds of the model-based networks suggest that under similar restrictions, learned ISTA networks generalize better than learned ADMM networks.
• We also exploit the LRC machinery to derive bounds on the EE of feed-forward networks, such as ReLU, ISTA, and ADMM networks. The EE bounds depend on the data distribution and training loss. We show that the model-based networks achieve lower EE bounds compared to ReLU networks.
• We focus on the differences between ISTA and ReLU networks, in term of performance and generalization. This is done through a series of experiments for sparse vector recovery problems. The experiments indicate that the generalization abilities of ISTA networks are controlled by the soft-threshold value. For a proper choice of parameters, ISTA achieves lower EE along with more accurate recovery. The dependency of the EE as a function of λ and the number of training samples can be explained by the derived EE bounds.
1.2 RELATED WORK
Understanding the GE and EE of general deep learning algorithms is an active area of research. A few approaches were proposed, which include considering networks of weights matrices with bounded norms (including spectral and L2,1 norms) (Bartlett et al., 2017; Sokolić et al., 2017), and analyzing the effect of multiple regularizations employed in deep learning, such as weight decay, early stopping, or drop-outs, on the generalization abilities (Neyshabur et al., 2015a; Gao & Zhou, 2016; Amjad et al., 2021). Additional works consider global properties of the networks, such as a bound on the product of all Frobenius norms of the weight matrices in the network (Golowich et al., 2018). However, these available bounds do not capture the GE behaviour as a function of network depth, where an increase in depth typically results in improved generalization. This also applies to the bounds on the GE of ReLU networks, detailed in Section 2.3.
Recently, a few works focused on bounding the GE specifically for deep iterative recovery algorithms (Behboodi et al., 2020; Schnoor et al., 2021). They focus on a broad class of unfolded networks for sparse recovery, and provide bounds which scale logarithmically with the number of layers (Schnoor et al., 2021). However, these bounds still do not capture the behaviours experienced in practice.
Much work has also focused on incorporating the networks’ training process into the bounds. The LRC framework due to Bartlett, Bousquet, and Mendelson (Bartlett et al., 2005) assumes that the training process results in a smaller class of estimation functions, such that the distance between the estimator in the class and the empirical risk minimizer (ERM) is bounded. An additional related framework is the effective dimensionality due to Zhang (Zhang, 2002). These frameworks result in
different bounds, which relate the EE to the distance between the estimators. These local complexity measures were not applied to model-based neural networks.
Throughout the paper we use boldface lowercase and uppercase letters to denote vectors and matrices respectively. The L1 and L2 norms of a vector x are written as ∥x∥1 and ∥x∥2 respectively, and the L∞ (which corresponds to the maximal L1 norm over the matrix’s rows) and spectral norms of a matrix X , are denoted by ∥X∥∞ and ∥X∥σ respectively. We denote the transpose operation by (·)T . For any function f and class of functions H, we define f ◦ H = {x 7→ f ◦ h(x) : h ∈ H} .
2 PRELIMINARIES
2.1 NETWORK ARCHITECTURE
We focus on model-based networks for sparse vector recovery, applicable to the linear inverse problem
y = Ax+ e (1)
where y ∈ Rny is the observation vector with ny entries, x ∈ Rnx is the target vector with nx entries, with nx > ny , A ∈ Rny×nx is the linear operator, and e ∈ Rny is additive noise. The target vectors are sparse with sparsity rate ρ, such that at most ⌊ρnx⌋ entries are nonzero. The inverse problem consists of recovering the target vector x, from the observation vector y.
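As a concrete illustration of the measurement model (1), the following NumPy sketch (ours, with illustrative dimensions, sparsity rate ρ, and noise level) generates a sparse target and its observation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_x, n_y, rho, sigma = 100, 40, 0.1, 0.01            # illustrative sizes and noise level
A = rng.standard_normal((n_y, n_x)) / np.sqrt(n_y)   # illustrative linear operator

x = np.zeros(n_x)
support = rng.choice(n_x, size=int(rho * n_x), replace=False)
x[support] = rng.standard_normal(support.size)       # sparse target with sparsity rate rho

e = sigma * rng.standard_normal(n_y)
y = A @ x + e                                        # observation, Eq. (1)
```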
Given that the target vector is assumed to be sparse, recovering x from y in (1) can be formulated as an optimization problem, such as least absolute shrinkage and selection operator (LASSO) (Tibshirani & Ryan, 2013), that can be solved with well-known iterative methods including ISTA and ADMM. To address more complex problems, such as an unknown linear mapping A, and to avoid having to fine tune parameters, these algorithms can be mapped into model-based neural networks using unfolding or unrolling techniques (Gregor & LeCun, 2010; Boyd et al., 2011; Monga et al., 2021; Yang et al., 2018). The network’s architecture imitates the original iterative method’s functionality and enables to learn the models’ parameters with respect to a set of training examples.
We consider neural networks with L layers (referred to as the network’s depth), which corresponds to the number of iterations in the original iterative algorithm. The layer outputs of an unfolded ISTA network hlI , l ∈ [1, L], are defined by the following recurrence relation, shown in Fig. 1:
h^l_I = S_λ(W^l h^{l−1}_I + b), h^0_I = S_λ(b) (2)
where W^l ∈ R^{nx×nx}, l ∈ [1, L], are the weight matrices corresponding to each of the layers, with bounded norms ||W^l||∞ ≤ B_l. We further assume that the L2 norm of W^1 is bounded by B_1. The vector b = A^T y is a constant bias term that depends on the observation y, where we assume that the initial values are bounded, such that ||h^0_I||_1 ≤ B_0. In addition, S_λ(·) is the elementwise soft-thresholding operator S_λ(h) = sign(h) max(|h| − λ, 0) (3), where the functions sign(·) and max(·) are applied elementwise, and 0 is a vector of zeros. As S_λ(·) is an elementwise function it preserves the input’s dimension, and can be applied on scalar or vector inputs. The network’s prediction is given by the last layer in the network, x̂ = h^L_I(y). We note that the estimators are functions mapping y to x̂, h^L_I : R^{ny} −→ R^{nx}, characterized by the weights, i.e. h^L_I = h^L_I({W^l}_{l=1}^L). The class of functions representing the output at depth L in an ISTA network is H^L_I = {h^L_I({W^l}_{l=1}^L) : ||W^l||∞ ≤ B_l, l ∈ [1, L], ||W^1||_2 ≤ B_1}. Similarly, the l-th layer of unfolded ADMM is defined by the following recurrence relation
h^l_A = W^l(z^{l−1} + u^{l−1}) + b
z^l = S_λ(h^l_A − u^{l−1}), z^0 = 0
u^l = u^{l−1} − γ(h^l_A − z^l), u^0 = 0 (4)
where 0 is a vector of zeros, b = A^T y is a constant bias term, and γ > 0 is the step size derived by the original ADMM algorithm, as shown in Fig. 1. The estimators satisfy h^L_A : R^{ny} −→ R^{nx}, and the class of functions representing the output at depth L in an ADMM network is H^L_A = {h^L_A({W^l}_{l=1}^L) : ||W^l||∞ ≤ B_l, l ∈ [1, L], ||W^1||_2 ≤ B_1}, where we impose the same assumptions on the weight matrices as for ISTA networks.
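To make the unfolded recurrences (2)–(4) concrete, below is a hedged NumPy sketch of the soft-thresholding operator, a single ISTA layer, and a single ADMM layer; the dimensions, λ, γ, and the weight initialization are illustrative assumptions, not the trained weights discussed in the text.

```python
import numpy as np

def soft_threshold(h, lam):
    """Elementwise soft-thresholding, Eq. (3)."""
    return np.sign(h) * np.maximum(np.abs(h) - lam, 0.0)

def ista_layer(h_prev, W, b, lam):
    """One unfolded ISTA layer, Eq. (2): h^l = S_lambda(W^l h^{l-1} + b)."""
    return soft_threshold(W @ h_prev + b, lam)

def admm_layer(z_prev, u_prev, W, b, lam, gamma):
    """One unfolded ADMM layer, Eq. (4)."""
    h = W @ (z_prev + u_prev) + b
    z = soft_threshold(h - u_prev, lam)
    u = u_prev - gamma * (h - z)
    return h, z, u

# Illustrative forward pass of an L-layer ISTA network.
rng = np.random.default_rng(0)
n_x, n_y, L, lam = 20, 10, 4, 0.1
A = rng.standard_normal((n_y, n_x)) / np.sqrt(n_y)
y = rng.standard_normal(n_y)
b = A.T @ y
Ws = [np.eye(n_x) - A.T @ A for _ in range(L)]   # illustrative ISTA-style weights
h = soft_threshold(b, lam)                       # h^0 = S_lambda(b)
for W in Ws:
    h = ista_layer(h, W, b, lam)
```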
For a depth-L ISTA or ADMM network, the learnable parameters are the weight matrices {W l}Ll=1. The weights are learnt by minimizing a loss function L on a set of m training examples S = {(xi,yi)}mi=1, drawn from an unknown distribution D, consistent with the model in (1). We consider the case where the per-example loss function is obtained by averaging over the example per-coordinate losses:
L(h(y), x) = (1/nx) ∑_{j=1}^{nx} ℓ(h_j(y), x_j) (5)
where hj(y) and xj denote the jth coordinate of the estimated and true targets, and ℓ : R×R −→ R+ is 1-Lipschitz in its first argument. This requirement is satisfied in many practical settings, for example with the p-power of Lp norms, and is also required in a related work (Xu et al., 2016). The loss of an estimator h ∈ H which measures the difference between the true value x and the estimation h(y), is denoted for convenience by L(h) = L (h(y),x). There exists additional forms of learned ISTA and ADMM networks, which include learning an additional set of weight matrices affecting the bias terms (Monga et al., 2021). Also, the optimal value of λ generally depends on the target vector sparsity level. Note however that for learned networks, the value of λ at each layer can also be learned. However, here we focus on a more basic architecture with fixed λ in order to draw theoretical conclusions.
2.2 GENERALIZATION AND ESTIMATION ERRORS
In this work, we focus on upper bounding the GE and EE of the model-based neural networks of Fig. 1. The GE of a class of estimation functions h ∈ H, such that h : Rny −→ Rnx , is defined as
G(H) = E_S sup_{h∈H} [L_D(h) − L_S(h)] (6)
where L_D(h) = E_D L(h) is the expected loss with respect to the data distribution D (the joint probability distribution of the targets and observations), L_S(h) = (1/m) ∑_{i=1}^{m} L(h(y_i), x_i) is the average empirical loss with respect to the training set S, and E_S is the expectation over the training datasets.
E(H) = LD ( ĥ ) − inf
h∈H LD (h) (7)
where ĥ is the ERM satisfying LS(ĥ) = infh∈H LS(h). We note that the ERM approximates the learned estimator hlearned, which is obtained by training the network on a set of training examples S, using algorithms such as SGD. However, the estimator hlearned depends on the optimization algorithm, and can differ from the ERM. The difference between the empirical loss associated with ĥ and the empirical loss associated with hlearned is usually referred to as the optimization error.
Common deep neural network architectures have large GE compared to the low EE achieved in practice. This still holds, when the networks are not trained with explicit regularization, such as weight decay, early stopping, or drop-outs (Srivastava et al., 2014; Neyshabur et al., 2015b). This empirical phenomena is experienced across various architectures and hyper-parameter choices (Liang et al., 2019; Novak et al., 2018; Lee et al., 2018; Neyshabur et al., 2018).
2.3 RADEMACHER COMPLEXITY BASED BOUNDS
The RC is a standard tool which captures the ability of a class of functions to approximate noise, where a lower complexity indicates a reduced generalization error. Formally, the empirical RC of a class of scalar estimators H, such that h : Rny −→ R for h ∈ H, over m samples is
R_m(H) = E_{{ϵ_i}_{i=1}^m} sup_{h∈H} (1/m) ∑_{i=1}^{m} ϵ_i h(y_i) (8)
where {ϵi}mi=1 are independent Rademacher random variables for which Pr (ϵi = 1) = Pr (ϵi = −1) = 1/2, the samples {yi}mi=1 are obtained by the model in (1) from m i.i.d. target vectors {xi}mi=1 drawn from an unknown distribution. Taking the expectation of the RC of L ◦H with respect to the set of examples S presented in Section 2.1, leads to a bound on the GE
G(H) ≤ 2 E_S R_m(L ◦ H) (9)
where L is defined in Section 2.1 and L ◦ H = {(x, y) 7→ L(h(y), x) : h ∈ H} (Shalev-Shwartz & Ben-David, 2014). We observe that the class of functions L ◦ H consists of scalar functions, such that f : R^{nx} × R^{nx} −→ R for f ∈ L ◦ H. Therefore, in order to bound the GE of the class of functions defined by ISTA and ADMM networks, we can first bound their RC.
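The empirical RC in (8) involves a supremum over the whole class H and is generally not computable; as a purely illustrative sketch, the following NumPy code Monte-Carlo estimates (8) for a small finite set of scalar hypotheses evaluated on m samples.

```python
import numpy as np

def empirical_rademacher(preds, n_draws=2000, seed=0):
    """Monte-Carlo estimate of the empirical RC in Eq. (8) for a *finite*
    set of scalar estimators; preds[j, i] = h_j(y_i), shape (num_hyp, m)."""
    rng = np.random.default_rng(seed)
    m = preds.shape[1]
    vals = []
    for _ in range(n_draws):
        eps = rng.choice([-1.0, 1.0], size=m)   # Rademacher signs
        vals.append(np.max(preds @ eps) / m)    # sup over the finite class
    return float(np.mean(vals))

# Illustrative: 50 random scalar hypotheses evaluated on m = 30 samples.
preds = np.random.default_rng(1).standard_normal((50, 30))
rc_estimate = empirical_rademacher(preds)
```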
Throughout the paper, we compare the model-based architectures with a feed-forward network with ReLU activations, given by ReLU(h) = max(h, 0). In this section, we review existing bounds on the generalization error of these networks. The layers of a ReLU network h^l_R, l ∈ [1, L], are defined by the recurrence relation h^l_R = ReLU(W^l h^{l−1}_R + b), with h^0_R = ReLU(b), where W^l, l ∈ [1, L], are the weight matrices which satisfy the same conditions as the weight matrices of ISTA and ADMM networks. The class of functions representing the output at depth L in a ReLU network is H^L_R = {h^L_R({W^l}_{l=1}^L) : ||W^l||_∞ ≤ B_l, l ∈ [1, L], ||W^1||_2 ≤ B_1}. This architecture leads to the following bound on the GE.
Theorem 1 (Generalization error bound for ReLU networks (Gao & Zhou, 2016)). Consider the class of feed-forward networks of depth L with ReLU activations, H^L_R, as described in Section 2.3, and m i.i.d. training samples. Given a 1-Lipschitz loss function, its GE satisfies G(H^L_R) ≤ 2 G^L_R, where G^L_R = (B_0 ∏_{l=1}^{L} B_l) / √m .
Proof. Follows from applying the bound of the RC of ReLU neural networks from (Gao & Zhou, 2016) and combining it with (9).
The bound in Theorem 1 is satisfied for any feed-forward network with 1-Lipschitz nonlinear activations (including ReLU), and can be generalized to networks whose activations have different Lipschitz constants. We show in Theorem 2 that the bound presented in Theorem 1 cannot be substantially improved for ReLU networks within the RC framework.
Theorem 2 (Lower Rademacher complexity bound for ReLU networks (Bartlett et al., 2017)). Consider the class of feed-forward networks of depth L with ReLU activations, where the weight matrices have bounded spectral norm ||W^l||_σ ≤ B'_l, l ∈ [1, L]. The dimension of the output layer is 1, and the dimension of each non-output layer is at least 2. Given m i.i.d. training samples, there exists a constant c such that R_m(H'^L_R) ≥ c B_0 ∏_{l=1}^{L} B'_l, where H'^L_R = {h^L_R({W^l}_{l=1}^L) : ||W^l||_σ ≤ B'_l, l ∈ [1, L]}.
This result shows that, within the RC framework, the GE of ReLU networks behaves as the product of the weight matrices' norms ∏_{l=1}^{L} B_l, as captured in Theorem 1. Theorem 2 implies that this dependence on the weight matrices' norms cannot be substantially improved with the RC framework for ReLU networks.
3 GENERALIZATION ERROR BOUNDS: GLOBAL PROPERTIES
In this section, we derive theoretical bounds on the GE of learned ISTA and ADMM networks. From these bounds we deduce design rules to construct ISTA and ADMM networks with a GE which does not increase exponentially with the number of layers. We start by presenting theoretical guarantees on the RC of any class of functions, after applying the soft-thresholding operation.
Soft-thresholding is a basic block that appears in multiple iterative algorithms, and is therefore used as the nonlinear activation in many model-based networks. It arises as the proximal operator of the L1 norm (Palomar & Eldar, 2010). We therefore start by presenting the following lemma, which expresses how the RC of a class of functions is affected by applying soft-thresholding to each function in the class. The proof is provided in the supplementary material.
Lemma 1 (Rademacher complexity of soft-thresholding). Given any class of scalar functions H with h : R^n → R, h ∈ H, for any integer n ≥ 1, and m i.i.d. training samples,
R_m(S_λ ◦ H) ≤ R_m(H) − λT/m ,    (10)
where T = ∑_{i=1}^{m} T_i and T_j = E_{{ϵ_i}_{i=2}^m} ( 1_{h*(y_j) > λ ∧ h'*(y_j) < −λ} ), with
(h*, h'*) = argmax_{h, h' ∈ H} ( h(y_1) − h'(y_1) − 2λ 1_{h(y_1) > λ ∧ h'(y_1) < −λ} + ∑_{i=2}^{m} ϵ_i (S_λ(h(y_i)) + S_λ(h'(y_i))) ).
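As a quick numerical sanity check of the direction of Lemma 1 (our own sketch; the exact quantity λT/m is not computed here), one can compare Monte Carlo estimates of R_m(H) and R_m(S_λ ◦ H) for a toy class:

```python
# Sketch: compare Monte Carlo estimates of R_m(H) and R_m(S_lambda o H) for a toy
# finite class of linear estimators, illustrating the reduction predicted by Lemma 1.
import numpy as np

rng = np.random.default_rng(1)
ny, m, lam = 10, 200, 0.5

def soft_threshold(h, lam):
    # Elementwise soft-thresholding S_lambda(h) = sign(h) * max(|h| - lambda, 0)
    return np.sign(h) * np.maximum(np.abs(h) - lam, 0.0)

Y = rng.standard_normal((m, ny))
W = rng.standard_normal((50, ny))
H = Y @ W.T                                       # H[i, k] = h_k(y_i)

def empirical_rc(H, n_mc=2000):
    eps = rng.choice([-1.0, 1.0], size=(n_mc, H.shape[0]))
    return (eps @ H / H.shape[0]).max(axis=1).mean()

print("R_m(H)            ~", empirical_rc(H))
print("R_m(S_lambda o H) ~", empirical_rc(soft_threshold(H, lam)))
```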
The quantity T is a non-negative value obtained during the proof, which depends on the networks' number of layers, the underlying data distribution D, and the soft-threshold value λ. As seen from (10), the value of T dictates the reduction in the RC due to soft-thresholding, so a corresponding reduction in the GE can also be expected. The value of T increases as λ decreases. In the case that λT increases with λ, higher values of λ further reduce the RC of the class of functions H, due to the soft-thresholding. We now focus on the class of functions representing the output of a neuron at depth L in an ISTA network, H^L_I. In the following theorem, we bound its GE using the RC framework and Lemma 1. The proof is provided in the supplementary material.
Theorem 3 (Generalization error bound for ISTA networks). Consider the class of learned ISTA networks of depth L as described in (2), and m i.i.d. training samples. Then there exist T^(l) for l ∈ [1, L] in the range T^(l) ∈ [0, min{ m B_l G^{l−1}_I / λ , m }] such that G(H^L_I) ≤ 2 E_S G^L_I, where
G^l_I = (B_0 ∏_{l'=1}^{l} B_{l'}) / √m − (λ/m) ∑_{l'=1}^{l−1} T^(l') ∏_{j=l'+1}^{l} B_j − λ T^(l) / m .    (11)
Next, we show that for a specific distribution, the expected value of T^(l) is greater than 0. Under an additional bound on the expected value of the estimators (specified in the supplementary material),
E_S(T^(l)) ≥ max{ m (1 − 2 e^{−(c B_l b^(l) − λ)}) , 0 }    (12)
where b^(l+1) = B_l b^(l) − λ, b^(1) = B_0, and c ∈ (0, 1]. Increasing λ or decreasing B_l decreases the bound in (12), since crossing the threshold becomes less probable. Depending on λ and {B_l}_{l=1}^{L} (specifically, when λ ≤ c B_l b^(l) − ln 2), the bound in (12) is positive, and enforces a non-zero reduction in the GE. Along with Theorem 3, this shows the expected reduction in the GE of ISTA networks compared to ReLU networks. The reduction is controlled by the value of the soft threshold.
To obtain a more compact relation, we can choose the maximal matrices' norm B = max_{l∈[1,L]} B_l and denote T = min_{l∈[1,L]} T^(l) ∈ [0, m], which leads to
G(H^L_I) ≤ 2 ( B_0 B^L / √m − (λ E_S(T)/m) (B^L − 1)/(B − 1) ).
Comparing this bound with the GE bound for ReLU networks presented in Theorem 1 shows the expected reduction due to the soft-thresholding activation. This result also implies practical rules for designing ISTA networks with low generalization error. We note that the network's parameters, such as the soft-threshold value λ and the number of samples m, are predefined by the model being solved (for example, in ISTA, the value of λ is chosen according to the singular values of A).
We derive an implicit design rule from this compact relation, for a nonincreasing GE, as detailed in Section A.2. This is done by restricting the matrices' norm to satisfy B ≤ 1 + λ E_S(T) / (√m B_0). Moreover, these results can be extended to convolutional neural networks: since convolution operations can be expressed via multiplication with a convolution matrix, the presented results also hold in that case.
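For intuition, the sketch below (ours, not from the paper's code) evaluates the ReLU bound of Theorem 1 and the compact ISTA relation above across depths, and checks the design rule; the value of E_S(T) is an assumed placeholder, since it depends on the data distribution.

```python
# Sketch: compare the ReLU GE bound (Theorem 1) with the compact ISTA GE relation,
# and check the design rule B <= 1 + lam * E_T / (sqrt(m) * B0).
# E_T stands in for E_S(T) and is an assumed, illustrative value.
import numpy as np

B0, B, lam, m, E_T = 1.0, 1.02, 0.5, 1000, 2.0

def ge_bound_relu(L):
    # Theorem 1: G(H_R^L) <= 2 * B0 * B^L / sqrt(m)
    return 2.0 * B0 * B**L / np.sqrt(m)

def ge_bound_ista(L):
    # Compact relation: 2 * (B0 * B^L / sqrt(m) - (lam * E_T / m) * (B^L - 1) / (B - 1))
    return 2.0 * (B0 * B**L / np.sqrt(m) - (lam * E_T / m) * (B**L - 1.0) / (B - 1.0))

print("design rule B <= 1 + lam*E_T/(sqrt(m)*B0):", B <= 1.0 + lam * E_T / (np.sqrt(m) * B0))
for L in (2, 4, 6, 8, 10):
    print(f"L = {L:2d}   ReLU bound = {ge_bound_relu(L):.4f}   ISTA bound = {ge_bound_ista(L):.4f}")
```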
Similarly, we bound the GE of the class of functions representing the output at depth L in an ADMM network, H^L_A. The proof and discussion are provided in the supplementary material.
Theorem 4 (Generalization error bound for ADMM networks). Consider the class of learned ADMM networks of depth L as described in (4), and m i.i.d. training samples. Then there exist T^(l) for l ∈ [1, L−1] in the interval T^(l) ∈ [0, min{ m B̃_l G^{l−1}_A / λ̃ , m }], where
G^l_A = (B_0 ∏_{l'=1}^{l−1} B̃_{l'}) / √m − (λ̃/m) ∑_{l'=1}^{l−2} T^(l') ∏_{j=l'+1}^{l−1} B̃_j − λ̃ T^(l−1) / m ,
with λ̃ = (1 + γ)λ and B̃_l = (1 + 2γ)(B_l + 2), l ∈ [1, L], such that G(H^L_A) ≤ 2 B̃_L E_S G^{L−1}_A.
We compare the GE bounds for ISTA and ADMM networks to the bound on the GE of ReLU networks presented in Theorem 1. We observe that both model-based networks achieve a reduction in the GE, which depends on the soft threshold, the underlying data distribution, and the bound on the norm of the weight matrices. Following the bounds, we observe that the soft-thresholding nonlinearity is most valuable when the number of training samples is small. The soft-thresholding nonlinearity is the key enabler for reducing the GE of the ISTA and ADMM networks compared to ReLU networks. Next, we focus on bounding the EE of feed-forward networks based on the LRC framework.
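For concreteness, the sketch below (ours) evaluates the layer-wise expressions of Theorems 3 and 4 with illustrative, assumed values of T^(l), since these quantities depend on the data distribution:

```python
# Sketch: evaluate the layer-wise GE expressions of Theorem 3 (ISTA) and Theorem 4
# (ADMM) for given norm bounds and illustrative, assumed values of T^(l).
import numpy as np

L, m, lam, gamma, B0 = 10, 1000, 0.5, 0.1, 1.0
B = np.full(L, 1.02)                        # bounds B_l on ||W^l||_inf
T = np.full(L, 2.0)                         # assumed stand-ins for the T^(l)

def G_ista(l):
    # Eq. (11): B0 prod_{l'<=l} B_{l'} / sqrt(m)
    #   - (lam/m) sum_{l'=1}^{l-1} T^(l') prod_{j=l'+1}^{l} B_j - lam T^(l) / m
    g = B0 * np.prod(B[:l]) / np.sqrt(m)
    g -= (lam / m) * sum(T[k] * np.prod(B[k + 1:l]) for k in range(l - 1))
    return g - lam * T[l - 1] / m

lam_t = (1 + gamma) * lam                   # lambda~ = (1 + gamma) * lambda
B_t = (1 + 2 * gamma) * (B + 2)             # B~_l = (1 + 2 gamma)(B_l + 2)

def G_admm(l):
    # Theorem 4: same structure with lambda~, B~_l and products running up to l - 1.
    g = B0 * np.prod(B_t[:l - 1]) / np.sqrt(m)
    g -= (lam_t / m) * sum(T[k] * np.prod(B_t[k + 1:l - 1]) for k in range(l - 2))
    return g - lam_t * T[l - 2] / m

print("ISTA GE bound  2*G^L_I          :", 2 * G_ista(L))
print("ADMM GE bound  2*B~_L*G^{L-1}_A :", 2 * B_t[L - 1] * G_admm(L - 1))
```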
4 ESTIMATION ERROR BOUNDS: LOCAL PROPERTIES
To investigate the model-based networks' EE, we use the LRC framework (Bartlett et al., 2005). Instead of considering the entire class of functions H, the LRC considers only estimators which are close to the optimal estimator, H_r = { h ∈ H : E_D ||h(y) − h*(y)||_2^2 ≤ r }, where h* is such that L_D(h*) = inf_{h∈H} L_D(h). It is interesting to note that the class of estimators H_r only restricts the distance between the estimators themselves, and not between their corresponding losses. Following the LRC framework, we consider target vectors ranging in [−1, 1]^{n_x}. Therefore, we adapt the networks' estimates by clipping them to lie in the interval [−1, 1]^{n_x}. In our case we consider the restricted classes of functions representing the output of a neuron at depth L in ISTA, ADMM, and ReLU networks. Moreover, we denote by W^l and W^{l,*}, l ∈ [1, L], the weight matrices corresponding to h ∈ H_r and h*, respectively. Based on these restricted classes of functions, we present the following assumption and theorem (the proof is provided in the supplementary material).
Assumption 1. There exists a constant C ≥ 1 such that for every probability distribution D and estimator h ∈ H, E_D ∑_{j=1}^{n_x} (h_j − h*_j)^2 ≤ C E_D ∑_{j=1}^{n_x} ( ℓ(h_j) − ℓ(h*_j) ), where h_j and h*_j denote the jth coordinates of the estimators.
As pointed out in (Bartlett & Mendelson, 2002), this condition usually follows from a uniform convexity condition on the loss function ℓ. For instance, if |h(y) − x| ≤ 1 for any h ∈ H, y ∈ R^{n_y}, and x ∈ R^{n_x}, then the condition is satisfied with C = 1 (Yousefi et al., 2018).
Theorem 5 (Estimation error bound of ISTA, ADMM, and ReLU networks). Consider the class of functions represented by depth-L ISTA networks H^L_I as detailed in Section 2.1, m i.i.d. training samples, and a per-coordinate loss satisfying Assumption 1 with a constant C. Let ||W^l − W^{l,*}||_∞ ≤ α√r for some α > 0, and B ≥ max{α√r, 1}. Then there exists T in the interval T ∈ [0, min{ √m B_0 B^{L−1} 2^L / (λη) , m }], where η = (L B^{L−1}(B−1) − B^L + 1)/(B−1)^2, such that for any s > 0, with probability at least 1 − e^{−s},
E(H^L_I) ≤ 41 r* + ((17C^2 + 48C)/(m n_x)) s    (13)
where r* = C^2 α^2 ( B_0 B^{L−1} 2^L / √m − (λT/m) η )^2. The bound is also satisfied for the class of functions represented by depth-L ADMM networks H^L_A, with r* = C^2 α^2 ( B_0 B̃^{L−2} 2^{L−1} / √m − (λ̃T/m) η̃ )^2, where λ̃ = (1 + γ)λ, B̃ = (1 + 2γ)(B + 2), and η̃ = ((L−1) B̃^{L−2}(B̃−1) − B̃^{L−1} + 1)/(B̃−1)^2, and for the class of functions represented by depth-L ReLU networks H^L_R, with r* = C^2 α^2 ( B_0 B^{L−1} 2^L / √m )^2.
From Theorem 5, we observe that the EE decreases at a rate of O(1/m), instead of the O(1/√m) rate obtained for the GE. This result complies with previous local bounds, which yield faster convergence rates compared to global bounds (Blanchard et al., 2007; Bartlett et al., 2005). Also, the value of α relates the maximal distance r between estimators in H_r to the distance between their corresponding weight matrices, ||W^l − W^{l,*}||_∞ ≤ α√r, l ∈ [1, L]. Tighter bounds on the distance between the weight matrices allow us to choose a smaller value of α, resulting in smaller values of r* which improve the EE bounds. The value of α could depend on the network's nonlinearity, the underlying data distribution D, and the number of layers. Note that the bounds of the model-based architectures depend on the soft-thresholding through the value of λE_S(T). As λE_S(T) increases, the bound on the EE decreases, which emphasizes the nonlinearity's role in the network's generalization abilities. Due to the soft-thresholding, ISTA and ADMM networks result in lower EE bounds compared to the bound for ReLU networks. It is interesting to note that as the number of training samples m increases, the difference between the bounds on the model-based and ReLU networks becomes less significant. In the EE bounds of model-based networks, the parameter B_0 relates the bound to the sparsity level ρ of the target vectors. Lower values of ρ result in lower EE bounds, as demonstrated in the supplementary.
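To illustrate how the terms of Theorem 5 combine, the following sketch (ours, with assumed values of C, α, and T) evaluates r* and the resulting EE bound for the three architectures:

```python
# Sketch: evaluate r* from Theorem 5 for ISTA, ADMM and ReLU networks, using
# illustrative (assumed) values of C, alpha, T and the norm bounds.
import numpy as np

L, m, nx = 10, 1000, 20
B0, B, lam, gamma = 1.0, 1.02, 0.5, 0.1
C, alpha, T, s = 1.0, 0.1, 2.0, 1.0

eta = (L * B**(L - 1) * (B - 1) - B**L + 1) / (B - 1)**2
B_t, lam_t = (1 + 2 * gamma) * (B + 2), (1 + gamma) * lam
eta_t = ((L - 1) * B_t**(L - 2) * (B_t - 1) - B_t**(L - 1) + 1) / (B_t - 1)**2

r_relu = (C * alpha)**2 * (B0 * B**(L - 1) * 2**L / np.sqrt(m))**2
r_ista = (C * alpha)**2 * (B0 * B**(L - 1) * 2**L / np.sqrt(m) - lam * T / m * eta)**2
r_admm = (C * alpha)**2 * (B0 * B_t**(L - 2) * 2**(L - 1) / np.sqrt(m) - lam_t * T / m * eta_t)**2

def ee_bound(r_star):
    # Eq. (13): E(H) <= 41 r* + (17 C^2 + 48 C) s / (m n_x), w.p. at least 1 - e^{-s}
    return 41 * r_star + (17 * C**2 + 48 * C) * s / (m * nx)

for name, r_star in [("ReLU", r_relu), ("ISTA", r_ista), ("ADMM", r_admm)]:
    print(f"{name}: r* = {r_star:.4e}, EE bound = {ee_bound(r_star):.4e}")
```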
5 NUMERICAL EXPERIMENTS
In this section, we present a series of experiments that concentrate on how a particular model-based network (an ISTA network) compares to a ReLU network, and showcase the merits of model-based networks. We focus on networks with 10 layers (similar to previous works (Gregor & LeCun, 2010)), to represent realistic model-based network architectures. The networks are trained on a simulated dataset to solve the problem in (1), with target vectors uniformly distributed in [−1, 1]. The linear mapping A is constructed from the real part of discrete Fourier transform (DFT) matrix rows (Ong et al., 2019), where the rows are randomly chosen. The sparsity rate is ρ = 0.15, and the noise's standard deviation is 0.1. To train the networks we used the SGD optimizer with the L1 loss over all neurons of the last layer. The target and noise vectors are generated elementwise independently from a uniform distribution on [−1, 1]. All results are reproducible through (Authors, 2022), which provides the complete code to execute the experiments presented in this section.
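A minimal sketch of this data-generation step (our own reconstruction; the exact row selection, normalization, and noise handling in (Authors, 2022) may differ):

```python
# Sketch of the simulated dataset: A is built from randomly chosen rows of the real
# part of a DFT matrix, targets are sparse with rate rho = 0.15, noise level 0.1.
import numpy as np

rng = np.random.default_rng(0)
nx, ny, rho, noise_std = 128, 32, 0.15, 0.1

dft = np.fft.fft(np.eye(nx)) / np.sqrt(nx)         # nx x nx DFT matrix
rows = rng.choice(nx, size=ny, replace=False)      # randomly chosen rows
A = dft[rows].real                                 # real part of the selected rows

def sample_batch(n):
    x = rng.uniform(-1.0, 1.0, size=(n, nx))       # targets uniform in [-1, 1]
    x *= rng.random((n, nx)) < rho                  # keep roughly rho * nx nonzeros
    e = noise_std * rng.standard_normal((n, ny))   # additive noise (assumed Gaussian here)
    return x, x @ A.T + e                          # observation model (1): y = A x + e

x_train, y_train = sample_batch(1000)
print(A.shape, x_train.shape, y_train.shape)
```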
We concentrate on comparing the networks' EE, since in practice the networks are trained with a finite number of examples. In order to empirically approximate the EE of a class of networks H, we use an empirical approximation of h* (which satisfies L_D(h*) = inf_{h∈H} L_D(h)) and of the ERM ĥ, denoted by h*_emp and ĥ_emp respectively. The estimator h*_emp results from the trained network with 10^4 samples, and the ERM is approximated by a network trained using SGD with m training samples (where m ≤ 10^4). The empirical EE is given by their difference L_D(ĥ_emp) − L_D(h*_emp). In Fig. 2, we compare the ISTA and ReLU networks in terms of EE and L1 loss, for networks trained with different numbers of samples (between 10 and 10^4 samples). We observe that for a small number of training samples, the ISTA network substantially reduces the EE compared to the ReLU network. This can be understood from Theorem 5, which results in lower EE bounds for ISTA networks compared to ReLU networks, due to the term λE_S(T). However, for a large number of samples the EE of both networks decreases to zero, which is also expected from Theorem 5. This highlights that the contribution of the soft-thresholding nonlinearity to the generalization abilities of the network is more significant for a small number of training samples. Throughout the paper, we considered networks with constant bias terms. In this section, we also consider learned bias terms, as detailed in the supplementary material. In Fig. 2, we present the experimental results for networks with constant and learned biases. The experiments indicate that the choice of constant or learned bias is less significant to the EE or the accuracy than the choice of nonlinearity, emphasizing the relevance of the theoretical guarantees. The cases of learned and constant biases have different optimal estimators, as the networks with learned biases have more learned parameters. As a result, it is plausible that a network with more learnable parameters (the learnable bias) exhibits a lower estimation error, since the corresponding optimal estimator also exhibits a lower estimation error. To analyze the effect of the soft-thresholding value on the generalization abilities of the ISTA network, we show in Fig. 3 the empirical EE for multiple values of λ. The experimental results demonstrate that for a small number of samples, increasing λ reduces the EE. As expected from the EE bounds, for a large number of training samples this dependency on the nonlinearity vanishes, and the EE is similar for all values of λ. In Fig. 3b, we show the L1 loss of the ISTA networks for different values of λ. We observe that a low estimation error does not necessarily lead to a low loss value. For m ≲ 100, increasing λ reduces the EE. These results suggest that, given ISTA networks with different values of λ such that all networks achieve similar accuracy, the networks with higher values of λ provide lower EE. These results also hold for additional network depths. In Section B, we compare the EE for networks with different numbers of layers, and show that they exhibit a similar behaviour.
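Schematically, the empirical EE protocol above can be summarized as follows (our own sketch; a ridge-regression estimator stands in for the trained network purely to keep the example runnable, and is not the architecture used in the paper):

```python
# Sketch of the empirical EE protocol: h*_emp is approximated by a model trained on
# 10^4 samples, the ERM by a model trained on m <= 10^4 samples, and the empirical EE
# is the difference of their losses on a large held-out set.
import numpy as np

rng = np.random.default_rng(0)
nx, ny, rho = 64, 16, 0.15
A = rng.standard_normal((ny, nx)) / np.sqrt(ny)

def sample(n):
    x = rng.uniform(-1, 1, (n, nx)) * (rng.random((n, nx)) < rho)
    return x, x @ A.T + 0.1 * rng.standard_normal((n, ny))

def fit(m):                                   # stand-in "training" on m samples
    x, y = sample(m)
    return np.linalg.solve(y.T @ y + 1e-2 * np.eye(ny), y.T @ x)   # ridge regression

x_ev, y_ev = sample(50_000)                   # large held-out set approximating L_D

def l1_loss(W):
    return np.abs(y_ev @ W - x_ev).mean()

W_star = fit(10_000)                          # empirical approximation of h*
for m in (10, 100, 1000, 10_000):
    print(f"m = {m:5d}: empirical EE = {l1_loss(fit(m)) - l1_loss(W_star):.4f}")
```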
6 CONCLUSION
We derived new GE and EE bounds for ISTA and ADMM networks, based on the RC and LRC frameworks. Under suitable conditions, the model-based networks' GE is nonincreasing with depth, resulting in a substantial improvement over the GE bound of ReLU networks. The EE bounds explain EE behaviours experienced in practice, such as ISTA networks demonstrating higher estimation abilities than ReLU networks, especially for a small number of training samples. Through a series of experiments, we show that the generalization abilities of ISTA networks are controlled by the soft-threshold value, and that ISTA networks achieve lower EE along with a more accurate recovery compared to ReLU networks, whose GE and EE bounds increase with the networks' depth.
It is interesting to consider how the theoretical insights can be harnessed to design neural networks with high generalization abilities. One approach is to introduce an additional regularizer during the training process that is rooted in the LRC, penalizing networks with high EE (Yang et al., 2019). | 1. What are the key contributions and findings of the paper regarding model-based neural networks?
2. How do the generalization properties of model-based networks compare to those of regular ReLU networks?
3. What are the strengths and weaknesses of the paper's theoretical analysis and simulation results?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This work studies the generalisation properties of model-based neural networks which can achieve unparalleled performance on sparse coding and compressed sensing problems. In particular, the authors leverage complexity measures including the global and local Rademacher complexities to provide upper bounds on the generalisation and estimation errors of model-based networks. They show that the generalisation abilities of model-based networks for sparse recovery outperform those of regular ReLU networks.
Strengths And Weaknesses
Strength
I am not an expert on model-based neural networks, but from my perspective, I think this work adopts suitable theoretical tools to solve an important problem.
The writing is quite good. It is easy to follow the main idea of this work.
Weakness
Though the authors provide simulations to verify the proposed theory, it is still not clear how realistic the theory is and what the gap is between the considered scenario and realistic settings.
Clarity, Quality, Novelty And Reproducibility
The paper is very clear. Very organized and enjoyable to read.
The paper is of high quality.
The paper is quite novel to me. |
ICLR | Title
Generalization and Estimation Error Bounds for Model-based Neural Networks
Abstract
Model-based neural networks provide unparalleled performance for various tasks, such as sparse coding and compressed sensing problems. Due to the strong connection with the sensing model, these networks are interpretable and inherit prior structure of the problem. In practice, model-based neural networks exhibit higher generalization capability compared to ReLU neural networks. However, this phenomenon has not been addressed theoretically. Here, we leverage complexity measures, including the global and local Rademacher complexities, in order to provide upper bounds on the generalization and estimation errors of model-based networks. We show that the generalization abilities of model-based networks for sparse recovery outperform those of regular ReLU networks, and derive practical design rules that allow constructing model-based networks with guaranteed high generalization. We demonstrate through a series of experiments that our theoretical insights shed light on a few behaviours experienced in practice, including the fact that ISTA and ADMM networks exhibit higher generalization abilities (especially for a small number of training samples) compared to ReLU networks.
1 INTRODUCTION
Model-based neural networks provide unprecedented performance gains for solving sparse coding problems, such as the learned iterative shrinkage and thresholding algorithm (ISTA) (Gregor & LeCun, 2010) and learned alternating direction method of multipliers (ADMM) (Boyd et al., 2011). In practice, these approaches outperform feed-forward neural networks with ReLU nonlinearities.
These neural networks are usually obtained from algorithm unrolling (or unfolding) techniques, which were first proposed by Gregor and LeCun (Gregor & LeCun, 2010) to connect iterative algorithms to neural network architectures. The trained networks can potentially shed light on the problem being solved. For ISTA networks, each layer represents an iteration of a gradient-descent procedure. As a result, the output of each layer is a valid reconstruction of the target vector, and we expect the reconstructions to improve with the network's depth. These networks capture the original problem structure, which translates in practice to requiring less training data (Monga et al., 2021). Moreover, the generalization abilities of model-based networks tend to improve over regular feed-forward neural networks (Behboodi et al., 2020; Schnoor et al., 2021).
Understanding the generalization of deep learning algorithms has become an important open question. The generalization error of machine learning models measures the ability of a class of estimators to generalize from training to unseen samples and avoid overfitting the training data (Jakubovitz et al., 2019). Surprisingly, various deep neural networks exhibit high generalization abilities, even as the networks' complexity increases (Neyshabur et al., 2015b; Belkin et al., 2019). Classical machine learning
∗This project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 101000967), and the Israel Science Foundation under Grant 536/22. Y. C. Eldar and M. R. D. Rodrigues are supported by The Weizmann-UK Making Connections Programme (Ref. 129589). M. R. D. Rodrigues is also supported by the Alan Turing Institute. The authors wish to thank Dr. Gholamali Aminian from the Alan Turing Institute, UK, for his contribution to the proofs' correctness.
measures such as the Vapnik-Chervonenkis (VC) dimension (Vapnik & Chervonenkis, 1991) and Rademacher complexity (RC) (Bartlett & Mendelson, 2002) predict an increasing generalization error (GE) as the models' complexity increases, and fail to explain the improved generalization observed in experiments. More advanced measures, which consider the training process and result in tighter bounds on the estimation error (EE), were proposed to investigate this gap, such as the local Rademacher complexity (LRC) (Bartlett et al., 2005). To date, the EE of model-based networks using these complexity measures has not been investigated, to the best of our knowledge.
1.1 OUR CONTRIBUTIONS
In this work, we leverage existing complexity measures such as the RC and LRC, in order to bound the generalization and estimation errors of learned ISTA and learned ADMM networks.
• We provide new bounds on the GE of ISTA and ADMM networks, showing that the GE of model-based networks is lower than that of the common ReLU networks. The derivation of the theoretical guarantees combines existing proof techniques for computing the generalization error of multilayer networks with new methodology for bounding the RC of the soft-thresholding operator, that allows a better understanding of the generalization ability of model based networks.
• The obtained bounds translate to practical design rules for model-based networks which guarantee high generalization. In particular, we show that a nonincreasing GE as a function of the network’s depth is achievable, by limiting the weights’ norm in the network. This improves over existing bounds, which exhibit a logarithmic increase of the GE with depth (Schnoor et al., 2021). The GE bounds of the model-based networks suggest that under similar restrictions, learned ISTA networks generalize better than learned ADMM networks.
• We also exploit the LRC machinery to derive bounds on the EE of feed-forward networks, such as ReLU, ISTA, and ADMM networks. The EE bounds depend on the data distribution and training loss. We show that the model-based networks achieve lower EE bounds compared to ReLU networks.
• We focus on the differences between ISTA and ReLU networks, in terms of performance and generalization. This is done through a series of experiments for sparse vector recovery problems. The experiments indicate that the generalization abilities of ISTA networks are controlled by the soft-threshold value. For a proper choice of parameters, ISTA achieves lower EE along with more accurate recovery. The dependency of the EE on λ and on the number of training samples can be explained by the derived EE bounds.
1.2 RELATED WORK
Understanding the GE and EE of general deep learning algorithms is an active area of research. A few approaches have been proposed, which include considering networks with weight matrices of bounded norms (including spectral and L2,1 norms) (Bartlett et al., 2017; Sokolić et al., 2017), and analyzing the effect of multiple regularizations employed in deep learning, such as weight decay, early stopping, or drop-outs, on the generalization abilities (Neyshabur et al., 2015a; Gao & Zhou, 2016; Amjad et al., 2021). Additional works consider global properties of the networks, such as a bound on the product of all Frobenius norms of the weight matrices in the network (Golowich et al., 2018). However, these available bounds do not capture the GE behaviour as a function of network depth, where an increase in depth typically results in improved generalization. This also applies to the bounds on the GE of ReLU networks, detailed in Section 2.3.
Recently, a few works focused on bounding the GE specifically for deep iterative recovery algorithms (Behboodi et al., 2020; Schnoor et al., 2021). They focus on a broad class of unfolded networks for sparse recovery, and provide bounds which scale logarithmically with the number of layers (Schnoor et al., 2021). However, these bounds still do not capture the behaviours experienced in practice.
Much work has also focused on incorporating the networks’ training process into the bounds. The LRC framework due to Bartlett, Bousquet, and Mendelson (Bartlett et al., 2005) assumes that the training process results in a smaller class of estimation functions, such that the distance between the estimator in the class and the empirical risk minimizer (ERM) is bounded. An additional related framework is the effective dimensionality due to Zhang (Zhang, 2002). These frameworks result in
different bounds, which relate the EE to the distance between the estimators. These local complexity measures were not applied to model-based neural networks.
Throughout the paper we use boldface lowercase and uppercase letters to denote vectors and matrices respectively. The L1 and L2 norms of a vector x are written as ∥x∥1 and ∥x∥2 respectively, and the L∞ (which corresponds to the maximal L1 norm over the matrix’s rows) and spectral norms of a matrix X , are denoted by ∥X∥∞ and ∥X∥σ respectively. We denote the transpose operation by (·)T . For any function f and class of functions H, we define f ◦ H = {x 7→ f ◦ h(x) : h ∈ H} .
2 PRELIMINARIES
2.1 NETWORK ARCHITECTURE
We focus on model-based networks for sparse vector recovery, applicable to the linear inverse problem
y = Ax+ e (1)
where y ∈ Rny is the observation vector with ny entries, x ∈ Rnx is the target vector with nx entries, with nx > ny , A ∈ Rny×nx is the linear operator, and e ∈ Rny is additive noise. The target vectors are sparse with sparsity rate ρ, such that at most ⌊ρnx⌋ entries are nonzero. The inverse problem consists of recovering the target vector x, from the observation vector y.
Given that the target vector is assumed to be sparse, recovering x from y in (1) can be formulated as an optimization problem, such as least absolute shrinkage and selection operator (LASSO) (Tibshirani & Ryan, 2013), that can be solved with well-known iterative methods including ISTA and ADMM. To address more complex problems, such as an unknown linear mapping A, and to avoid having to fine tune parameters, these algorithms can be mapped into model-based neural networks using unfolding or unrolling techniques (Gregor & LeCun, 2010; Boyd et al., 2011; Monga et al., 2021; Yang et al., 2018). The network’s architecture imitates the original iterative method’s functionality and enables to learn the models’ parameters with respect to a set of training examples.
We consider neural networks with L layers (referred to as the network’s depth), which corresponds to the number of iterations in the original iterative algorithm. The layer outputs of an unfolded ISTA network hlI , l ∈ [1, L], are defined by the following recurrence relation, shown in Fig. 1:
h^l_I = S_λ(W^l h^{l−1}_I + b) ,    h^0_I = S_λ(b)    (2)
where W^l ∈ R^{n_x×n_x}, l ∈ [1, L], are the weight matrices corresponding to each of the layers, with bounded norms ||W^l||_∞ ≤ B_l. We further assume that the L2 norm of W^1 is bounded by B_1. The vector b = A^T y is a constant bias term that depends on the observation y, where we assume that the initial values are bounded, such that ||h^0_I||_1 ≤ B_0. In addition, S_λ(·) is the elementwise soft-thresholding operator
S_λ(h) = sign(h) max(|h| − λ, 0)    (3)
where the functions sign(·) and max(·) are applied elementwise, and 0 is a vector of zeros. As S_λ(·) is an elementwise function it preserves the input's dimension, and can be applied to scalar or vector inputs. The network's prediction is given by the last layer in the network, x̂ = h^L_I(y). We note that the estimators are functions mapping y to x̂, h^L_I : R^{n_y} → R^{n_x}, characterized by the weights, i.e. h^L_I = h^L_I({W^l}_{l=1}^L). The class of functions representing the output at depth L in an ISTA network is H^L_I = {h^L_I({W^l}_{l=1}^L) : ||W^l||_∞ ≤ B_l, l ∈ [1, L], ||W^1||_2 ≤ B_1}. Similarly, the lth layer of unfolded ADMM is defined by the following recurrence relation
h^l_A = W^l (z^{l−1} + u^{l−1}) + b
z^l = S_λ(h^l_A − u^{l−1}) ,    z^0 = 0
u^l = u^{l−1} − γ (h^l_A − z^l) ,    u^0 = 0    (4)
where 0 is a vector of zeros, b = A^T y is a constant bias term, and γ > 0 is the step size derived from the original ADMM algorithm, as shown in Fig. 1. The estimators satisfy h^L_A : R^{n_y} → R^{n_x}, and the class of functions representing the output at depth L in an ADMM network is H^L_A = {h^L_A({W^l}_{l=1}^L) : ||W^l||_∞ ≤ B_l, l ∈ [1, L], ||W^1||_2 ≤ B_1}, where we impose the same assumptions on the weight matrices as for ISTA networks.
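A minimal NumPy sketch of the forward passes (2)–(4) (ours; the learned implementation in (Authors, 2022) may differ in details such as weight initialization and the handling of λ and γ):

```python
# Sketch of the unfolded ISTA (2)-(3) and ADMM (4) forward passes.
import numpy as np

def soft_threshold(h, lam):
    # S_lambda(h) = sign(h) * max(|h| - lambda, 0), applied elementwise, cf. (3)
    return np.sign(h) * np.maximum(np.abs(h) - lam, 0.0)

def ista_forward(weights, A, y, lam):
    # weights: list of L matrices W^l of shape (nx, nx); b = A^T y
    b = A.T @ y
    h = soft_threshold(b, lam)                       # h^0_I = S_lambda(b)
    for W in weights:
        h = soft_threshold(W @ h + b, lam)           # h^l_I = S_lambda(W^l h^{l-1}_I + b)
    return h

def admm_forward(weights, A, y, lam, gamma):
    b = A.T @ y
    z = np.zeros_like(b)                             # z^0 = 0
    u = np.zeros_like(b)                             # u^0 = 0
    for W in weights:
        h = W @ (z + u) + b                          # h^l_A
        z = soft_threshold(h - u, lam)               # z^l, uses u^{l-1}
        u = u - gamma * (h - z)                      # u^l
    return h                                         # output h^L_A

# Tiny usage example with random (untrained) weights:
rng = np.random.default_rng(0)
nx, ny, L = 32, 16, 10
A = rng.standard_normal((ny, nx)) / np.sqrt(ny)
y = rng.standard_normal(ny)
Ws = [0.1 * rng.standard_normal((nx, nx)) for _ in range(L)]
print(ista_forward(Ws, A, y, lam=0.1).shape, admm_forward(Ws, A, y, lam=0.1, gamma=0.5).shape)
```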
For a depth-L ISTA or ADMM network, the learnable parameters are the weight matrices {W l}Ll=1. The weights are learnt by minimizing a loss function L on a set of m training examples S = {(xi,yi)}mi=1, drawn from an unknown distribution D, consistent with the model in (1). We consider the case where the per-example loss function is obtained by averaging over the example per-coordinate losses:
L(h(y), x) = (1/n_x) ∑_{j=1}^{n_x} ℓ(h_j(y), x_j)    (5)
where h_j(y) and x_j denote the jth coordinates of the estimated and true targets, and ℓ : R × R → R_+ is 1-Lipschitz in its first argument. This requirement is satisfied in many practical settings, for example with the p-th power of Lp norms, and is also required in a related work (Xu et al., 2016). The loss of an estimator h ∈ H, which measures the difference between the true value x and the estimate h(y), is denoted for convenience by L(h) = L(h(y), x). There exist additional forms of learned ISTA and ADMM networks, which include learning an additional set of weight matrices affecting the bias terms (Monga et al., 2021). Also, the optimal value of λ generally depends on the target vector sparsity level. Note, however, that for learned networks the value of λ at each layer can also be learned. Here we focus on a more basic architecture with fixed λ in order to draw theoretical conclusions.
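For concreteness, (5) with the absolute-value per-coordinate loss used later in Section 5 reads, in code (our own sketch):

```python
# Sketch of the per-example loss (5) with the 1-Lipschitz per-coordinate loss
# l(a, b) = |a - b|, i.e. the L1 loss used in the experiments of Section 5.
import numpy as np

def per_example_loss(h_y, x):
    # L(h(y), x) = (1/n_x) * sum_j l(h_j(y), x_j)
    return float(np.mean(np.abs(h_y - x)))

x_hat = np.array([0.2, 0.0, -0.4])
x_true = np.array([0.0, 0.0, -0.5])
print(per_example_loss(x_hat, x_true))   # (0.2 + 0.0 + 0.1) / 3 = 0.1
```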
1. What is the focus of the paper regarding model-based neural networks?
2. What are the strengths of the proposed approach, particularly in terms of its theoretical properties?
3. Do you have any concerns or questions about the paper's content, such as the comparison between ISTA networks and other activation functions or the choice of lambda?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
The paper considers the theoretical properties of model-based neural networks which are usually interpretable and inherit the prior structure of the problems, such as ISTA and ADMM networks associated with soft thresholding operators. The bounds for both generalization and estimation errors of these networks are derived via the lens of Rademacher complexity and local Rademacher complexity, which are shown to be lower than that of the common ReLU neural networks under mild conditions. Numerical experiments are also provided to verify the theoretical results.
Strengths And Weaknesses
Strength
It is a novel contribution that broadens the understanding of model-based neural networks and their superiority compared to the standard ReLU networks.
The Rademacher complexity of function classes after applying the soft-thresholding operation is derived, which further leads to the generalization and estimation errors of model-based neural networks, such as ISTA and ADMM networks.
Codes are provided for the numerical experiments.
Concerns
The difference between ISTA networks and ReLU networks is that (2) uses a soft-thresholding operator S_λ and (10) uses ReLU. Can we think of S_λ as a special type of activation function? I was wondering if the performance of ISTA networks is also better than that of leaky ReLU networks or ELU networks?
In Figure 3, estimation error decreases as λ increases while L1 loss increases as λ increases, especially for large sample sizes. In practice, how to choose the best λ?
In Figure 2, it is a little bit confusing to me that the estimation error of ISTA with learned bias is larger than that with constant bias, any explanation?
Is it possible to extend the results to convolutional neural networks?
Minor typos:
Is the last term in (13) λT^(l)/m?
Clarity, Quality, Novelty And Reproducibility
The paper is well-organized and clearly written. The results are technically sound and novel. |
ICLR | Title
Generalization and Estimation Error Bounds for Model-based Neural Networks
Abstract
Model-based neural networks provide unparalleled performance for various tasks, such as sparse coding and compressed sensing problems. Due to the strong connection with the sensing model, these networks are interpretable and inherit prior structure of the problem. In practice, model-based neural networks exhibit higher generalization capability compared to ReLU neural networks. However, this phenomenon was not addressed theoretically. Here, we leverage complexity measures including the global and local Rademacher complexities, in order to provide upper bounds on the generalization and estimation errors of model-based networks. We show that the generalization abilities of model-based networks for sparse recovery outperform those of regular ReLU networks, and derive practical design rules that allow to construct model-based networks with guaranteed high generalization. We demonstrate through a series of experiments that our theoretical insights shed light on a few behaviours experienced in practice, including the fact that ISTA and ADMM networks exhibit higher generalization abilities (especially for small number of training samples), compared to ReLU networks.
1 INTRODUCTION
Model-based neural networks provide unprecedented performance gains for solving sparse coding problems, such as the learned iterative shrinkage and thresholding algorithm (ISTA) (Gregor & LeCun, 2010) and learned alternating direction method of multipliers (ADMM) (Boyd et al., 2011). In practice, these approaches outperform feed-forward neural networks with ReLU nonlinearities.
These neural networks are usually obtained from algorithm unrolling (or unfolding) techniques, which were first proposed by Gregor and LeCun (Gregor & LeCun, 2010), to connect iterative algorithms to neural network architectures. The trained networks can potentially shed light on the problem being solved. For ISTA networks, each layer represents an iteration of a gradient-descent procedure. As a result, the output of each layer is a valid reconstruction of the target vector, and we expect the reconstructions to improve with the network’s depth. These networks capture original problem structure, which translates in practice to a lower number of required training data (Monga et al., 2021). Moreover, the generalization abilities of model-based networks tend to improve over regular feed-forward neural networks (Behboodi et al., 2020; Schnoor et al., 2021).
Understanding the generalization of deep learning algorithms has become an important open question. The generalization error of machine learning models measures the ability of a class of estimators to generalize from training to unseen samples, and avoid overfitting the training (Jakubovitz et al., 2019). Surprisingly, various deep neural networks exhibit high generalization abilities, even for increasing networks’ complexities (Neyshabur et al., 2015b; Belkin et al., 2019). Classical machine learning
∗This project has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research, the innovation programme (grant agreement No. 101000967), and the Israel Science Foundation under Grant 536/22. Y. C. Eldar and M. R. D. Rodrigues are supported by The Weizmann-UK Making Connections Programme (Ref. 129589). M. R. D. Rodrigues is also supported by the Alan Turing Institute. The authors wish to thank Dr. Gholamali Aminian from the Alan Turing Institute, UK, for his contribution to the proofs’ correctness.
measures such as the Vapnik-Chervonenkis (VC) dimension (Vapnik & Chervonenkis, 1991) and Rademacher complexity (RC) (Bartlett & Mendelson, 2002) predict an increasing generalization error (GE) with the increase of the models’ complexity, and fail to explain the improved generalization observed in experiments. More advanced measures, which consider the training process and result in tighter bounds on the estimation error (EE), were proposed to investigate this gap, such as the local Rademacher complexity (LRC) (Bartlett et al., 2005). To date, to the best of our knowledge, the EE of model-based networks has not been investigated using these complexity measures.
1.1 OUR CONTRIBUTIONS
In this work, we leverage existing complexity measures such as the RC and LRC, in order to bound the generalization and estimation errors of learned ISTA and learned ADMM networks.
• We provide new bounds on the GE of ISTA and ADMM networks, showing that the GE of model-based networks is lower than that of the common ReLU networks. The derivation of the theoretical guarantees combines existing proof techniques for computing the generalization error of multilayer networks with new methodology for bounding the RC of the soft-thresholding operator, that allows a better understanding of the generalization ability of model based networks.
• The obtained bounds translate to practical design rules for model-based networks which guarantee high generalization. In particular, we show that a nonincreasing GE as a function of the network’s depth is achievable, by limiting the weights’ norm in the network. This improves over existing bounds, which exhibit a logarithmic increase of the GE with depth (Schnoor et al., 2021). The GE bounds of the model-based networks suggest that under similar restrictions, learned ISTA networks generalize better than learned ADMM networks.
• We also exploit the LRC machinery to derive bounds on the EE of feed-forward networks, such as ReLU, ISTA, and ADMM networks. The EE bounds depend on the data distribution and training loss. We show that the model-based networks achieve lower EE bounds compared to ReLU networks.
• We focus on the differences between ISTA and ReLU networks, in terms of performance and generalization. This is done through a series of experiments for sparse vector recovery problems. The experiments indicate that the generalization abilities of ISTA networks are controlled by the soft-threshold value. For a proper choice of parameters, ISTA achieves lower EE along with more accurate recovery. The dependency of the EE as a function of λ and the number of training samples can be explained by the derived EE bounds.
1.2 RELATED WORK
Understanding the GE and EE of general deep learning algorithms is an active area of research. A few approaches were proposed, which include considering networks of weights matrices with bounded norms (including spectral and L2,1 norms) (Bartlett et al., 2017; Sokolić et al., 2017), and analyzing the effect of multiple regularizations employed in deep learning, such as weight decay, early stopping, or drop-outs, on the generalization abilities (Neyshabur et al., 2015a; Gao & Zhou, 2016; Amjad et al., 2021). Additional works consider global properties of the networks, such as a bound on the product of all Frobenius norms of the weight matrices in the network (Golowich et al., 2018). However, these available bounds do not capture the GE behaviour as a function of network depth, where an increase in depth typically results in improved generalization. This also applies to the bounds on the GE of ReLU networks, detailed in Section 2.3.
Recently, a few works focused on bounding the GE specifically for deep iterative recovery algorithms (Behboodi et al., 2020; Schnoor et al., 2021). They focus on a broad class of unfolded networks for sparse recovery, and provide bounds which scale logarithmically with the number of layers (Schnoor et al., 2021). However, these bounds still do not capture the behaviours experienced in practice.
Much work has also focused on incorporating the networks’ training process into the bounds. The LRC framework due to Bartlett, Bousquet, and Mendelson (Bartlett et al., 2005) assumes that the training process results in a smaller class of estimation functions, such that the distance between the estimator in the class and the empirical risk minimizer (ERM) is bounded. An additional related framework is the effective dimensionality due to Zhang (Zhang, 2002). These frameworks result in
different bounds, which relate the EE to the distance between the estimators. These local complexity measures were not applied to model-based neural networks.
Throughout the paper we use boldface lowercase and uppercase letters to denote vectors and matrices respectively. The L1 and L2 norms of a vector x are written as ∥x∥1 and ∥x∥2 respectively, and the L∞ (which corresponds to the maximal L1 norm over the matrix’s rows) and spectral norms of a matrix X , are denoted by ∥X∥∞ and ∥X∥σ respectively. We denote the transpose operation by (·)T . For any function f and class of functions H, we define f ◦ H = {x 7→ f ◦ h(x) : h ∈ H} .
2 PRELIMINARIES
2.1 NETWORK ARCHITECTURE
We focus on model-based networks for sparse vector recovery, applicable to the linear inverse problem
y = Ax+ e (1)
where y ∈ Rny is the observation vector with ny entries, x ∈ Rnx is the target vector with nx entries, with nx > ny , A ∈ Rny×nx is the linear operator, and e ∈ Rny is additive noise. The target vectors are sparse with sparsity rate ρ, such that at most ⌊ρnx⌋ entries are nonzero. The inverse problem consists of recovering the target vector x, from the observation vector y.
Given that the target vector is assumed to be sparse, recovering x from y in (1) can be formulated as an optimization problem, such as least absolute shrinkage and selection operator (LASSO) (Tibshirani & Ryan, 2013), that can be solved with well-known iterative methods including ISTA and ADMM. To address more complex problems, such as an unknown linear mapping A, and to avoid having to fine tune parameters, these algorithms can be mapped into model-based neural networks using unfolding or unrolling techniques (Gregor & LeCun, 2010; Boyd et al., 2011; Monga et al., 2021; Yang et al., 2018). The network’s architecture imitates the original iterative method’s functionality and enables to learn the models’ parameters with respect to a set of training examples.
We consider neural networks with L layers (referred to as the network’s depth), which corresponds to the number of iterations in the original iterative algorithm. The layer outputs of an unfolded ISTA network hlI , l ∈ [1, L], are defined by the following recurrence relation, shown in Fig. 1:
$h^l_I = S_\lambda\left(W^l h^{l-1}_I + b\right), \quad h^0_I = S_\lambda(b)$ (2)
where W l ∈ Rnx×nx , l ∈ [1, L] are the weights matrices corresponding to each of the layers, with bounded norms ||W l||∞ ≤ Bl. We further assume that the L2 norm of W 1 is bounded by B1. The vector b = ATy is a constant bias term that depends on the observation y, where we assume that the initial values are bounded, such that ||h0I ||1 ≤ B0. In addition, Sλ(·) is the elementwise soft-thresholding operator Sλ(h) = sign(h)max(|h| − λ,0) (3) where the functions sign(·) and max(·) are applied elementwise, and 0 is a vector of zeros. As Sλ(·) is an elementwise function it preserves the input’s dimension, and can be applied on scalar or vector inputs. The network’s prediction is given by the last layer in the network x̂ = hLI (y). We note that the estimators are functions mapping y to x̂, hLI : Rny −→ Rnx , characterized by the weights, i.e. hLI = h L I ({W l}Ll=1). The class of functions representing the output at depth L in an ISTA network, is HLI = {hLI ({W l}Ll=1) : ||W l||∞ ≤ Bl l ∈ [1, L], ||W 1||2 ≤ B1}. Similarly, the lth layer of unfolded ADMM is defined by the following recurrence relation
$h^l_A = W^l\left(z^{l-1} + u^{l-1}\right) + b$
$z^l = S_\lambda\left(h^l_A - u^{l-1}\right), \quad z^0 = \mathbf{0}$
$u^l = u^{l-1} - \gamma\left(h^l_A - z^l\right), \quad u^0 = \mathbf{0}$ (4)
where 0 is a vector of zeros, b = ATy is a constant bias term, and γ > 0 is the step size derived by the original ADMM algorithm, as shown in Fig. 1. The estimators satisfy hLA : Rny −→ Rnx , and the class of functions representing the output at depth L in an ADMM network, is HLA = {hLA({W l}Ll=1) : ||W l||∞ ≤ Bl l ∈ [1, L], ||W 1||2 ≤ B1}, where we impose the same assumptions on the weights matrices as ISTA networks.
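As a concrete illustration of the recurrences in (2) and (4), the following sketch implements the soft-thresholding operator and the forward passes of unfolded ISTA and ADMM networks; the dimensions, step size, and random weights below are illustrative placeholders rather than the trained parameters used in the paper.

```python
import numpy as np

def soft_threshold(h, lam):
    # Elementwise soft-thresholding S_lambda(h) = sign(h) * max(|h| - lambda, 0), as in (3).
    return np.sign(h) * np.maximum(np.abs(h) - lam, 0.0)

def ista_forward(y, A, weights, lam):
    # Unfolded ISTA network (2): h^l = S_lambda(W^l h^{l-1} + b), h^0 = S_lambda(b), with b = A^T y.
    b = A.T @ y
    h = soft_threshold(b, lam)
    for W in weights:
        h = soft_threshold(W @ h + b, lam)
    return h

def admm_forward(y, A, weights, lam, gamma):
    # Unfolded ADMM network (4), with z^0 = u^0 = 0 and b = A^T y.
    b = A.T @ y
    z = np.zeros_like(b)
    u = np.zeros_like(b)
    h = np.zeros_like(b)
    for W in weights:
        h = W @ (z + u) + b
        z = soft_threshold(h - u, lam)
        u = u - gamma * (h - z)
    return h

# Illustrative usage with placeholder dimensions and random weights.
rng = np.random.default_rng(0)
n_y, n_x, L = 20, 50, 10
A = rng.standard_normal((n_y, n_x)) / np.sqrt(n_y)
weights = [0.1 * rng.standard_normal((n_x, n_x)) for _ in range(L)]
y = rng.standard_normal(n_y)
x_hat_ista = ista_forward(y, A, weights, lam=0.5)
x_hat_admm = admm_forward(y, A, weights, lam=0.5, gamma=0.5)
```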
For a depth-L ISTA or ADMM network, the learnable parameters are the weight matrices {W l}Ll=1. The weights are learnt by minimizing a loss function L on a set of m training examples S = {(xi,yi)}mi=1, drawn from an unknown distribution D, consistent with the model in (1). We consider the case where the per-example loss function is obtained by averaging over the example per-coordinate losses:
$\mathcal{L}\left(h(y), x\right) = \frac{1}{n_x}\sum_{j=1}^{n_x} \ell\left(h_j(y), x_j\right)$ (5)
where hj(y) and xj denote the jth coordinate of the estimated and true targets, and ℓ : R×R −→ R+ is 1-Lipschitz in its first argument. This requirement is satisfied in many practical settings, for example with the p-power of Lp norms, and is also required in a related work (Xu et al., 2016). The loss of an estimator h ∈ H which measures the difference between the true value x and the estimation h(y), is denoted for convenience by L(h) = L (h(y),x). There exists additional forms of learned ISTA and ADMM networks, which include learning an additional set of weight matrices affecting the bias terms (Monga et al., 2021). Also, the optimal value of λ generally depends on the target vector sparsity level. Note however that for learned networks, the value of λ at each layer can also be learned. However, here we focus on a more basic architecture with fixed λ in order to draw theoretical conclusions.
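For concreteness, the averaged per-coordinate loss in (5) can be written as below, using the 1-Lipschitz absolute-error loss ℓ(a, b) = |a − b| as one admissible example; the helper names are assumptions for illustration.

```python
import numpy as np

def per_example_loss(h_y, x):
    # Loss (5): average of per-coordinate losses, here with the 1-Lipschitz choice l(a, b) = |a - b|.
    return np.mean(np.abs(h_y - x))

def empirical_loss(network, samples):
    # Average empirical loss L_S(h) over a training set S = {(x_i, y_i)}.
    return np.mean([per_example_loss(network(y), x) for (x, y) in samples])
```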
2.2 GENERALIZATION AND ESTIMATION ERRORS
In this work, we focus on upper bounding the GE and EE of the model-based neural networks of Fig. 1. The GE of a class of estimation functions h ∈ H, such that h : Rny −→ Rnx , is defined as
$\mathcal{G}(\mathcal{H}) = \mathbb{E}_S \sup_{h\in\mathcal{H}} \mathcal{L}_D(h) - \mathcal{L}_S(h)$ (6)
where LD(h) = EDL(h) is the expected loss with respect to the data distribution D (the joint probability distribution of the targets and observations), LS(h) = 1m ∑m i=1 L(h(yi),xi) is the average empirical loss with respect to the training set S, and ES is the expectation over the training datasets. The GE is a global property of the class of estimators, which captures how the class of estimators H is suited to the learning problem. Large GE implies that there are hypotheses in H for which LD deviates much from LS , on average over S. However, the GE in (6) does not capture how the learning algorithm chooses the estimator h ∈ H. In order to capture the effect of the learning algorithm, we consider local properties of the class of estimators, and focus on bounding the estimation error (EE)
$\mathcal{E}(\mathcal{H}) = \mathcal{L}_D(\hat{h}) - \inf_{h\in\mathcal{H}} \mathcal{L}_D(h)$ (7)
where ĥ is the ERM satisfying LS(ĥ) = infh∈H LS(h). We note that the ERM approximates the learned estimator hlearned, which is obtained by training the network on a set of training examples S, using algorithms such as SGD. However, the estimator hlearned depends on the optimization algorithm, and can differ from the ERM. The difference between the empirical loss associated with ĥ and the empirical loss associated with hlearned is usually referred to as the optimization error.
Common deep neural network architectures have large GE compared to the low EE achieved in practice. This still holds, when the networks are not trained with explicit regularization, such as weight decay, early stopping, or drop-outs (Srivastava et al., 2014; Neyshabur et al., 2015b). This empirical phenomena is experienced across various architectures and hyper-parameter choices (Liang et al., 2019; Novak et al., 2018; Lee et al., 2018; Neyshabur et al., 2018).
2.3 RADEMACHER COMPLEXITY BASED BOUNDS
The RC is a standard tool which captures the ability of a class of functions to approximate noise, where a lower complexity indicates a reduced generalization error. Formally, the empirical RC of a class of scalar estimators H, such that h : Rny −→ R for h ∈ H, over m samples is
$\mathcal{R}_m(\mathcal{H}) = \mathbb{E}_{\{\epsilon_i\}_{i=1}^m} \sup_{h\in\mathcal{H}} \frac{1}{m}\sum_{i=1}^m \epsilon_i h(y_i)$ (8)
where {ϵi}mi=1 are independent Rademacher random variables for which Pr (ϵi = 1) = Pr (ϵi = −1) = 1/2, the samples {yi}mi=1 are obtained by the model in (1) from m i.i.d. target vectors {xi}mi=1 drawn from an unknown distribution. Taking the expectation of the RC of L ◦H with respect to the set of examples S presented in Section 2.1, leads to a bound on the GE
$\mathcal{G}(\mathcal{H}) \le 2\,\mathbb{E}_S\mathcal{R}_m(\mathcal{L}\circ\mathcal{H})$ (9)
where $\mathcal{L}$ is defined in Section 2.1 and $\mathcal{L}\circ\mathcal{H} = \{(x, y)\mapsto \mathcal{L}(h(y), x) : h\in\mathcal{H}\}$ (Shalev-Shwartz & Ben-David, 2014). We observe that the class of functions $\mathcal{L}\circ\mathcal{H}$ consists of scalar functions, such that $f : \mathbb{R}^{n_x}\times\mathbb{R}^{n_x}\to\mathbb{R}$ for $f\in\mathcal{L}\circ\mathcal{H}$. Therefore, in order to bound the GE of the class of functions defined by ISTA and ADMM networks, we can first bound their RC.
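A rough Monte Carlo sketch of the empirical RC in (8) is given below; it approximates the supremum over H by a maximum over a finite set of sampled hypotheses, which is only an approximation of the true supremum.

```python
import numpy as np

def empirical_rademacher(hypotheses, ys, n_draws=200, seed=0):
    # Monte Carlo approximation of (8): average over random sign vectors of the
    # maximum (over the sampled hypotheses) of (1/m) * sum_i eps_i * h(y_i).
    rng = np.random.default_rng(seed)
    m = len(ys)
    outputs = np.array([[h(y) for y in ys] for h in hypotheses])  # shape (num_hypotheses, m)
    vals = []
    for _ in range(n_draws):
        eps = rng.choice([-1.0, 1.0], size=m)
        vals.append(np.max(outputs @ eps) / m)
    return float(np.mean(vals))
```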
Throughout the paper, we compare the model-based architectures with a feed forward network with ReLU activations, given by ReLU(h) = max (h,0). In this section, we review existing bounds for the generalization error of these networks. The layers of a ReLU network hlR, ∀l ∈ [1, L], are defined by the following recurrence relation hlR = ReLU ( W lhl−1R + b ) , and h0R = ReLU(b), where W l, ∀l ∈ [1, L] are the weight matrices which satisfy the same conditions as the weight matrices of ISTA and ADMM networks. The class of functions representing the output at depth L in a ReLU network, is HLR = {hLR({W l}Ll=1) : ||W l||∞ ≤ Bl l ∈ [1, L], ||W 1||2 ≤ B1}. This architecture leads to the following bound on the GE. Theorem 1 (Generalization error bound for ReLU networks (Gao & Zhou, 2016)). Consider the class of feed forward networks of depth-L with ReLU activations, HLR, as described in Section 2.3, and m i.i.d. training samples. Given a 1-Lipschitz loss function, its GE satisfies G ( HLR ) ≤ 2GlR,
where $G^L_R = \frac{B_0\prod_{l=1}^{L} B_l}{\sqrt{m}}$.
Proof. Follows from applying the bound of the RC of ReLU neural networks from (Gao & Zhou, 2016) and combining it with (9).
The bound in Theorem 1 is satisfied for any feed forward network with 1-Lipschitz nonlinear activations (including ReLU), and can be generalized for networks with activations with different Lipschitz constants. We show in Theorem 2, that the bound presented in Theorem 1 cannot be substantially improved for ReLU networks with the RC framework. Theorem 2 (Lower Rademacher complexity bound for ReLU networks (Bartlett et al., 2017)). Consider the class of feed forward networks of depth-L with ReLU activations, where the weight matrices have bounded spectral norm $||W^l||_\sigma \le B'_l$, $l \in [1, L]$. The dimension of the output layer is 1, and the dimension of each non-output layer is at least 2. Given m i.i.d. training samples, there exists a $c$ such that $\mathcal{R}_m(\mathcal{H}'^{,L}_R) \ge cB_0\prod_{l=1}^L B'_l$, where $\mathcal{H}'^{,L}_R = \{h^L_R(\{W^l\}_{l=1}^L) : ||W^l||_\sigma \le B'_l,\ l \in [1, L]\}$.
This result shows that using the RC framework the GE of ReLU networks behaves as the product of the weight matrices’ norms ∏L l=1Bl, as captured in Theorem 1. Theorem 2, implies that the dependence on the weight matrices’ norms, cannot be substantially improved with the RC framework for ReLU networks.
3 GENERALIZATION ERROR BOUNDS: GLOBAL PROPERTIES
In this section, we derive theoretical bounds on the GE of learned ISTA and ADMM networks. From these bounds we deduce design rules to construct ISTA and ADMM networks with a GE which does not increase exponentially with the number of layers. We start by presenting theoretical guarantees on the RC of any class of functions, after applying the soft-thresholding operation.
Soft-thresholding is a basic block that appears in multiple iterative algorithms, and therefore is used as the nonlinear activation in many model-based networks. It results from the proximal gradient of the L1 norm (Palomar & Eldar, 2010). We therefore start by presenting the following lemma which expresses how the RC of a class of functions is affected by applying soft-thresholding to each function in the class. The proof is provided in the supplementary material. Lemma 1 (Rademacher complexity of soft-thresholding). Given any class of scalar functions H where h : Rn −→ R, h ∈ H for any integer n ≥ 1, and m i.i.d. training samples,
$\mathcal{R}_m(S_\lambda \circ \mathcal{H}) \le \mathcal{R}_m(\mathcal{H}) - \frac{\lambda T}{m}$, (10)
where $T = \sum_{i=1}^m T_i$ and $T_j = \mathbb{E}_{\{\epsilon_i\}_{i=2}^m}\left(\mathbb{1}_{h^\star(y_j)>\lambda \wedge h'^\star(y_j)<-\lambda}\right)$ such that
$h^\star, h'^\star = \arg\max_{h,h'\in\mathcal{H}}\left(h(y_1) - h'(y_1) - 2\lambda\mathbb{1}_{h(y_1)>\lambda \wedge h'(y_1)<-\lambda} + \sum_{i=2}^m \epsilon_i\left(S_\lambda(h(y_i)) + S_\lambda(h'(y_i))\right)\right)$.
The quantity T is a non-negative value obtained during the proof, which depends on the networks’ number of layers, underlying data distribution D and soft-threshold value λ. As seen from (10), the value of T dictates the reduction in RC due to soft-thresholding, where a reduction in the RC can also be expected. The value of T increases as λ decreases. In the case that λT increases with λ, higher values of λ further reduce the RC of the class of functions H, due to the soft-thresholding. We now focus on the class of functions representing the output of a neuron at depth L in an ISTA network, HLI . In the following theorem, we bound its GE using the RC framework and Lemma 1. The proof is provided in the supplementary material. Theorem 3 (Generalization error bound for ISTA networks). Consider the class of learned ISTA networks of depth L as described in (2), and m i.i.d. training samples. Then there exist T (l) for
$l \in [1, L]$ in the range $T^{(l)} \in \left[0, \min\left\{\frac{m B_l G^{l-1}_I}{\lambda}, m\right\}\right]$ such that $\mathcal{G}(\mathcal{H}^L_I) \le 2\,\mathbb{E}_S G^L_I$, where
$$G^l_I = \frac{B_0\prod_{l'=1}^{l} B_{l'}}{\sqrt{m}} - \frac{\lambda}{m}\sum_{l'=1}^{l-1} T^{(l')}\prod_{j=l'+1}^{l} B_j - \frac{\lambda T^{(l)}}{m}. \qquad (11)$$
Next, we show that for a specific distribution, the expected value of $T^{(l)}$ is greater than 0. Under an additional bound on the expectation value of the estimators (specified in the supplementary material),
$$\mathbb{E}_S\left(T^{(l)}\right) \ge \max\left\{m\left(1 - 2e^{-(cB_l b^{(l)} - \lambda)}\right), 0\right\} \qquad (12)$$
where $b^{(l+1)} = B_l b^{(l)} - \lambda$, $b^{(1)} = B_0$, and $c \in (0, 1]$. Increasing λ or decreasing B will decrease the bound in (12), since crossing the threshold is less probable. Depending on λ and $\{B_l\}_{l=1}^L$ (specifically that $\lambda \le cB_l b^{(l)} + \ln 2$), the bound in (12) is positive, and enforces a non-zero reduction in the GE. Along with Theorem 3, this shows the expected reduction in GE of ISTA networks compared to ReLU networks. The reduction is controlled by the value of the soft threshold.
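To see when the right-hand side of (12) is strictly positive, one can simply evaluate it numerically; the constants below are hypothetical and serve only to illustrate the behaviour of the bound.

```python
import numpy as np

def T_lower_bound(m, c, B_l, b_l, lam):
    # Right-hand side of (12): max{ m * (1 - 2*exp(-(c*B_l*b^(l) - lam))), 0 }.
    # It is strictly positive exactly when c*B_l*b^(l) - lam > ln(2).
    return max(m * (1.0 - 2.0 * np.exp(-(c * B_l * b_l - lam))), 0.0)

# Hypothetical values.
print(T_lower_bound(m=100, c=1.0, B_l=1.2, b_l=2.0, lam=0.5))  # positive reduction
print(T_lower_bound(m=100, c=1.0, B_l=1.2, b_l=2.0, lam=5.0))  # clipped to 0
```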
To obtain a more compact relation, we can choose the maximal matrices' norm $B = \max_{l\in[1,L]} B_l$, and denote $T = \min_{l\in[1,L]} T^{(l)} \in [0, m]$, which leads to $\mathcal{G}(\mathcal{H}^L_I) \le 2\left(\frac{B_0 B^L}{\sqrt{m}} - \frac{\lambda\mathbb{E}_S(T)}{m}\frac{B^L - 1}{B - 1}\right)$.
Comparing this bound with the GE bound for ReLU networks presented in Theorem 1, shows the expected reduction due to the soft thresholding activation. This result also implies practical rules for designing low generalization error ISTA networks. We note that the network’s parameters such as the soft-threshold value λ and number of samples m, are predefined by the model being solved (for example, in ISTA, the value of λ is chosen according to the singular values of A).
We derive an implicit design rule from (3), for a nonincreasing GE, as detailed in Section A.2. This is done by restricting the matrices' norm to satisfy $B \le 1 + \frac{\lambda\mathbb{E}_S(T)}{\sqrt{m}B_0}$. Moreover, these results can be extended to convolutional neural networks. As convolution operations can be expressed via multiplication with a convolution matrix, the presented results are also satisfied in that case.
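A small numerical sketch of the compact ISTA bound above, the ReLU bound of Theorem 1, and the implicit design rule follows; the values of B0, B, L, m, λ, and E_S(T) are hypothetical placeholders chosen only to illustrate the comparison.

```python
import numpy as np

def relu_ge_bound(B0, B, L, m):
    # Theorem 1 with B = max_l B_l: G <= 2 * B0 * B^L / sqrt(m).
    return 2.0 * B0 * B**L / np.sqrt(m)

def ista_ge_bound(B0, B, L, m, lam, ET):
    # Compact ISTA bound: G <= 2 * (B0*B^L/sqrt(m) - (lam*E_S(T)/m) * (B^L - 1)/(B - 1)).
    return 2.0 * (B0 * B**L / np.sqrt(m) - (lam * ET / m) * (B**L - 1.0) / (B - 1.0))

# Hypothetical values.
B0, B, L, m, lam, ET = 1.0, 1.05, 10, 1000, 0.5, 5.0
print(relu_ge_bound(B0, B, L, m), ista_ge_bound(B0, B, L, m, lam, ET))
# Implicit design rule for a nonincreasing GE: B <= 1 + lam*E_S(T) / (sqrt(m) * B0).
print(B <= 1.0 + lam * ET / (np.sqrt(m) * B0))
```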
Similarly, we bound the GE of the class of functions representing the output at depth L in an ADMM network, HLA. The proof and discussion are provided in the supplementary material. Theorem 4 (Generalization error bound for ADMM networks). Consider the class of learned ADMM networks of depth L as described in (4), and m i.i.d. training samples. Then there exist
$T^{(l)}$ for $l \in [1, L-1]$ in the interval $T^{(l)} \in \left[0, \min\left\{\frac{m\tilde{B}_l G^{l-1}_A}{\tilde{\lambda}}, m\right\}\right]$ where $G^l_A = \frac{B_0\prod_{l'=1}^{l-1}\tilde{B}_{l'}}{\sqrt{m}} - \frac{\tilde{\lambda}}{m}\sum_{l'=1}^{l-2} T^{(l')}\prod_{j=l'+1}^{l-1}\tilde{B}_j - \frac{\tilde{\lambda}T^{(l-1)}}{m}$, and $\tilde{\lambda} = (1+\gamma)\lambda$, $\tilde{B}_l = (1+2\gamma)(B_l + 2)$, $l \in [1, L]$, such that $\mathcal{G}(\mathcal{H}^L_A) \le 2\tilde{B}_L\,\mathbb{E}_S G^{L-1}_A$.
We compare the GE bounds for ISTA and ADMM networks, to the bound on the GE of ReLU networks presented in Theorem 1. We observe that both model-based networks achieve a reduction in the GE, which depends on the soft-threshold, the underlying data distribution, and the bound on the norm of the weight matrices. Following the bound, we observe that the soft-thresholding nonlinearity is most valuable in the case of small number of training samples. The soft-thresholding nonlinearity is the key that enables reducing the GE of the ISTA and ADMM networks compared to ReLU networks. Next, we focus on bounding the EE of feed-forward networks based on the LRC framework.
4 ESTIMATION ERROR BOUNDS: LOCAL PROPERTIES
To investigate the model-based networks’ EE, we use the LRC framework (Bartlett et al., 2005). Instead of considering the entire class of functions H, the LRC considers only estimators which are close to the optimal estimator Hr = { h ∈ H : ED ∥h(y)− h∗(y)∥22 ≤ r } , where h∗ is such that LD(h ∗) = infh∈H LD(h). It is interesting to note that the class of estimators Hr only restricts the distance between the estimators themselves, and not between their corresponding losses. Following the LRC framework, we consider target vectors ranging in [−1, 1]nx . Therefore, we adapt the networks’ estimations by clipping them to lie in the interval [−1, 1]nx . In our case we consider the restricted classes of functions representing the output of a neuron at depth L in ISTA, ADMM, and ReLU networks. Moreover, we denote by W l and W l,∗, l ∈ [1, L] the weight matrices corresponding to h ∈ Hr and h∗, respectively. Based on these restricted class of functions, we present the following assumption and theorem (the proof is provided in the supplementary material). Assumption 1. There exists a constant C ≥ 1 such that for every probability distribution D, and estimator h ∈ H, ED ∑nx j=1(hj − h∗j )2 ≤ CED ∑nx j=1 ( ℓ(hj)− ℓ(h∗j ) ) , where hj and h∗j denote the jth coordinate of the estimators.
As pointed out in (Bartlett & Mendelson, 2002), this condition usually follows from a uniform convexity condition on the loss function ℓ. For instance, if |h(y)− x| ≤ 1 for any h ∈ H,y ∈ Rny and x ∈ Rnx , then the condition is satisfied with C = 1 (Yousefi et al., 2018). Theorem 5 (Estimation error bound of ISTA, ADMM, and ReLU networks). Consider the class of functions represented by depth-L ISTA networks HLI as detailed in Section 2.1, m i.i.d training samples, and a per-coordinate loss satisfying Assumption 1 with a constant C. Let ||W l − W l,∗||∞ ≤ α √ r for some α > 0. Moreover, B ≥ max{α √ r, 1}. Then there exists
$T$ in the interval $T \in \left[0, \min\left\{\frac{\sqrt{m}B_0 B^{L-1}2^L}{\lambda\eta}, m\right\}\right]$, where $\eta = \frac{LB^{L-1}(B-1) - B^L + 1}{(B-1)^2}$, such that for any $s > 0$, with probability at least $1 - e^{-s}$,
$$\mathcal{E}(\mathcal{H}^L_I) \le 41 r^\ast + \frac{17C^2 + 48C}{m n_x}s \qquad (13)$$
where $r^\ast = C^2\alpha^2\left(\frac{B_0 B^{L-1}2^L}{\sqrt{m}} - \frac{\lambda T}{m}\eta\right)^2$. The bound is also satisfied for the class of functions represented by depth-L ADMM networks $\mathcal{H}^L_A$, with $r^\ast = C^2\alpha^2\left(\frac{B_0\tilde{B}^{L-2}2^{L-1}}{\sqrt{m}} - \frac{\tilde{\lambda}T}{m}\tilde{\eta}\right)^2$, where $\tilde{\lambda} = (1+\gamma)\lambda$, $\tilde{B} = (1+2\gamma)(B+2)$, and $\tilde{\eta} = \frac{(L-1)\tilde{B}^{L-2}(\tilde{B}-1) - \tilde{B}^{L-1} + 1}{(\tilde{B}-1)^2}$, and for the class of functions represented by depth-L ReLU networks $\mathcal{H}^L_R$, with $r^\ast = C^2\alpha^2\left(\frac{B_0 B^{L-1}2^L}{\sqrt{m}}\right)^2$.
From Theorem 5, we observe that the EE decreases by a factor of O (1/m), instead of a factor of O (1/ √ m) obtained for the GE. This result complies with previous local bounds which yield faster convergence rates compared to global bounds (Blanchard et al., 2007; Bartlett et al., 2005). Also, the value of α relates the maximal distance between estimators in Hr denoted by r, to the distance between their corresponding weight matrices ||W l −W l,∗||∞ ≤ α √ r, l ∈ [1, L]. Tighter bounds on the distance between the weight matrices allow us to choose a smaller value for α, resulting in smaller values r∗ which improve the EE bounds. The value of α could depend on the network’s nonlinearity, underlying data distribution D, and number of layers. Note that the bounds of the model-based architectures depend on the soft-thresholding through the value of λES(T ). As λES(T ) increases, the bound on the EE decreases, which emphasizes the nonlinearity’s role in the network’s generalization abilities. Due to the soft-thresholding, ISTA and ADMM networks result in lower EE bounds compared to the bound for ReLU networks. It is interesting to note, that as the number of training samples m increases, the difference between the bounds on the model-based and ReLU networks is less significant. In the EE bounds of model-based networks, the parameter B0 relates the bound to the sparsity level ρ, of the target vectors. Lower values of ρ result in lower EE bounds, as demonstrated in the supplementary.
5 NUMERICAL EXPERIMENTS
In this section, we present a series of experiments that concentrate on how a particular model-based network (ISTA network) compares to a ReLU network, and showcase the merits of model-based networks. We focus on networks with 10 layers (similar to previous works (Gregor & LeCun, 2010)), to represent realistic model-based network architectures. The networks are trained on a simulated dataset to solve the problem in (1), with target vectors uniformly distributed in [−1, 1]. The linear mapping A is constructed from the real part of discrete Fourier transform (DFT) matrix rows (Ong et al., 2019), where the rows are randomly chosen. The sparsity rate is ρ = 0.15, and the noise’s standard deviation is 0.1. To train the networks we used the SGD optimizer with the L1 loss over all neurons of the last layer. The target and noise vectors are generated as element wise independently from a uniform distribution ranging in [−1, 1] . All results are reproducible through (Authors, 2022) which provides the complete code to execute the experiments presented in this section.
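The data-generation procedure described above can be sketched as follows; the exact dimensions and row-selection scheme are not fully specified in the text, so the choices below (and the helper name) are assumptions for illustration.

```python
import numpy as np

def make_dataset(m, n_x=100, n_y=50, rho=0.15, noise_std=0.1, seed=0):
    # Synthetic sparse-recovery data for (1): y = A x + e, with A built from randomly chosen
    # rows of the real part of the DFT matrix, sparsity rate rho, and uniform targets in [-1, 1].
    rng = np.random.default_rng(seed)
    dft = np.fft.fft(np.eye(n_x)) / np.sqrt(n_x)
    rows = rng.choice(n_x, size=n_y, replace=False)
    A = np.real(dft[rows, :])
    X = np.zeros((m, n_x))
    for i in range(m):
        support = rng.choice(n_x, size=int(np.floor(rho * n_x)), replace=False)
        X[i, support] = rng.uniform(-1.0, 1.0, size=len(support))
    a = noise_std * np.sqrt(3.0)  # uniform noise on [-a, a] has standard deviation noise_std
    E = rng.uniform(-a, a, size=(m, n_y))
    Y = X @ A.T + E
    return A, X, Y
```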
We concentrate on comparing the networks’ EE, since in practice, the networks are trained with a finite number of examples. In order to empirically approximate the EE of a class of networks H, we use an empirical approximation of h∗ (which satisfy LD(h∗) = infh∈H LD(h)) and the ERM ĥ, denoted by h∗emp and ĥemp respectively. The estimator h ∗ emp results from the trained network with 104 samples, and the ERM is approximated by a network trained using SGD with m training samples (where m ≤ 104). The empirical EE is given by their difference LD(ĥemp)− LD(h∗emp). In Fig. 2, we compare between the ISTA and ReLU networks, in terms of EE and L1 loss, for networks trained with different number of samples (between 10 and 104 samples). We observe that for small number of training samples, the ISTA network substantially reduces the EE compared to the ReLU network. This can be understood from Theorem 5, which results in lower EE bounds on the ISTA networks compared to ReLU networks, due to the term λES(T ). However, for large number of samples the EE of both networks decreases to zero, which is also expected from Theorem 5. This highlights that the contribution of the soft-thresholding nonlinearity to the generalization abilities of the network is more significant for small number of training samples. Throughout the paper, we considered networks with constant bias terms. In this section, we also consider learned bias terms, as detailed in the supplementary material. In Fig. 2, we present the experimental results for networks with constant and learned biases. The experiments indicate that the choice of constant or learned bias is less significant to the EE or the accuracy, compared to the choice of nonlinearity, emphasizing the relevance of the theoretical guarantees. The cases of learned and constant biases have different optimal estimators, as the networks with learned biases have more learned parameters. As a result, it is plausible a network with more learnable parameters (the learnable bias) exhibits a lower estimation error since the corresponding optimal estimator also exhibits a lower estimation error. To analyze the effect of the soft-thresholding value on the generalization abilities of the ISTA network, we show in Fig. 3, the empirical EE for multiple values of λ. The experimental results demonstrate that for small number of samples, increasing λ reduces the EE. As expected from the EE bounds, for a large number of training samples, this dependency on the nonlinearity vanishes, and the EE is similar for all values
of λ. In Fig. 3b, we show the L1 loss of the ISTA networks for different values of λ. We observe that low estimation error does not necessarily lead to low loss value. For m ≲ 100, increasing λ reduced the EE. These results suggest that given ISTA networks with different values of λ such that all networks achieve similar accuracy, the networks with higher values of λ provide lower EE. These results are also valid for additional networks’ depths. In Section B, we compare the EE for networks with different number of layers, and show that they exhibit a similar behaviour.
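The empirical EE reported in Figs. 2 and 3 is the gap between the loss of the network trained on m samples and the loss of the reference network trained on 10^4 samples; a sketch of this evaluation, with hypothetical train and evaluate_loss helpers standing in for the paper's training code, is:

```python
def empirical_estimation_error(train, evaluate_loss, data_small, data_large, data_test):
    # h*_emp: network trained on the large (10^4-sample) set, approximating the optimal estimator.
    h_star_emp = train(data_large)
    # hat{h}_emp: ERM approximated by training on the m-sample set with SGD.
    h_hat_emp = train(data_small)
    # Empirical EE: L_D(hat{h}_emp) - L_D(h*_emp), with L_D approximated on held-out test data.
    return evaluate_loss(h_hat_emp, data_test) - evaluate_loss(h_star_emp, data_test)
```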
6 CONCLUSION
We derived new GE and EE bounds for ISTA and ADMM networks, based on the RC and LRC frameworks. Under suitable conditions, the model-based networks’ GE is nonincreasing with depth, resulting in a substantial improvement compared to the GE bound of ReLU networks. The EE bounds explain EE behaviours experienced in practice, such as ISTA networks demonstrating higher estimation abilities, compared to ReLU networks, especially for small number of training samples. Through a series of experiments, we show that the generalization abilities of ISTA networks are controlled by the soft-threshold value, and achieve lower EE along with a more accurate recovery compared to ReLU networks which increase the GE and EE with the networks’ depths.
It is interesting to consider how the theoretical insights can be harnessed to enforce neural networks with high generalization abilities. One approach is to introduce an additional regularizer during the training process that is rooted in the LRC, penalizing networks with high EE (Yang et al., 2019). | 1. What is the focus and contribution of the paper regarding generalization and estimation error bounds for model-based neural networks with soft-thresholding activation functions?
2. What are the strengths of the proposed approach, particularly in its mathematical analyses and novel results?
3. What are the weaknesses of the paper, especially regarding its claims and comparisons with other works like ReLU?
4. Do you have any questions regarding the proof of Lemma 1 or its implications for the main results of the work?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
The paper studies generalization and estimation error bounds for model-based neural networks that employ soft-thresholding activation functions. By re-examining the proof of the contraction lemma (of Rademacher complexity) for soft-thresholding functions, the paper shows that applying soft-thresholding potentially decreases complexity (although the Lipschitz constant of soft-thresholding is 1).
Using this result, the paper establishes norm-based bounds on generalization error (GE) and estimation error (EE) of these networks for the problem of sparse vector recovery. The manuscript claims that the GE and EE bounds of the soft-thresholding network are better than those of ReLU. These findings are then validated through experiments.
Strengths And Weaknesses
Strengths
The paper provides theoretical insights into the simple (but important) intuition that model-based methods should perform better than model-free approaches. The main ideas and techniques of the work were developed based on well-founded rationales and in general, the paper is well-written with a sufficient literature review.
The mathematical analyses of the work are rigorous and seem correct. The results of the work, which relies on directly bounding the Rademacher complexity, are significantly stronger and more accessible (readable) than previous works (which focused on estimating Rademacher complexity through covering numbers).
The bound on Rademacher complexity of soft-thresholding is novel and meaningful and could potentially be of interest to a broader audience in the field beyond the context of model-based networks.
If the comment below (in the Weakness section) is well addressed, the bound of the Theorem shows that a nonincreasing GE as a function of the network’s depth is achievable, by limiting the weights’ norm in the network. This is an strong improvement over existing bounds, which exhibit a logarithmic increase of the GE with depth.
Weaknesses
In my opinion, the paper, in its current state, overstates the significance of its results. I don’t think the results support the claim that their bounds on GE and EE are better than those of ReLU.
While Lemma 1 establishes that applying soft-thresholding potentially decreases complexity, it doesn’t come with any estimation of the improvement (T) and it is not even clear that T is non-zero. In fact, the statements of the theorems all specify that T’s could be zero. The only place where T is estimated is in the discussion after the proof of Lemma 1 in the Appendix, which states that in certain scenarios, it is possible to bound T from below by an explicit constant depending on m. However, the result is written in a general context and it is unclear how this result applies to the studied networks.
I strongly suggest that these lower bounds on T are rigorously stated and proved to support the main results of the work. Discussions about the dependence of the improvements on lambda, are not correct if we are not certain that T is non-zero. An explicit bound, even if further assumptions are made, would provide more insights about the order of the improvements, which would be of broad interest to the readers.
Similarly, in its current state, Lemma 1 is trivial and doesn’t need its complicated proof. To prove its current statement (existence of T in [0, …] such that…), we just need to establish the inequality for T = 0, which can be directly deduced from the contraction lemma (soft-thresholding has Lipschitz constant 1).
Questions
In my understanding, the main idea of the proof of Lemma 1 is that although the Lipschitz constant of soft-thresholding is one (it is a translation in certain regions), there are sub-regions of RxR where |S(x) - S(y)| < |x - y| - 2\lambda, and the “improvement” depends on the measure of such sub-regions, specifically, on
$\{h^\ast(y) > \lambda \ \text{and}\ h'^\ast(y) < -\lambda\}$.
My question is, the ReLU function also ‘contract’ on this region, and the contraction is at least lambda, i.e., |S(x) - S(y)| < |x - y| - \lambda if x>\lambda and y<-\lambda. Does that also mean that we can obtain a similar result for ReLU using the same argument?
(Optional. This is not a part of my decision assessment) In the classical proof of the contraction lemma (e.g., Lemma 26.9 of Understanding Machine Learning), the absolute value in the bound can be removed and added when convenient because of the symmetry of h and h’. In the proof of Lemma 1, cases (a, b) is treated differently from (b, a). Is there any particular reason for that?
Additional feedbacks
Typos: Section 1.1: “logarithnmic”
Clarity, Quality, Novelty And Reproducibility
(see Strengths and Weaknesses) |
ICLR | Title
Generalization and Estimation Error Bounds for Model-based Neural Networks
Abstract
Model-based neural networks provide unparalleled performance for various tasks, such as sparse coding and compressed sensing problems. Due to the strong connection with the sensing model, these networks are interpretable and inherit prior structure of the problem. In practice, model-based neural networks exhibit higher generalization capability compared to ReLU neural networks. However, this phenomenon has not been addressed theoretically. Here, we leverage complexity measures including the global and local Rademacher complexities, in order to provide upper bounds on the generalization and estimation errors of model-based networks. We show that the generalization abilities of model-based networks for sparse recovery outperform those of regular ReLU networks, and derive practical design rules that allow constructing model-based networks with guaranteed high generalization. We demonstrate through a series of experiments that our theoretical insights shed light on a few behaviours experienced in practice, including the fact that ISTA and ADMM networks exhibit higher generalization abilities (especially for a small number of training samples) compared to ReLU networks.
1 INTRODUCTION
Model-based neural networks provide unprecedented performance gains for solving sparse coding problems, such as the learned iterative shrinkage and thresholding algorithm (ISTA) (Gregor & LeCun, 2010) and learned alternating direction method of multipliers (ADMM) (Boyd et al., 2011). In practice, these approaches outperform feed-forward neural networks with ReLU nonlinearities.
These neural networks are usually obtained from algorithm unrolling (or unfolding) techniques, which were first proposed by Gregor and LeCun (Gregor & LeCun, 2010), to connect iterative algorithms to neural network architectures. The trained networks can potentially shed light on the problem being solved. For ISTA networks, each layer represents an iteration of a gradient-descent procedure. As a result, the output of each layer is a valid reconstruction of the target vector, and we expect the reconstructions to improve with the network’s depth. These networks capture original problem structure, which translates in practice to a lower number of required training data (Monga et al., 2021). Moreover, the generalization abilities of model-based networks tend to improve over regular feed-forward neural networks (Behboodi et al., 2020; Schnoor et al., 2021).
Understanding the generalization of deep learning algorithms has become an important open question. The generalization error of machine learning models measures the ability of a class of estimators to generalize from training to unseen samples, and avoid overfitting the training (Jakubovitz et al., 2019). Surprisingly, various deep neural networks exhibit high generalization abilities, even for increasing networks’ complexities (Neyshabur et al., 2015b; Belkin et al., 2019). Classical machine learning
∗This project has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research, the innovation programme (grant agreement No. 101000967), and the Israel Science Foundation under Grant 536/22. Y. C. Eldar and M. R. D. Rodrigues are supported by The Weizmann-UK Making Connections Programme (Ref. 129589). M. R. D. Rodrigues is also supported by the Alan Turing Institute. The authors wish to thank Dr. Gholamali Aminian from the Alan Turing Institute, UK, for his contribution to the proofs’ correctness.
measures such as the Vapnik-Chervonenkis (VC) dimension (Vapnik & Chervonenkis, 1991) and Rademacher complexity (RC) (Bartlett & Mendelson, 2002) predict an increasing generalization error (GE) with the increase of the models’ complexity, and fail to explain the improved generalization observed in experiments. More advanced measures, which consider the training process and result in tighter bounds on the estimation error (EE), were proposed to investigate this gap, such as the local Rademacher complexity (LRC) (Bartlett et al., 2005). To date, to the best of our knowledge, the EE of model-based networks has not been investigated using these complexity measures.
1.1 OUR CONTRIBUTIONS
In this work, we leverage existing complexity measures such as the RC and LRC, in order to bound the generalization and estimation errors of learned ISTA and learned ADMM networks.
• We provide new bounds on the GE of ISTA and ADMM networks, showing that the GE of model-based networks is lower than that of the common ReLU networks. The derivation of the theoretical guarantees combines existing proof techniques for computing the generalization error of multilayer networks with new methodology for bounding the RC of the soft-thresholding operator, that allows a better understanding of the generalization ability of model based networks.
• The obtained bounds translate to practical design rules for model-based networks which guarantee high generalization. In particular, we show that a nonincreasing GE as a function of the network’s depth is achievable, by limiting the weights’ norm in the network. This improves over existing bounds, which exhibit a logarithmic increase of the GE with depth (Schnoor et al., 2021). The GE bounds of the model-based networks suggest that under similar restrictions, learned ISTA networks generalize better than learned ADMM networks.
• We also exploit the LRC machinery to derive bounds on the EE of feed-forward networks, such as ReLU, ISTA, and ADMM networks. The EE bounds depend on the data distribution and training loss. We show that the model-based networks achieve lower EE bounds compared to ReLU networks.
• We focus on the differences between ISTA and ReLU networks, in terms of performance and generalization. This is done through a series of experiments for sparse vector recovery problems. The experiments indicate that the generalization abilities of ISTA networks are controlled by the soft-threshold value. For a proper choice of parameters, ISTA achieves lower EE along with more accurate recovery. The dependency of the EE as a function of λ and the number of training samples can be explained by the derived EE bounds.
1.2 RELATED WORK
Understanding the GE and EE of general deep learning algorithms is an active area of research. A few approaches were proposed, which include considering networks of weights matrices with bounded norms (including spectral and L2,1 norms) (Bartlett et al., 2017; Sokolić et al., 2017), and analyzing the effect of multiple regularizations employed in deep learning, such as weight decay, early stopping, or drop-outs, on the generalization abilities (Neyshabur et al., 2015a; Gao & Zhou, 2016; Amjad et al., 2021). Additional works consider global properties of the networks, such as a bound on the product of all Frobenius norms of the weight matrices in the network (Golowich et al., 2018). However, these available bounds do not capture the GE behaviour as a function of network depth, where an increase in depth typically results in improved generalization. This also applies to the bounds on the GE of ReLU networks, detailed in Section 2.3.
Recently, a few works focused on bounding the GE specifically for deep iterative recovery algorithms (Behboodi et al., 2020; Schnoor et al., 2021). They focus on a broad class of unfolded networks for sparse recovery, and provide bounds which scale logarithmically with the number of layers (Schnoor et al., 2021). However, these bounds still do not capture the behaviours experienced in practice.
Much work has also focused on incorporating the networks’ training process into the bounds. The LRC framework due to Bartlett, Bousquet, and Mendelson (Bartlett et al., 2005) assumes that the training process results in a smaller class of estimation functions, such that the distance between the estimator in the class and the empirical risk minimizer (ERM) is bounded. An additional related framework is the effective dimensionality due to Zhang (Zhang, 2002). These frameworks result in
different bounds, which relate the EE to the distance between the estimators. These local complexity measures were not applied to model-based neural networks.
Throughout the paper we use boldface lowercase and uppercase letters to denote vectors and matrices respectively. The L1 and L2 norms of a vector x are written as ∥x∥1 and ∥x∥2 respectively, and the L∞ (which corresponds to the maximal L1 norm over the matrix’s rows) and spectral norms of a matrix X , are denoted by ∥X∥∞ and ∥X∥σ respectively. We denote the transpose operation by (·)T . For any function f and class of functions H, we define f ◦ H = {x 7→ f ◦ h(x) : h ∈ H} .
2 PRELIMINARIES
2.1 NETWORK ARCHITECTURE
We focus on model-based networks for sparse vector recovery, applicable to the linear inverse problem
y = Ax+ e (1)
where y ∈ Rny is the observation vector with ny entries, x ∈ Rnx is the target vector with nx entries, with nx > ny , A ∈ Rny×nx is the linear operator, and e ∈ Rny is additive noise. The target vectors are sparse with sparsity rate ρ, such that at most ⌊ρnx⌋ entries are nonzero. The inverse problem consists of recovering the target vector x, from the observation vector y.
Given that the target vector is assumed to be sparse, recovering x from y in (1) can be formulated as an optimization problem, such as least absolute shrinkage and selection operator (LASSO) (Tibshirani & Ryan, 2013), that can be solved with well-known iterative methods including ISTA and ADMM. To address more complex problems, such as an unknown linear mapping A, and to avoid having to fine tune parameters, these algorithms can be mapped into model-based neural networks using unfolding or unrolling techniques (Gregor & LeCun, 2010; Boyd et al., 2011; Monga et al., 2021; Yang et al., 2018). The network’s architecture imitates the original iterative method’s functionality and enables to learn the models’ parameters with respect to a set of training examples.
We consider neural networks with L layers (referred to as the network’s depth), which corresponds to the number of iterations in the original iterative algorithm. The layer outputs of an unfolded ISTA network hlI , l ∈ [1, L], are defined by the following recurrence relation, shown in Fig. 1:
$h^l_I = S_\lambda\left(W^l h^{l-1}_I + b\right), \quad h^0_I = S_\lambda(b)$ (2)
where W l ∈ Rnx×nx , l ∈ [1, L] are the weights matrices corresponding to each of the layers, with bounded norms ||W l||∞ ≤ Bl. We further assume that the L2 norm of W 1 is bounded by B1. The vector b = ATy is a constant bias term that depends on the observation y, where we assume that the initial values are bounded, such that ||h0I ||1 ≤ B0. In addition, Sλ(·) is the elementwise soft-thresholding operator Sλ(h) = sign(h)max(|h| − λ,0) (3) where the functions sign(·) and max(·) are applied elementwise, and 0 is a vector of zeros. As Sλ(·) is an elementwise function it preserves the input’s dimension, and can be applied on scalar or vector inputs. The network’s prediction is given by the last layer in the network x̂ = hLI (y). We note that the estimators are functions mapping y to x̂, hLI : Rny −→ Rnx , characterized by the weights, i.e. hLI = h L I ({W l}Ll=1). The class of functions representing the output at depth L in an ISTA network, is HLI = {hLI ({W l}Ll=1) : ||W l||∞ ≤ Bl l ∈ [1, L], ||W 1||2 ≤ B1}. Similarly, the lth layer of unfolded ADMM is defined by the following recurrence relation
$h^l_A = W^l\left(z^{l-1} + u^{l-1}\right) + b$
$z^l = S_\lambda\left(h^l_A - u^{l-1}\right), \quad z^0 = \mathbf{0}$
$u^l = u^{l-1} - \gamma\left(h^l_A - z^l\right), \quad u^0 = \mathbf{0}$ (4)
where 0 is a vector of zeros, b = ATy is a constant bias term, and γ > 0 is the step size derived by the original ADMM algorithm, as shown in Fig. 1. The estimators satisfy hLA : Rny −→ Rnx , and the class of functions representing the output at depth L in an ADMM network, is HLA = {hLA({W l}Ll=1) : ||W l||∞ ≤ Bl l ∈ [1, L], ||W 1||2 ≤ B1}, where we impose the same assumptions on the weights matrices as ISTA networks.
For a depth-L ISTA or ADMM network, the learnable parameters are the weight matrices {W l}Ll=1. The weights are learnt by minimizing a loss function L on a set of m training examples S = {(xi,yi)}mi=1, drawn from an unknown distribution D, consistent with the model in (1). We consider the case where the per-example loss function is obtained by averaging over the example per-coordinate losses:
$\mathcal{L}\left(h(y), x\right) = \frac{1}{n_x}\sum_{j=1}^{n_x} \ell\left(h_j(y), x_j\right)$ (5)
where hj(y) and xj denote the jth coordinate of the estimated and true targets, and ℓ : R×R −→ R+ is 1-Lipschitz in its first argument. This requirement is satisfied in many practical settings, for example with the p-power of Lp norms, and is also required in a related work (Xu et al., 2016). The loss of an estimator h ∈ H which measures the difference between the true value x and the estimation h(y), is denoted for convenience by L(h) = L (h(y),x). There exists additional forms of learned ISTA and ADMM networks, which include learning an additional set of weight matrices affecting the bias terms (Monga et al., 2021). Also, the optimal value of λ generally depends on the target vector sparsity level. Note however that for learned networks, the value of λ at each layer can also be learned. However, here we focus on a more basic architecture with fixed λ in order to draw theoretical conclusions.
2.2 GENERALIZATION AND ESTIMATION ERRORS
In this work, we focus on upper bounding the GE and EE of the model-based neural networks of Fig. 1. The GE of a class of estimation functions h ∈ H, such that h : Rny −→ Rnx , is defined as
$\mathcal{G}(\mathcal{H}) = \mathbb{E}_S \sup_{h\in\mathcal{H}} \mathcal{L}_D(h) - \mathcal{L}_S(h)$ (6)
where LD(h) = EDL(h) is the expected loss with respect to the data distribution D (the joint probability distribution of the targets and observations), LS(h) = 1m ∑m i=1 L(h(yi),xi) is the average empirical loss with respect to the training set S, and ES is the expectation over the training datasets. The GE is a global property of the class of estimators, which captures how the class of estimators H is suited to the learning problem. Large GE implies that there are hypotheses in H for which LD deviates much from LS , on average over S. However, the GE in (6) does not capture how the learning algorithm chooses the estimator h ∈ H. In order to capture the effect of the learning algorithm, we consider local properties of the class of estimators, and focus on bounding the estimation error (EE)
$\mathcal{E}(\mathcal{H}) = \mathcal{L}_D(\hat{h}) - \inf_{h\in\mathcal{H}} \mathcal{L}_D(h)$ (7)
where ĥ is the ERM satisfying LS(ĥ) = infh∈H LS(h). We note that the ERM approximates the learned estimator hlearned, which is obtained by training the network on a set of training examples S, using algorithms such as SGD. However, the estimator hlearned depends on the optimization algorithm, and can differ from the ERM. The difference between the empirical loss associated with ĥ and the empirical loss associated with hlearned is usually referred to as the optimization error.
Common deep neural network architectures have large GE compared to the low EE achieved in practice. This still holds, when the networks are not trained with explicit regularization, such as weight decay, early stopping, or drop-outs (Srivastava et al., 2014; Neyshabur et al., 2015b). This empirical phenomena is experienced across various architectures and hyper-parameter choices (Liang et al., 2019; Novak et al., 2018; Lee et al., 2018; Neyshabur et al., 2018).
2.3 RADEMACHER COMPLEXITY BASED BOUNDS
The RC is a standard tool which captures the ability of a class of functions to approximate noise, where a lower complexity indicates a reduced generalization error. Formally, the empirical RC of a class of scalar estimators H, such that h : Rny −→ R for h ∈ H, over m samples is
$\mathcal{R}_m(\mathcal{H}) = \mathbb{E}_{\{\epsilon_i\}_{i=1}^m} \sup_{h\in\mathcal{H}} \frac{1}{m}\sum_{i=1}^m \epsilon_i h(y_i)$ (8)
where {ϵi}mi=1 are independent Rademacher random variables for which Pr (ϵi = 1) = Pr (ϵi = −1) = 1/2, the samples {yi}mi=1 are obtained by the model in (1) from m i.i.d. target vectors {xi}mi=1 drawn from an unknown distribution. Taking the expectation of the RC of L ◦H with respect to the set of examples S presented in Section 2.1, leads to a bound on the GE
$\mathcal{G}(\mathcal{H}) \le 2\,\mathbb{E}_S\mathcal{R}_m(\mathcal{L}\circ\mathcal{H})$ (9)
where $\mathcal{L}$ is defined in Section 2.1 and $\mathcal{L}\circ\mathcal{H} = \{(x, y)\mapsto \mathcal{L}(h(y), x) : h\in\mathcal{H}\}$ (Shalev-Shwartz & Ben-David, 2014). We observe that the class of functions $\mathcal{L}\circ\mathcal{H}$ consists of scalar functions, such that $f : \mathbb{R}^{n_x}\times\mathbb{R}^{n_x}\to\mathbb{R}$ for $f\in\mathcal{L}\circ\mathcal{H}$. Therefore, in order to bound the GE of the class of functions defined by ISTA and ADMM networks, we can first bound their RC.
Throughout the paper, we compare the model-based architectures with a feed forward network with ReLU activations, given by ReLU(h) = max (h,0). In this section, we review existing bounds for the generalization error of these networks. The layers of a ReLU network hlR, ∀l ∈ [1, L], are defined by the following recurrence relation hlR = ReLU ( W lhl−1R + b ) , and h0R = ReLU(b), where W l, ∀l ∈ [1, L] are the weight matrices which satisfy the same conditions as the weight matrices of ISTA and ADMM networks. The class of functions representing the output at depth L in a ReLU network, is HLR = {hLR({W l}Ll=1) : ||W l||∞ ≤ Bl l ∈ [1, L], ||W 1||2 ≤ B1}. This architecture leads to the following bound on the GE. Theorem 1 (Generalization error bound for ReLU networks (Gao & Zhou, 2016)). Consider the class of feed forward networks of depth-L with ReLU activations, HLR, as described in Section 2.3, and m i.i.d. training samples. Given a 1-Lipschitz loss function, its GE satisfies G ( HLR ) ≤ 2GlR,
where $G^L_R = \frac{B_0\prod_{l=1}^{L} B_l}{\sqrt{m}}$.
Proof. Follows from applying the bound of the RC of ReLU neural networks from (Gao & Zhou, 2016) and combining it with (9).
The bound in Theorem 1 is satisfied for any feed forward network with 1-Lipschitz nonlinear activations (including ReLU), and can be generalized for networks with activations with different Lipschitz constants. We show in Theorem 2, that the bound presented in Theorem 1 cannot be substantially improved for ReLU networks with the RC framework. Theorem 2 (Lower Rademacher complexity bound for ReLU networks (Bartlett et al., 2017)). Consider the class of feed forward networks of depth-L with ReLU activations, where the weight matrices have bounded spectral norm $||W^l||_\sigma \le B'_l$, $l \in [1, L]$. The dimension of the output layer is 1, and the dimension of each non-output layer is at least 2. Given m i.i.d. training samples, there exists a $c$ such that $\mathcal{R}_m(\mathcal{H}'^{,L}_R) \ge cB_0\prod_{l=1}^L B'_l$, where $\mathcal{H}'^{,L}_R = \{h^L_R(\{W^l\}_{l=1}^L) : ||W^l||_\sigma \le B'_l,\ l \in [1, L]\}$.
This result shows that using the RC framework the GE of ReLU networks behaves as the product of the weight matrices’ norms ∏L l=1Bl, as captured in Theorem 1. Theorem 2, implies that the dependence on the weight matrices’ norms, cannot be substantially improved with the RC framework for ReLU networks.
3 GENERALIZATION ERROR BOUNDS: GLOBAL PROPERTIES
In this section, we derive theoretical bounds on the GE of learned ISTA and ADMM networks. From these bounds we deduce design rules to construct ISTA and ADMM networks with a GE which does not increase exponentially with the number of layers. We start by presenting theoretical guarantees on the RC of any class of functions, after applying the soft-thresholding operation.
Soft-thresholding is a basic block that appears in multiple iterative algorithms, and therefore is used as the nonlinear activation in many model-based networks. It results from the proximal gradient of the L1 norm (Palomar & Eldar, 2010). We therefore start by presenting the following lemma which expresses how the RC of a class of functions is affected by applying soft-thresholding to each function in the class. The proof is provided in the supplementary material. Lemma 1 (Rademacher complexity of soft-thresholding). Given any class of scalar functions H where h : Rn −→ R, h ∈ H for any integer n ≥ 1, and m i.i.d. training samples,
$\mathcal{R}_m(S_\lambda \circ \mathcal{H}) \le \mathcal{R}_m(\mathcal{H}) - \frac{\lambda T}{m}$, (10)
where $T = \sum_{i=1}^m T_i$ and $T_j = \mathbb{E}_{\{\epsilon_i\}_{i=2}^m}\left(\mathbb{1}_{h^\star(y_j)>\lambda \wedge h'^\star(y_j)<-\lambda}\right)$ such that
$h^\star, h'^\star = \arg\max_{h,h'\in\mathcal{H}}\left(h(y_1) - h'(y_1) - 2\lambda\mathbb{1}_{h(y_1)>\lambda \wedge h'(y_1)<-\lambda} + \sum_{i=2}^m \epsilon_i\left(S_\lambda(h(y_i)) + S_\lambda(h'(y_i))\right)\right)$.
The quantity $T$ is a non-negative value obtained during the proof, which depends on the network's number of layers, the underlying data distribution $\mathcal{D}$, and the soft-threshold value $\lambda$. As seen from (10), the value of $T$ dictates the reduction in RC due to soft-thresholding. The value of $T$ increases as $\lambda$ decreases. In the case that $\lambda T$ increases with $\lambda$, higher values of $\lambda$ further reduce the RC of the class of functions $\mathcal{H}$, due to the soft-thresholding. We now focus on the class of functions representing the output of a neuron at depth $L$ in an ISTA network, $\mathcal{H}^L_I$. In the following theorem, we bound its GE using the RC framework and Lemma 1. The proof is provided in the supplementary material.
Theorem 3 (Generalization error bound for ISTA networks). Consider the class of learned ISTA networks of depth $L$ as described in (2), and $m$ i.i.d. training samples. Then there exist $T^{(l)}$ for $l \in [1, L]$ in the range $T^{(l)} \in \left[0, \min\left\{\frac{m B_l G^{l-1}_I}{\lambda}, m\right\}\right]$ such that $G(\mathcal{H}^L_I) \le 2\,\mathbb{E}_S G^L_I$, where
$$G^l_I = \frac{B_0 \prod_{l'=1}^{l} B_{l'}}{\sqrt{m}} - \frac{\lambda}{m} \sum_{l'=1}^{l-1} T^{(l')} \prod_{j=l'+1}^{l} B_j - \frac{\lambda T^{(l)}}{m}. \qquad (11)$$
Next, we show that for a specific distribution, the expected value of $T^{(l)}$ is greater than 0. Under an additional bound on the expectation value of the estimators (specified in the supplementary material),
$$\mathbb{E}_S\left(T^{(l)}\right) \ge \max\left\{ m\left(1 - 2e^{-(c B_l b^{(l)} - \lambda)}\right),\ 0 \right\} \qquad (12)$$
where $b^{(l+1)} = B_l b^{(l)} - \lambda$, $b^{(1)} = B_0$, and $c \in (0, 1]$. Increasing $\lambda$ or decreasing $B$ will decrease the bound in (12), since crossing the threshold is less probable. Depending on $\lambda$ and $\{B_l\}_{l=1}^{L}$ (specifically, when $\lambda \le c B_l b^{(l)} + \ln 2$), the bound in (12) is positive and enforces a non-zero reduction in the GE. Along with Theorem 3, this shows the expected reduction in GE of ISTA networks compared to ReLU networks. The reduction is controlled by the value of the soft threshold.
To obtain a more compact relation, we can choose the maximal matrices' norm $B = \max_{l \in [1,L]} B_l$ and denote $T = \min_{l \in [1,L]} T^{(l)} \in [0, m]$, which leads to $G(\mathcal{H}^L_I) \le 2\left( \frac{B_0 B^L}{\sqrt{m}} - \frac{\lambda \mathbb{E}_S(T)}{m} \cdot \frac{B^L - 1}{B - 1} \right)$.
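To see the size of the gap, the following short sketch evaluates the ReLU bound of Theorem 1 and the compact ISTA bound above side by side (all numerical values below are hypothetical and chosen only for illustration; $\mathbb{E}_S(T)$ is treated as a given number):

```python
import numpy as np

def relu_ge_bound(B0, B, L, m):
    # Theorem 1 (compact form): 2 * B0 * B**L / sqrt(m)
    return 2.0 * B0 * B**L / np.sqrt(m)

def ista_ge_bound(B0, B, L, m, lam, E_T):
    # Compact relation above: 2 * (B0 * B**L / sqrt(m) - lam * E_T / m * (B**L - 1) / (B - 1))
    return 2.0 * (B0 * B**L / np.sqrt(m) - lam * E_T / m * (B**L - 1.0) / (B - 1.0))

# Hypothetical setting: 10 layers, small sample size, moderate soft threshold.
B0, B, L, m, lam, E_T = 1.0, 1.2, 10, 100, 0.5, 20.0
print(relu_ge_bound(B0, B, L, m), ista_ge_bound(B0, B, L, m, lam, E_T))
```

As the sketch suggests, the soft-thresholding term is subtracted from the ReLU-style product-of-norms term, so the two bounds coincide when $\lambda \mathbb{E}_S(T) = 0$ and separate as the threshold term grows.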
Comparing this bound with the GE bound for ReLU networks presented in Theorem 1 shows the expected reduction due to the soft-thresholding activation. This result also implies practical rules for designing ISTA networks with low generalization error. We note that the network's parameters, such as the soft-threshold value $\lambda$ and the number of samples $m$, are predefined by the model being solved (for example, in ISTA, the value of $\lambda$ is chosen according to the singular values of $A$).
We derive an implicit design rule from (3) for a nonincreasing GE, as detailed in Section A.2. This is done by restricting the matrices' norm to satisfy $B \le 1 + \frac{\lambda \mathbb{E}_S(T)}{\sqrt{m}\, B_0}$. Moreover, these results can be extended to convolutional neural networks. As convolution operations can be expressed via multiplication with a convolution matrix, the presented results hold in that case as well.
Similarly, we bound the GE of the class of functions representing the output at depth L in an ADMM network, HLA. The proof and discussion are provided in the supplementary material. Theorem 4 (Generalization error bound for ADMM networks). Consider the class of learned ADMM networks of depth L as described in (4), and m i.i.d. training samples. Then there exist
$T^{(l)}$ for $l \in [1, L-1]$ in the interval $T^{(l)} \in \left[0, \min\left\{\frac{m \tilde{B}_l G^{l-1}_A}{\tilde{\lambda}}, m\right\}\right]$, where
$$G^l_A = \frac{B_0 \prod_{l'=1}^{l-1} \tilde{B}_{l'}}{\sqrt{m}} - \frac{\tilde{\lambda}}{m} \sum_{l'=1}^{l-2} T^{(l')} \prod_{j=l'+1}^{l-1} \tilde{B}_j - \frac{\tilde{\lambda} T^{(l-1)}}{m},$$
and $\tilde{\lambda} = (1 + \gamma)\lambda$, $\tilde{B}_l = (1 + 2\gamma)(B_l + 2)$, $l \in [1, L]$, such that $G(\mathcal{H}^L_A) \le 2\tilde{B}_L\, \mathbb{E}_S G^{L-1}_A$.
We compare the GE bounds for ISTA and ADMM networks to the bound on the GE of ReLU networks presented in Theorem 1. We observe that both model-based networks achieve a reduction in the GE, which depends on the soft threshold, the underlying data distribution, and the bound on the norm of the weight matrices. Following the bound, we observe that the soft-thresholding nonlinearity is most valuable in the case of a small number of training samples. The soft-thresholding nonlinearity is the key that enables reducing the GE of the ISTA and ADMM networks compared to ReLU networks. Next, we focus on bounding the EE of feed-forward networks based on the LRC framework.
4 ESTIMATION ERROR BOUNDS: LOCAL PROPERTIES
To investigate the model-based networks' EE, we use the LRC framework (Bartlett et al., 2005). Instead of considering the entire class of functions $\mathcal{H}$, the LRC considers only estimators which are close to the optimal estimator, $\mathcal{H}_r = \{ h \in \mathcal{H} : \mathbb{E}_{\mathcal{D}} \|h(y) - h^*(y)\|_2^2 \le r \}$, where $h^*$ is such that $\mathcal{L}_{\mathcal{D}}(h^*) = \inf_{h \in \mathcal{H}} \mathcal{L}_{\mathcal{D}}(h)$. It is interesting to note that the class of estimators $\mathcal{H}_r$ only restricts the distance between the estimators themselves, and not between their corresponding losses. Following the LRC framework, we consider target vectors ranging in $[-1, 1]^{n_x}$. Therefore, we adapt the networks' estimations by clipping them to lie in the interval $[-1, 1]^{n_x}$. In our case, we consider the restricted classes of functions representing the output of a neuron at depth $L$ in ISTA, ADMM, and ReLU networks. Moreover, we denote by $W^l$ and $W^{l,*}$, $l \in [1, L]$, the weight matrices corresponding to $h \in \mathcal{H}_r$ and $h^*$, respectively. Based on this restricted class of functions, we present the following assumption and theorem (the proof is provided in the supplementary material). Assumption 1. There exists a constant $C \ge 1$ such that for every probability distribution $\mathcal{D}$ and estimator $h \in \mathcal{H}$, $\mathbb{E}_{\mathcal{D}} \sum_{j=1}^{n_x} (h_j - h^*_j)^2 \le C\, \mathbb{E}_{\mathcal{D}} \sum_{j=1}^{n_x} \left( \ell(h_j) - \ell(h^*_j) \right)$, where $h_j$ and $h^*_j$ denote the $j$th coordinate of the estimators.
As pointed out in (Bartlett & Mendelson, 2002), this condition usually follows from a uniform convexity condition on the loss function $\ell$. For instance, if $|h(y) - x| \le 1$ for any $h \in \mathcal{H}$, $y \in \mathbb{R}^{n_y}$ and $x \in \mathbb{R}^{n_x}$, then the condition is satisfied with $C = 1$ (Yousefi et al., 2018).
Theorem 5 (Estimation error bound of ISTA, ADMM, and ReLU networks). Consider the class of functions represented by depth-$L$ ISTA networks $\mathcal{H}^L_I$ as detailed in Section 2.1, $m$ i.i.d. training samples, and a per-coordinate loss satisfying Assumption 1 with a constant $C$. Let $\|W^l - W^{l,*}\|_\infty \le \alpha\sqrt{r}$ for some $\alpha > 0$. Moreover, $B \ge \max\{\alpha\sqrt{r}, 1\}$. Then there exists $T$ in the interval $T \in \left[0, \min\left\{\frac{\sqrt{m}\, B_0 B^{L-1} 2^L}{\lambda \eta}, m\right\}\right]$, where $\eta = \frac{L B^{L-1}(B-1) - B^L + 1}{(B-1)^2}$, such that for any $s > 0$, with probability at least $1 - e^{-s}$,
$$\mathcal{E}(\mathcal{H}^L_I) \le 41 r^* + \frac{17 C^2 + 48 C}{m n_x}\, s \qquad (13)$$
where $r^* = C^2 \alpha^2 \left( \frac{B_0 B^{L-1} 2^L}{\sqrt{m}} - \frac{\lambda T}{m}\, \eta \right)^2$. The bound is also satisfied for the class of functions represented by depth-$L$ ADMM networks $\mathcal{H}^L_A$, with $r^* = C^2 \alpha^2 \left( \frac{B_0 \tilde{B}^{L-2} 2^{L-1}}{\sqrt{m}} - \frac{\tilde{\lambda} T}{m}\, \tilde{\eta} \right)^2$, where $\tilde{\lambda} = (1 + \gamma)\lambda$, $\tilde{B} = (1 + 2\gamma)(B + 2)$, and $\tilde{\eta} = \frac{(L-1)\tilde{B}^{L-2}(\tilde{B}-1) - \tilde{B}^{L-1} + 1}{(\tilde{B}-1)^2}$, and for the class of functions represented by depth-$L$ ReLU networks $\mathcal{H}^L_R$, with $r^* = C^2 \alpha^2 \left( \frac{B_0 B^{L-1} 2^L}{\sqrt{m}} \right)^2$.
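The quantities in Theorem 5 are all explicit, so the three EE bounds can be evaluated directly. Below is a small sketch that plugs in the formulas above; every numerical value is hypothetical and serves only to illustrate how the soft-thresholding term $\lambda T / m$ shrinks $r^*$ for the model-based networks:

```python
import numpy as np

def eta(B, L):
    # eta = (L * B**(L-1) * (B-1) - B**L + 1) / (B-1)**2, as in Theorem 5
    return (L * B**(L - 1) * (B - 1) - B**L + 1) / (B - 1) ** 2

def r_star(C, alpha, B0, B, L, m, lam=0.0, T=0.0):
    # lam = T = 0 recovers the ReLU case; nonzero values give the ISTA case
    return (C * alpha) ** 2 * (B0 * B**(L - 1) * 2**L / np.sqrt(m) - lam * T / m * eta(B, L)) ** 2

def ee_bound(rs, C, m, n_x, s):
    # Right-hand side of (13)
    return 41 * rs + (17 * C**2 + 48 * C) / (m * n_x) * s

C, alpha, B0, B, L, m, n_x, s = 1.0, 0.1, 1.0, 1.1, 10, 1000, 15, 3.0
print(ee_bound(r_star(C, alpha, B0, B, L, m), C, m, n_x, s))                     # ReLU
print(ee_bound(r_star(C, alpha, B0, B, L, m, lam=0.5, T=200.0), C, m, n_x, s))   # ISTA
```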
From Theorem 5, we observe that the EE decreases by a factor of $O(1/m)$, instead of the factor of $O(1/\sqrt{m})$ obtained for the GE. This result complies with previous local bounds, which yield faster convergence rates compared to global bounds (Blanchard et al., 2007; Bartlett et al., 2005). Also, the value of $\alpha$ relates the maximal distance $r$ between estimators in $\mathcal{H}_r$ to the distance between their corresponding weight matrices, $\|W^l - W^{l,*}\|_\infty \le \alpha\sqrt{r}$, $l \in [1, L]$. Tighter bounds on the distance between the weight matrices allow us to choose a smaller value of $\alpha$, resulting in smaller values of $r^*$, which improve the EE bounds. The value of $\alpha$ could depend on the network's nonlinearity, the underlying data distribution $\mathcal{D}$, and the number of layers. Note that the bounds of the model-based architectures depend on the soft-thresholding through the value of $\lambda \mathbb{E}_S(T)$. As $\lambda \mathbb{E}_S(T)$ increases, the bound on the EE decreases, which emphasizes the nonlinearity's role in the network's generalization abilities. Due to the soft-thresholding, ISTA and ADMM networks result in lower EE bounds compared to the bound for ReLU networks. It is interesting to note that as the number of training samples $m$ increases, the difference between the bounds on the model-based and ReLU networks becomes less significant. In the EE bounds of model-based networks, the parameter $B_0$ relates the bound to the sparsity level $\rho$ of the target vectors. Lower values of $\rho$ result in lower EE bounds, as demonstrated in the supplementary material.
5 NUMERICAL EXPERIMENTS
In this section, we present a series of experiments that concentrate on how a particular model-based network (an ISTA network) compares to a ReLU network, and showcase the merits of model-based networks. We focus on networks with 10 layers (similar to previous works (Gregor & LeCun, 2010)) to represent realistic model-based network architectures. The networks are trained on a simulated dataset to solve the problem in (1), with target vectors uniformly distributed in $[-1, 1]$. The linear mapping $A$ is constructed from the real part of randomly chosen rows of a discrete Fourier transform (DFT) matrix (Ong et al., 2019). The sparsity rate is $\rho = 0.15$, and the noise's standard deviation is 0.1. To train the networks, we used the SGD optimizer with the L1 loss over all neurons of the last layer. The target and noise vectors are generated element-wise independently from a uniform distribution on $[-1, 1]$. All results are reproducible through (Authors, 2022), which provides the complete code to execute the experiments presented in this section.
We concentrate on comparing the networks' EE since, in practice, the networks are trained with a finite number of examples. In order to empirically approximate the EE of a class of networks $\mathcal{H}$, we use an empirical approximation of $h^*$ (which satisfies $\mathcal{L}_{\mathcal{D}}(h^*) = \inf_{h \in \mathcal{H}} \mathcal{L}_{\mathcal{D}}(h)$) and of the ERM $\hat{h}$, denoted by $h^*_{\mathrm{emp}}$ and $\hat{h}_{\mathrm{emp}}$ respectively. The estimator $h^*_{\mathrm{emp}}$ results from the network trained with $10^4$ samples, and the ERM is approximated by a network trained using SGD with $m$ training samples (where $m \le 10^4$). The empirical EE is given by their difference $\mathcal{L}_{\mathcal{D}}(\hat{h}_{\mathrm{emp}}) - \mathcal{L}_{\mathcal{D}}(h^*_{\mathrm{emp}})$. In Fig. 2, we compare the ISTA and ReLU networks in terms of EE and L1 loss, for networks trained with different numbers of samples (between 10 and $10^4$). We observe that for a small number of training samples, the ISTA network substantially reduces the EE compared to the ReLU network. This can be understood from Theorem 5, which yields lower EE bounds for ISTA networks than for ReLU networks, due to the term $\lambda \mathbb{E}_S(T)$. However, for a large number of samples, the EE of both networks decreases to zero, which is also expected from Theorem 5. This highlights that the contribution of the soft-thresholding nonlinearity to the generalization abilities of the network is more significant for a small number of training samples. Throughout the paper, we considered networks with constant bias terms. In this section, we also consider learned bias terms, as detailed in the supplementary material. In Fig. 2, we present the experimental results for networks with constant and learned biases. The experiments indicate that the choice of constant or learned bias is less significant to the EE or the accuracy than the choice of nonlinearity, emphasizing the relevance of the theoretical guarantees. The cases of learned and constant biases have different optimal estimators, as the networks with learned biases have more learned parameters. As a result, it is plausible that a network with more learnable parameters (the learnable bias) exhibits a lower estimation error, since the corresponding optimal estimator also exhibits a lower estimation error. To analyze the effect of the soft-thresholding value on the generalization abilities of the ISTA network, we show in Fig. 3 the empirical EE for multiple values of $\lambda$. The experimental results demonstrate that for a small number of samples, increasing $\lambda$ reduces the EE. As expected from the EE bounds, for a large number of training samples, this dependency on the nonlinearity vanishes, and the EE is similar for all values
of $\lambda$. In Fig. 3b, we show the L1 loss of the ISTA networks for different values of $\lambda$. We observe that a low estimation error does not necessarily lead to a low loss value. For $m \lesssim 100$, increasing $\lambda$ reduced the EE. These results suggest that, given ISTA networks with different values of $\lambda$ such that all networks achieve similar accuracy, the networks with higher values of $\lambda$ provide lower EE. These results are also valid for additional network depths. In Section B, we compare the EE for networks with different numbers of layers, and show that they exhibit similar behaviour.
6 CONCLUSION
We derived new GE and EE bounds for ISTA and ADMM networks, based on the RC and LRC frameworks. Under suitable conditions, the model-based networks' GE is nonincreasing with depth, resulting in a substantial improvement over the GE bound of ReLU networks. The EE bounds explain EE behaviours experienced in practice, such as ISTA networks demonstrating higher estimation abilities compared to ReLU networks, especially for a small number of training samples. Through a series of experiments, we show that the generalization abilities of ISTA networks are controlled by the soft-threshold value, and that they achieve lower EE along with more accurate recovery compared to ReLU networks, whose GE and EE increase with the network's depth.
It is interesting to consider how the theoretical insights can be harnessed to design neural networks with high generalization abilities. One approach is to introduce an additional regularizer during the training process that is rooted in the LRC, penalizing networks with high EE (Yang et al., 2019). | 1. What is the focus and contribution of the paper on model-based networks?
2. What are the strengths of the proposed approach, particularly in terms of generalization capabilities and estimation error bounds?
3. What are the weaknesses of the paper regarding the specific model considered and the numerical experiments conducted?
4. How does the learned bias setting affect the performance of ISTA and ReLU networks?
5. Can you provide more information about the interpretation of the bound presented in theorem 5, specifically the meaning of r*?
6. Why does learning biases seem to improve the EE of ReLU networks while degrading the EE of ISTA networks?
7. Are there any limitations or potential improvements for future research regarding the application of model-based networks to different data models? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
The paper studies model-based networks in the context of sparse coding. A model-based network is constructed by unfolding an iterative algorithm, i.e. mapping each iteration of the original algorithm to a layer of the resulting neural network. This allows one to have a trainable neural network that can adapt to specific problems, while encoding some structural information of the original iterative algorithm.
The main goal of the paper is to provide rigorous bounds on the generalisation capabilities of two model-based networks inspired by two specific sparse coding algorithms, namely "iterative shrinkage and thresholding algorithm" (ISTA) and "alternating direction method of multipliers" (ADMM). Moreover, the authors aim to compare model-based neural networks to generic fully-connected ReLU networks in the context of these bounds.
The authors prove two sets of bounds for fully-connected ReLU networks and ISTA/ADMM-based neural networks:
they bound the generalisation error, intended as the gap between empirical and population loss of the worst-performing function in the model class, averaged over training datasets. They find that the bound for model-based networks is better than that of ReLU networks, and this is crucially linked to the effect that component-wise soft-thresholding has on the Rademacher complexity of a given model class.
they bound the estimation error, intended as the gap between the population loss of the ERM minimiser and the population loss of the best performing function in the model class. Again, model-based networks enjoy a better bound on estimation error than ReLU networks, and again this is linked to the effect of soft-thresholding on the local Rademacher complexity of a given model class.
The authors then study numerically the estimation error of ISTA and ReLU networks, finding that ISTA networks perform better. They also show that the dependence of the estimation error on the scale parameter of the soft-thresholding non-linearity is the same as that predicted by the bounds.
Questions (see also the weak points section):
how does the specific model considered (eq 1) affect the results?
are all numerical results robust to a change in sparsity, noise level and batch size? Moreover, what is the batch size used in the numerical experiments?
the numerical experiments mention a "learned bias" setting that is not accurately described in the main text or in the supplementary material. What is that?
Minor points:
the authors could add direct links to the proofs of their lemmas and theorems after stating them.
the statement of assumption 1 is not completely clear to me. Is there some notation change or notation missing?
if at all relevant, the authors could add the performance of the original ISTA algorithm to their numerical experiments as a baseline performance. If not relevant, why is it not relevant?
the operative definition of EE (eq 7) is not completely clear. h_hat is the empirical risk minimiser, so it depends on a training set, is this correct? If so, is there some averaging missing?
Curiosities (not relevant for the review process as far as I am concerned):
is there a natural interpretation to why soft-thresholding lowers the Rademacher complexity of model classes?
the bound presented in theorem 5 is the same for ReLU, ISTA and ADMM based networks. Can one give a "physical" interpretation to r*? In some sense, r* encodes some information about the network architecture that could determine some other property and not only the EE bound.
is there an explanation to the fact that learning biases seems to improve the EE of ReLU networks while degrading the EE of ISTA networks?
Strengths And Weaknesses
Strong points:
the bounds that the authors provide seem to be non-vacuous, which I find non-trivial. Indeed, the bounding terms suggest, for example, that the estimation error depends on the soft-thresholding scale parameter, and this is confirmed numerically.
I could not check the correctness of the proofs given the short time-frame of this review process. Nonetheless, the relevant appendices are very detailed and I feel that the interested reader could check the proof details easily if needed. Also, the results as well as the overall proof technique seem reasonable.
The paper is extremely well written and clear. I appreciated particularly that formal mathematical concepts were always explained and justified in the discussion text surrounding them, allowing a much broader audience to understand the results.
Weak points:
it is not completely clear what role the specific model that the authors consider (eq. 1) plays in the various theorems. This doesn't allow the reader to fully gauge the applicability of the results to different data models.
(minor weak point) the numerical experiments have been performed fixing the sparsity level of the signal to be retrieved, the noise level and, I suppose, the batch size of SGD. It is not clear whether the numerical results are robust across the choice of these parameters.
(minor weak point) the numerical experiments mention a "learned bias" setting that is not accurately described in the main text or in the supplementary material.
Clarity, Quality, Novelty And Reproducibility
The paper is clearly written, and besides some minor weak points it seems to respect the common standards of scientific quality, including reproducibility of proofs and numerical experiments. Up to my limited knowledge of the relevant literature, the paper seems original.
ICLR | Title
Modeling treatment events in disease progression
Abstract
Ability to quantify and predict progression of a disease is fundamental for selecting an appropriate treatment. Many clinical metrics cannot be acquired frequently either because of their cost (e.g. MRI, gait analysis) or because they are inconvenient or harmful to a patient (e.g. biopsy, x-ray). In such scenarios, in order to estimate individual trajectories of disease progression, it is advantageous to leverage similarities between patients, i.e. the covariance of trajectories, and find a latent representation of progression. However, to our knowledge, most of existing methods for estimating trajectories do not explicitly account for treatment events such as surgery in-between observations, which dramatically decreases their adequacy for clinical practice. In this study, we develop a first machine learning framework named Coordinatewise-Soft-Impute (CSI) for analyzing disease progression from sparse observations in the presence of confounding events. CSI is easy to implement and is guaranteed to converge to the global minimum of the corresponding optimization problem. Experimental results also demonstrate the effectiveness of CSI using both simulated and real datasets.
1 INTRODUCTION
The course of disease progression in individual patients is one of the biggest uncertainties in medical practice. In an ideal world, accurate, continuous assessment of a patient’s condition helps with prevention and treatment. However, many medical tests are either harmful or inconvenient to perform frequently, and practitioners have to infer the development of disease from sparse, noisy observations.
In its simplest form, the problem of modeling disease progression is to fit the curve of y(t), t ∈ [tmin, tmax] for each patient, given sparse observations y := (ỹ(t1), . . . , ỹ(tn)). Due to the high-dimensional nature of longitudinal data, existing results usually restrict solutions to a subspace of functions and utilize similarities between patients via enforcing low-rank structures. One popular approach is mixed-effect models, including Gaussian process approaches (Verbeke, 1997; Zeger et al., 1988) and functional principal components (James et al., 2000). While generative models are commonly used and have nice theoretical properties, their results could be sensitive to the underlying distributional assumptions on the observed data and hard to adapt to different applications. Another line of research poses the problem of disease progression estimation as an optimization problem. Kidziński & Hastie (2018) proposed a framework which formulates the problem as a matrix completion problem and solves it using matrix factorization techniques. This method is distribution-free and flexible to possible extensions.
Meanwhile, both types of solutions model the natural progression of disease using observations of the targeted variables only. They fail to incorporate the existence and effect of human interference: medications, therapies, surgeries, etc. Two patients with similar symptoms initially may have different futures if they choose different treatments. Without that information, predictions can be way-off.
To the best of our knowledge, the existing literature says little about modeling the effect of treatment on disease progression. In Kidziński & Hastie (2018), the authors use concurrent observations of auxiliary variables (e.g. oxygen consumption to motor functions) to help estimate the target one, under the assumption that both variables reflect the intrinsic latent feature of the disease and are thus correlated. Treatments of various types, however, rely on human decisions and are, to some extent, exogenous to the development of the disease. Thus they need to be modeled differently.
In this work, we propose a model for tracking disease progression that includes the effects of treatments. We introduce the Coordinatewise-Soft-Impute (CSI) algorithm for fitting the model and investigate its theoretical and practical properties. The contribution of our work is threefold: First, we propose a model and an algorithm, CSI, to estimate the progression of disease which incorporates the effect of treatment events. The framework is flexible, distribution-free, simple to implement and generalizable. Second, we prove that CSI converges to the global solution regardless of the initialization. Third, we compare the performance of CSI with various other existing methods on both simulated data and a dataset from Gillette Children's Hospital of patients diagnosed with Cerebral Palsy, and demonstrate the superior performance of CSI.
The rest of the paper is organized as follows. In Section 2 we state the problem and review existing methods. Next, in Section 3 we describe the model and the algorithm. Theoretical properties of the algorithm are derived in Section 4. Finally, in Sections 5 and 6 we provide empirical results of CSI on the simulated and the real datasets respectively. We discuss some future directions in Section 7.
2 PROBLEM STATEMENT AND RELATED WORK
Let y(t) be the trajectory of our objective variable, such as the size of a tumor, over a fixed time range t ∈ [tmin, tmax], and let N be the number of patients. For each patient 1 ≤ i ≤ N, we measure the trajectory yi(t) at ni irregularly spaced time points ti = [ti,1, ti,2, ..., ti,ni]′ and denote the results as yi = [yi,1, ..., yi,ni]′ = [yi(ti,1), ..., yi(ti,ni)]′. We are primarily interested in estimating the disease progression trajectories {yi(t)}Ni=1 of all N patients, based on the observation data {(ti, yi)}Ni=1. To fit a continuous curve based on discrete observations, we restrict our estimations to a finite-dimensional space of functions. Let {bi, i ∈ N} be a fixed basis of L2([tmin, tmax]) (e.g. splines, Fourier basis) and b = {bi : 1 ≤ i ≤ K} be the first K dimensions of it. The problem of estimating yi(t) can then be reduced to the problem of estimating the coefficients wi = [wi,1, wi,2, · · · , wi,K]′ such that w′ib(t) is close to yi(t) at times t ∈ ti. Though intuitive, the above method has two main drawbacks. First, when the number of observations per patient is less than or equal to the number of basis functions K, we can perfectly fit any curve without error, leading to overfitting. Moreover, this direct approach ignores the similarities between curves. Different patients may share a similar trend of the trajectories, which could potentially improve the prediction. Below we describe two main lines of research improving on this, the mixed-effect model and the matrix completion model.
2.1 LINEAR MIXED-EFFECT MODEL
In mixed-effect models, every trajectory yi(t) is assumed to be composed of two parts: the fixed effect µ(t) = m′b(t) for some m ∈ RK that remains the same among all patients and a random effect wi ∈ RK that differs for each i ∈ {1, . . . , N}. In its simplest form, we assume
$$w_i \sim \mathcal{N}(0, \Sigma) \quad \text{and} \quad y_i \mid w_i \sim \mathcal{N}(\mu_i + B_i w_i, \sigma^2 I_{n_i}),$$
where Σ is the K × K covariance matrix, σ is the standard deviation, and µi = [µ(ti,1), µ(ti,2), · · · , µ(ti,ni)]′, Bi = [b(ti,1), b(ti,2), · · · , b(ti,ni)]′ are the functions µ(t) and b(t) evaluated at the times ti, respectively. Estimates of the model parameters µ, Σ can be obtained via the expectation maximization (EM) algorithm (Laird & Ware, 1982). Individual coefficients wi can be estimated using the best linear unbiased predictor (BLUP) (Henderson, 1975).
In the linear mixed-effect model, each trajectory is estimated with |wi| = K degrees of freedom, which can still be too complex when observations are sparse. One typical solution is to assume a low-rank structure of the covariance matrix Σ by introducing a contraction mapping A from the functional basis to a low-dimensional latent space. More specifically, one may rewrite the LMM model as
$$y_i \mid \tilde{w}_i \sim \mathcal{N}(\mu_i + B_i A \tilde{w}_i, \sigma^2 I_{n_i}),$$
where A is a K × q matrix with q < K and w̃i ∈ Rq is the new, shorter random effect to be estimated. Methods based on low-rank approximations are widely adopted and applied in practice, and different algorithms for fitting the model have been proposed (James et al., 2000; Lawrence, 2004; Schulam & Arora, 2016). In the later sections, we will compare our algorithm with one specific implementation named functional Principal Component Analysis (fPCA) (James et al., 2000), which uses the EM algorithm for estimating the model parameters and latent variables wi.
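For readers who want to see the generative model concretely, here is a minimal sketch of sampling one patient from the reduced-rank mixed-effect model above (the standard-normal prior on the low-dimensional random effect and all variable names are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def sample_patient(mu_i, B_i, A, sigma, rng):
    """Draw y_i | w_tilde_i ~ N(mu_i + B_i A w_tilde_i, sigma^2 I) for one patient.

    mu_i : (n_i,) fixed effect evaluated at the patient's observation times
    B_i  : (n_i, K) basis functions evaluated at the observation times
    A    : (K, q) contraction mapping to the low-dimensional latent space
    """
    q = A.shape[1]
    w_tilde = rng.standard_normal(q)            # latent low-dimensional random effect
    mean = mu_i + B_i @ (A @ w_tilde)
    y_i = mean + sigma * rng.standard_normal(mean.shape[0])
    return y_i, w_tilde
```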
2.2 MATRIX COMPLETION MODEL
While the probabilistic approach of mixed-effect models offers many theoretical advantages including convergence rates and inference testing, it is often sensitive to the assumptions on distributions, some of which are hard to verify in practice. To avoid the potential bias of distributional assumptions in mixed-effect models, Kidzinski and Hastie (Kidziński & Hastie, 2018) formulate the problem as a sparse matrix completion problem. We will review this approach in the current section.
To reduce the continuous-time trajectories into matrices, we discretize the time range [tmin, tmax] into T equidistributed points G = [τ1, . . . , τT] with τ1 = tmin, τT = tmax, and let B = [b(τ1), b(τ2), · · · , b(τT)]′ ∈ RT×K be the projection of the K-truncated basis b onto the grid G. The N × T observation matrix Y is constructed from the data {(ti, yi)}Ni=1 by rounding the time ti,j of every observation yi(ti,j) to the nearest grid point and regarding all other entries as missing values. Due to sparsity, we assume that no two observations yi(ti,j) are mapped to the same entry of Y.
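The construction of Y from irregular longitudinal data amounts to snapping each observation time to the nearest grid point; a small sketch of this step is given below (function names and the NaN convention for missing entries are our own illustrative choices):

```python
import numpy as np

def build_observation_matrix(times, values, t_min, t_max, T):
    """Map per-patient irregular observations (t_i, y_i) onto an N x T grid matrix.

    times, values : length-N lists of 1-d arrays with each patient's observation
                    times and measured values; unobserved entries are left as NaN.
    """
    grid = np.linspace(t_min, t_max, T)
    N = len(times)
    Y = np.full((N, T), np.nan)
    for i, (t_i, y_i) in enumerate(zip(times, values)):
        # index of the nearest grid point for each observation time
        idx = np.argmin(np.abs(grid[None, :] - np.asarray(t_i)[:, None]), axis=1)
        Y[i, idx] = y_i
    return Y, grid
```

The observed entries are then exactly the non-NaN positions of Y; as in the text, we assume the data are sparse enough that no two observations collide on the same grid entry.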
Let Ω denote the set of all observed entries of Y. For any matrix A, let PΩ(A) be the projection of A onto Ω, i.e. PΩ(A) = M where Mi,j = Ai,j for (i, j) ∈ Ω and Mi,j = 0 otherwise. Similarly, we define P⊥Ω(A) = A − PΩ(A) to be the projection on the complement of Ω. Under this setting, the trajectory prediction problem is reduced to the problem of fitting an N × K matrix W such that WB′ ≈ Y on the observed indices Ω. The direct way of estimating W is to solve the optimization problem
$$\arg\min_{W} \; \frac{1}{2} \|P_\Omega(Y - WB')\|_F^2, \qquad (2.1)$$
where ‖ · ‖F is the Frobenius norm. Again, if K is larger than the number of observations for some subject, we will overfit. To avoid this problem we need some additional constraints on W. A typical approach in the matrix completion community is to introduce a nuclear norm penalty—a relaxed version of the rank penalty while preserving convexity (Rennie & Srebro, 2005; Candès & Recht, 2009). The optimization problem with the nuclear norm penalty takes the form
$$\arg\min_{W} \; \frac{1}{2} \|P_\Omega(Y - WB')\|_F^2 + \lambda \|W\|_*, \qquad (2.2)$$
where λ > 0 is the regularization parameter, ‖ · ‖F is the Frobenius norm, and ‖ · ‖∗ is the nuclear norm, i.e. the sum of singular values. In Kidziński & Hastie (2018), a Soft-Longitudinal-Impute (SLI) algorithm is proposed to solve (2.2) efficiently. We refer the readers to Kidziński & Hastie (2018) for a detailed description of SLI, while noting that it is also a special case of our Algorithm 1 defined in the next section with µ fixed to be 0.
3 MODELING TREATMENT IN DISEASE PROGRESSION
In this section, we introduce our model of the effect of treatments on disease progression.
A wide variety of treatments with different effects and durations exist in medical practice and it is impossible to build a single model to encompass them all. In this study we take a simplified approach and regard treatment, with the example of one-time surgery in mind, as a non-recurring event with an additive effect on the targeted variable afterward. Due to the flexibility of the formulation of optimization problem (2.1), we build our model based on the matrix completion framework of Section 2.2.
More specifically, let s(i) ∈ G be the time of treatment of the i'th patient, rounded to the closest τk ∈ G (s(i) = ∞ if no treatment is performed). We encode the treatment information as an N × T zero-one matrix IS, where (IS)i,j = 1 if and only if τj ≥ s(i), i.e. patient i has already taken the treatment by time τj. Each row of IS takes the form (0, · · · , 0, 1, · · · , 1). Let µ denote the average additive effect of treatment among all patients. In practice, we have access to the sparse observation matrix Y and the surgery matrix IS, and aim to estimate the treatment effect µ and the individual coefficient matrix W based on Y, IS and the fixed basis matrix B such that WB′ + µIS ≈ Y. Again, to avoid overfitting and exploit the similarities between individuals, we add a penalty term on the nuclear norm of W. The optimization problem is thus expressed as:
$$\arg\min_{\mu, W} \; \frac{1}{2} \|P_\Omega(Y - WB' - \mu I_S)\|_F^2 + \lambda \|W\|_*, \qquad (3.1)$$
for some λ > 0.
3.1 COORDINATEWISE-SOFT-IMPUTE (CSI) ALGORITHM
Though the optimization problem (3.1) above does not admit an explicit analytical solution, it is not hard to solve for one of µ or W given the other one. For fixed µ, the problem reduces to the optimization problem (2.2) with Ỹ = Y − µIS and can be solved iteratively by the SLI algorithm Kidziński & Hastie (2018), which we will also specify later in Algorithm 1. For fixed W , we have
$$\arg\min_{\mu} \; \frac{1}{2} \|P_\Omega(Y - WB' - \mu I_S)\|_F^2 + \lambda \|W\|_* = \arg\min_{\mu} \; \frac{1}{2} \|P_\Omega(Y - WB' - \mu I_S)\|_F^2 = \arg\min_{\mu} \; \frac{1}{2} \sum_{(i,j) \in \Omega \cap \Omega_S} \big( (Y - WB')_{i,j} - \mu \big)^2, \qquad (3.2)$$
where $\Omega_S$ is the set of non-zero indices of $I_S$. Optimization problem (3.2) can be solved by taking the derivative with respect to µ directly, which yields
$$\hat{\mu} = \frac{\sum_{(i,j) \in \Omega \cap \Omega_S} (Y - WB')_{i,j}}{|\Omega \cap \Omega_S|}. \qquad (3.3)$$
The clean formulation of (3.3) motivates the following Coordinatewise-Soft-Impute (CSI) algorithm (Algorithm 1): at each iteration, CSI updates Wnew from (Wold, µold) via soft singular value thresholding, then updates µnew from (Wnew, µold) via (3.3), and finally replaces the missing values of Y based on (Wnew, µnew). In the definition, we define the operator Sλ as follows: for any matrix X, Sλ(X) := UDλV, where X = UDV is the SVD of X and Dλ = diag((max{di − λ, 0})Ki=1) is derived from the diagonal matrix D = diag((di)Ki=1). Note that if we set µ ≡ 0 throughout the updates, then we get back to our base model SLI without treatment effect.
Algorithm 1: COORDINATEWISE-SOFT-IMPUTE
1. Initialize Wold ← all-zero matrix, µold ← 0.
2. Repeat:
   (a) Compute Wnew ← Sλ((PΩ(Y − µold IS) + P⊥Ω(Wold B′))B);
   (b) Compute µnew ← ∑(i,j)∈Ω∩ΩS (Y − Wnew B′)i,j / |Ω ∩ ΩS|;
   (c) If max{ (µnew − µold)² / µold², ‖Wnew − Wold‖²F / ‖Wold‖²F } < ε, exit;
   (d) Assign Wold ← Wnew, µold ← µnew.
3. Output Ŵλ ← Wnew, µ̂λ ← µnew.
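Below is a minimal NumPy sketch of Algorithm 1 (the NaN convention for missing entries of Y, the function names, and the default tolerance are our own illustrative choices; setting mu to zero in every iteration recovers the SLI baseline):

```python
import numpy as np

def soft_svd_threshold(X, lam):
    # S_lambda(X): soft-threshold the singular values of X
    U, d, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(d - lam, 0.0)) @ Vt

def csi(Y, B, I_S, lam, eps=1e-6, max_iter=500):
    """Coordinatewise-Soft-Impute: fit Y ~ W B' + mu * I_S on the observed entries.

    Y   : (N, T) array with np.nan at unobserved entries
    B   : (T, K) basis matrix evaluated on the time grid
    I_S : (N, T) zero-one treatment indicator matrix
    """
    obs = ~np.isnan(Y)                     # Omega
    Y0 = np.where(obs, Y, 0.0)             # P_Omega(Y)
    both = obs & (I_S > 0)                 # Omega intersect Omega_S
    N, K = Y.shape[0], B.shape[1]
    W, mu = np.zeros((N, K)), 0.0
    for _ in range(max_iter):
        # (a) soft-impute update of W
        Z = np.where(obs, Y0 - mu * I_S, W @ B.T)   # P_Omega(Y - mu I_S) + P_Omega_perp(W B')
        W_new = soft_svd_threshold(Z @ B, lam)
        # (b) closed-form update of the treatment effect, eq. (3.3)
        resid = Y0 - W_new @ B.T
        mu_new = resid[both].sum() / max(both.sum(), 1)
        # (c) relative-change stopping rule
        d_w = np.sum((W_new - W) ** 2) / max(np.sum(W ** 2), 1e-12)
        d_mu = (mu_new - mu) ** 2 / max(mu ** 2, 1e-12)
        W, mu = W_new, mu_new
        if max(d_w, d_mu) < eps:
            break
    return W, mu
```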
4 CONVERGENCE ANALYSIS
In this section we study the convergence properties of Algorithm 1. Fix the regularization parameter $\lambda > 0$ and let $(\mu_\lambda^{(k)}, W_\lambda^{(k)})$ be the value of $(\mu, W)$ in the $k$'th iteration of the algorithm, the exact definition of which is provided below in (4.4). We prove that Algorithm 1 reduces the loss function at each iteration and eventually converges to the global minimizer.
Theorem 1. The sequence $(\mu_\lambda^{(k)}, W_\lambda^{(k)})$ converges to a limit point $(\hat{\mu}_\lambda, \hat{W}_\lambda)$ which solves the optimization problem:
$$(\hat{\mu}_\lambda, \hat{W}_\lambda) = \arg\min_{\mu, W} \; \frac{1}{2} \|P_\Omega(Y - WB' - \mu I_S)\|_F^2 + \lambda \|W\|_*.$$
Moreover, $(\hat{\mu}_\lambda, \hat{W}_\lambda)$ satisfies
$$\hat{W}_\lambda = S_\lambda\big((P_\Omega(Y - \hat{\mu}_\lambda I_S) + P_\Omega^\perp(\hat{W}_\lambda B'))B\big), \qquad \hat{\mu}_\lambda = \frac{\sum_{(i,j) \in \Omega \cap \Omega_S} (Y - \hat{W}_\lambda B')_{i,j}}{|\Omega \cap \Omega_S|}. \qquad (4.1)$$
The proof of Theorem 1 relies on five technical lemmas stated below. The detailed proofs of the lemmas and the proof of Theorem 1 are provided in Appendix A. The first two lemmas concern properties of the nuclear norm shrinkage operator $S_\lambda$ defined in Section 3.1.
Lemma 1. Let $W$ be an $N \times K$ matrix and $B$ an orthogonal $T \times K$ matrix of rank $K$. The solution to the optimization problem $\min_W \frac{1}{2}\|Y - WB'\|_F^2 + \lambda\|W\|_*$ is given by $\hat{W} = S_\lambda(YB)$, where $S_\lambda(YB)$ is defined in Section 3.1.
Lemma 2. The operator $S_\lambda(\cdot)$ satisfies the following inequality for any two matrices $W_1$, $W_2$ with matching dimensions:
$$\|S_\lambda(W_1) - S_\lambda(W_2)\|_F^2 \le \|W_1 - W_2\|_F^2.$$
Define
$$f_\lambda(W, \mu) = \frac{1}{2} \|P_\Omega(Y - WB' - \mu I_S)\|_F^2 + \lambda \|W\|_*, \qquad (4.2)$$
$$Q_\lambda(W \mid \tilde{W}, \mu) = \frac{1}{2} \|P_\Omega(Y - \mu I_S) + P_\Omega^\perp(\tilde{W}B') - WB'\|_F^2 + \lambda \|W\|_*. \qquad (4.3)$$
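As a practical sanity check on the convergence guarantee below, the objective (4.2) can be evaluated directly and monitored across iterations; a minimal sketch (again using NaN for unobserved entries of Y) is given here:

```python
import numpy as np

def f_lambda(W, mu, Y, B, I_S, lam):
    # Objective (4.2): 0.5 * ||P_Omega(Y - W B' - mu I_S)||_F^2 + lam * ||W||_*
    obs = ~np.isnan(Y)
    Y0 = np.where(obs, Y, 0.0)
    resid = np.where(obs, Y0 - W @ B.T - mu * I_S, 0.0)
    return 0.5 * np.sum(resid ** 2) + lam * np.linalg.norm(W, ord='nuc')
```

By Lemma 3 below, the sequence of objective values produced by Algorithm 1 should be nonincreasing, so evaluating this function after every iteration is an easy way to catch implementation bugs.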
Lemma 1 shows that in the $k$-th step of Algorithm 1, $W_\lambda^{(k)}$ is the minimizer of the function $Q_\lambda(\cdot \mid W_\lambda^{(k-1)}, \mu_\lambda^{(k-1)})$. The next lemma proves that the sequence of loss values $f_\lambda(W_\lambda^{(k)}, \mu_\lambda^{(k)})$ is monotonically decreasing at each iteration.
Lemma 3. For every fixed $\lambda \ge 0$, the $k$'th step of the algorithm $(\mu_\lambda^{(k)}, W_\lambda^{(k)})$ is given by
$$W_\lambda^{(k)} = \arg\min_W Q_\lambda(W \mid W_\lambda^{(k-1)}, \mu_\lambda^{(k-1)}), \qquad \mu_\lambda^{(k)} = \frac{\sum_{(i,j) \in \Omega \cap \Omega_S} (Y - W_\lambda^{(k)} B')_{i,j}}{|\Omega \cap \Omega_S|}. \qquad (4.4)$$
Then with any starting point $(\mu_\lambda^{(0)}, W_\lambda^{(0)})$, the sequence $\{(\mu_\lambda^{(k)}, W_\lambda^{(k)})\}_k$ satisfies
$$f_\lambda(W_\lambda^{(k)}, \mu_\lambda^{(k)}) \le f_\lambda(W_\lambda^{(k)}, \mu_\lambda^{(k-1)}) \le Q_\lambda(W_\lambda^{(k)} \mid W_\lambda^{(k-1)}, \mu_\lambda^{(k-1)}) \le f_\lambda(W_\lambda^{(k-1)}, \mu_\lambda^{(k-1)}).$$
The next lemma proves that the differences $(\mu_\lambda^{(k)} - \mu_\lambda^{(k-1)})^2$ and $\|W_\lambda^{(k)} - W_\lambda^{(k-1)}\|_F^2$ both converge to 0.
Lemma 4. For any positive integer $k$, we have $\|W_\lambda^{(k+1)} - W_\lambda^{(k)}\|_F^2 \le \|W_\lambda^{(k)} - W_\lambda^{(k-1)}\|_F^2$. Moreover,
$$\mu_\lambda^{(k+1)} - \mu_\lambda^{(k)} \to 0, \qquad W_\lambda^{(k+1)} - W_\lambda^{(k)} \to 0 \quad \text{as } k \to \infty.$$
Finally, we show that any limit point of the sequence $\{(\mu_\lambda^{(k)}, W_\lambda^{(k)})\}_k$ has to be a solution of (4.1).
Lemma 5. Any limit point $(\hat{\mu}_\lambda, \hat{W}_\lambda)$ of the sequence $\{(\mu_\lambda^{(k)}, W_\lambda^{(k)})\}_k$ satisfies (4.1).
5 SIMULATION STUDY
In this section we illustrate properties of our Coordinatewise-Soft-Impute (CSI) algorithm via a simulation study. The simulated data are generated from a mixed-effect model with a low-rank covariance structure on W: Y = WB + µIS + E, for which the specific construction is deferred to Appendix B. Below we discuss the evaluation methods as well as the results of the simulation study.
5.1 METHODS
We compare the Coordinatewise-Soft-Impute (CSI) algorithm specified in Algorithm 1 with the vanilla algorithm SLI (corresponding to µ̂ = 0 in our notation) defined in Kidziński & Hastie (2018) and the fPCA algorithm defined in James et al. (2000) based on the mixed-effect model. We train all three algorithms on the same set of basis functions and choose the tuning parameters λ (for CSI and SLI) and R (for fPCA) using 5-fold cross-validation. Each model is then re-trained using the whole training set and tested on a held-out test set Ωtest consisting of 10% of all data.
The performance is evaluated in two aspects. First, for different combinations of the treatment effect µ and observation density ρ, we train each of the three algorithms on the simulated data set and compute the relative squared error between the ground truth µ and the estimate µ̂, i.e., RSE(µ̂) = (µ̂ − µ)²/µ². Meanwhile, for different algorithms applied to the same data set, we compare the mean square error between the observation Y and the estimation Ŷ over the test set Ωtest, namely,
$$\mathrm{MSE}(\hat{Y}) = \frac{1}{|\Omega_{\mathrm{test}}|} \sum_{(i,j) \in \Omega_{\mathrm{test}}} (Y_{ij} - \hat{Y}_{ij})^2 = \frac{1}{|\Omega_{\mathrm{test}}|} \|P_{\Omega_{\mathrm{test}}}(Y) - P_{\Omega_{\mathrm{test}}}(\hat{Y})\|_F^2 \qquad (5.1)$$
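Both evaluation criteria are straightforward to compute; a short sketch is included here for completeness (function names are our own):

```python
import numpy as np

def mse_on_test(Y, Y_hat, test_mask):
    # Eq. (5.1): mean squared error over the held-out entries in Omega_test
    return np.mean((Y[test_mask] - Y_hat[test_mask]) ** 2)

def rse_mu(mu_hat, mu):
    # Relative squared error of the estimated treatment effect
    return (mu_hat - mu) ** 2 / mu ** 2
```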
We train our algorithms with all combinations of the treatment effect µ ∈ {0, 0.2, 0.4, · · · , 5}, observation rate ρ ∈ {0.1, 0.3, 0.5}, and thresholding parameter λ ∈ {0, 1, · · · , 4} (for CSI or SLI) or rank R ∈ {2, 3, · · · , 6} (for fPCA). For each fixed combination of parameters, we ran each algorithm 10 times and averaged the test error.
5.2 RESULTS
The results are presented in Table 1 and Figure 1. From Table 1 and the left plot of Figure 1, we have the following findings:
1. CSI achieves better performance than SLI and fPCA, regardless of the treatment effect µ and observation rate ρ. Meanwhile SLI performs better than fPCA.
2. All three methods give comparable errors for smaller values of µ. In particular, our introduction of treatment effect µ does not over-fit the model in the case of µ = 0.
3. As the treatment effect µ increases, the performance of CSI remains the same whereas the performances of SLI and fPCA deteriorate rapidly. As a result, CSI outperforms SLI and fPCA by a significant margin for large values of µ. For example, when ρ = 0.1, the MSE(Ŷ ) of CSI decreases from 72.3% of SLI and 59.6% of fPCA at µ = 1 to 12.4% of SLI and 5.8% of fPCA at µ = 5.
4. All three algorithms suffer a higher MSE(Ŷ) with a smaller observation rate ρ. The biggest degradation comes from SLI, with an average 118% increase in test error from ρ = 0.5 to ρ = 0.1. The performances of fPCA and CSI remain comparatively stable across different observation rates, with a 6% and 12% increase respectively. This implies that our algorithm is tolerant of low observation rates.
To further investigate CSI's ability to estimate µ, we plot the relative squared error of µ̂ obtained by CSI under different observation rates in the right plot of Figure 1. As shown in Figure 1, regardless of the choice of observation rate ρ and treatment effect µ, RSE(µ̂) is always smaller than 1% and most of the estimates achieve an error of less than 0.1%. Therefore we conclude that, even for a sparse matrix Y, the CSI algorithm still gives a very accurate estimate of the treatment effect µ.
6 DATA STUDY
In this section, we apply our methods to a real dataset on the progression of motor impairment and gait pathology among children with Cerebral Palsy (CP) and evaluate the effect of orthopaedic surgeries.
Cerebral palsy is a group of permanent movement disorders that appear in early childhood. Orthopaedic surgery plays a major role in minimizing gait impairments related to CP (McGinley et al.,
2012). However, it can be hard to correctly evaluate the outcome of a surgery. For example, the seemingly positive outcome of a surgery may actually be due to the natural improvement during puberty. Our objective is to single out the effect of surgeries from the natural progression of the disease and use that extra piece of information for better predictions.
6.1 DATA AND METHOD
We analyze a data set of Gillette Children's Hospital patients who visited the clinic between 1994 and 2014, aged between 4 and 19 years, mostly diagnosed with Cerebral Palsy. The data set contains 84 visits of 36 patients without gait disorders and 6066 visits of 2898 patients with gait pathologies. The Gait Deviation Index (GDI), one of the most commonly adopted metrics for gait functionality (Schwartz & Rozumalski, 2008), was measured and recorded at each clinic visit, along with other data such as birthday, subtype of CP, date and type of previous surgery, and other medical results.
Our main objective is to model individual disease progression quantified as GDI values. Due to insufficiency of data, we model surgeries of different types and multiple surgeries as a single additive effect on GDI measurements following the methodology from Section 3. We test the same three methods CSI, SLI and fPCA as in Section 5, and compare them to two benchmarks—the population mean of all patients (pMean) and the average GDI from previous visits of the same patient (rMean).
All three algorithms were trained on a spline basis of K = 9 dimensions evaluated at a grid of T = 51 points, with regularization parameters λ ∈ {20, 25, ..., 40} for CSI and SLI and rank constraints r ∈ {2, . . . , 6} for fPCA. To ensure sufficient observations for training, we cross-validate and test our models on patients with at least 4 visits and use the rest of the data as a common training set. The effective sizes of the 2-fold validation sets and the test set are 5% each. We compare the result of each method/combination of parameters using the mean square error of GDI estimations on held-out entries as defined in (5.1).
6.2 RESULTS
We run all five methods on the same training/validation/test split 40 times and compare the mean and standard deviation of the test errors. The results are presented in Table 2 and Figure 2. Compared with the null model pMean (Column 2 of Table 2), fPCA gives roughly the same order of error; CSI, SLI and rMean provide better predictions, achieving 62%, 66% and 73% of the test errors respectively. In particular, our algorithm CSI improves the result of the vanilla model SLI by 7%; it also provides a stable estimate with the smallest standard deviation across multiple selections of test sets.
We take a closer look at the low-rank decomposition of disease progression curves provided by the algorithms. Fixing one run of the CSI algorithm with λ* = 30, there are 6 non-zero singular value vectors, which we will refer to as principal components. We illustrate the top 3 PCs scaled by the corresponding singular values in Figure 3a. The first PC recovers the general trend that gait disorder develops through ages 1–10 and partially recovers during puberty. The second and third PCs reflect fluctuations during different periods of child growth. By visual inspection, similar trends can be found in the top components of SLI and fPCA as well.
An example of a predicted curve for patient ID 5416 is illustrated in Figure 3b, where the blue curve represents the prediction without the estimated treatment effect µ̂ = 4.33, the green curve the final prediction, and the red dots the actual observations. It can be seen that the additive treatment effect helps to model the sharp difference between the exam before the surgery (first observation) and later exams.
7 CONCLUSION AND FUTURE WORK
In this paper, we propose a new framework for modeling the effect of treatment events in disease progression and develop a corresponding algorithm, CSI, with a convergence guarantee. To the best of our knowledge, it is the first comprehensive model that explicitly incorporates the effect of treatment events. We would also like to mention that, although we focus on the case of disease progression in this paper, our framework is quite general and can be used to analyze data in any discipline with sparse observations as well as external effects.
There are several potential extensions to our current framework. Firstly, our framework could be extended to more complicated settings. In our model, treatments have been characterized as the binary matrix IS with a single parameter µ. In practice, each individual may undergo different types of surgeries, once or multiple times. Secondly, the treatment effect may be correlated with the latent variables of disease type, and could be estimated together with the random effect wi. Finally, our framework could be used to evaluate the true effect of a surgery. A natural question is: does surgery really help? CSI provides an estimate of the surgery effect µ; it would be interesting to design a statistical hypothesis testing / causal inference procedure to answer the proposed question.
Though we are convinced that our work will not be the last word in estimating disease progression, we hope our idea is useful for further research and that readers will help take it further.
A PROOFS
Proof of Lemma 1. Note that the solution of the optimization problem
$$\min_A \; \frac{1}{2} \|Z - A\|_F^2 + \lambda \|A\|_* \qquad (\mathrm{A.1})$$
is given by $\hat{A} = S_\lambda(Z)$ (see Cai et al. (2010) for a proof). Therefore it suffices to show that the minimizer of the optimization problem (A.1) is the same as the minimizer of the following problem:
$$\min_W \; \frac{1}{2} \|YB - W\|_F^2 + \lambda \|W\|_*.$$
Using the fact that $\|A\|_F^2 = \mathrm{Tr}(AA')$ and $B'B = I_K$, we have
$$\arg\min_W \; \frac{1}{2} \|YB - W\|_F^2 + \lambda \|W\|_* = \arg\min_W \; \frac{1}{2}\big(\mathrm{Tr}(YBB'Y') + \mathrm{Tr}(WW') - 2\mathrm{Tr}(YBW')\big) + \lambda \|W\|_* = \arg\min_W \; \frac{1}{2}\big(\mathrm{Tr}(WW') - 2\mathrm{Tr}(YBW')\big) + \lambda \|W\|_*.$$
On the other hand,
$$\arg\min_W \; \frac{1}{2} \|Y - WB'\|_F^2 + \lambda \|W\|_* = \arg\min_W \; \frac{1}{2}\big(\mathrm{Tr}(YY') + \mathrm{Tr}(WW') - 2\mathrm{Tr}(YBW')\big) + \lambda \|W\|_* = \arg\min_W \; \frac{1}{2}\big(\mathrm{Tr}(WW') - 2\mathrm{Tr}(YBW')\big) + \lambda \|W\|_* = \arg\min_W \; \frac{1}{2} \|YB - W\|_F^2 + \lambda \|W\|_* = S_\lambda(YB),$$
as desired.
Proof of Lemma 2. We refer the readers to the proof in Mazumder et al. (2010, Section 4, Lemma 3).
Proof of Lemma 3. First we argue that $\mu_\lambda^{(k)} = \arg\min_\mu f_\lambda(W_\lambda^{(k)}, \mu)$, from which the first inequality immediately follows. We have
$$\arg\min_\mu f_\lambda(W_\lambda^{(k)}, \mu) = \arg\min_\mu \|P_\Omega(Y - W_\lambda^{(k)} B' - \mu I_S)\|_F^2 = \arg\min_\mu \sum_{(i,j) \in \Omega \cap \Omega_S} \big((Y - W_\lambda^{(k)} B')_{i,j} - \mu\big)^2.$$
Taking the derivative with respect to $\mu$ directly gives $\mu_\lambda^{(k)} = \arg\min_\mu f_\lambda(W_\lambda^{(k)}, \mu)$, as desired.
For the remaining two inequalities, notice that
$$f_\lambda(W_\lambda^{(k)}, \mu_\lambda^{(k-1)}) = \frac{1}{2} \|P_\Omega(Y - W_\lambda^{(k)} B' - \mu_\lambda^{(k-1)} I_S)\|_F^2 + \lambda \|W_\lambda^{(k)}\|_*$$
$$\le \frac{1}{2} \|P_\Omega(Y - \mu_\lambda^{(k-1)} I_S) + P_\Omega^\perp(W_\lambda^{(k-1)} B') - W_\lambda^{(k)} B'\|_F^2 + \lambda \|W_\lambda^{(k)}\|_* \qquad (\mathrm{A.2})$$
$$= Q_\lambda(W_\lambda^{(k)} \mid W_\lambda^{(k-1)}, \mu_\lambda^{(k-1)}) \le Q_\lambda(W_\lambda^{(k-1)} \mid W_\lambda^{(k-1)}, \mu_\lambda^{(k-1)}) \qquad (\mathrm{A.3})$$
$$= \frac{1}{2} \|P_\Omega(Y - W_\lambda^{(k-1)} B' - \mu_\lambda^{(k-1)} I_S)\|_F^2 + \lambda \|W_\lambda^{(k-1)}\|_* = f_\lambda(W_\lambda^{(k-1)}, \mu_\lambda^{(k-1)}).$$
Here (A.2) holds because we have
$$\frac{1}{2} \|P_\Omega(Y - \mu_\lambda^{(k-1)} I_S) + P_\Omega^\perp(W_\lambda^{(k-1)} B') - W_\lambda^{(k)} B'\|_F^2 = \frac{1}{2} \|P_\Omega(Y - \mu_\lambda^{(k-1)} I_S - W_\lambda^{(k)} B') + P_\Omega^\perp(W_\lambda^{(k-1)} B' - W_\lambda^{(k)} B')\|_F^2$$
$$= \frac{1}{2} \|P_\Omega(Y - \mu_\lambda^{(k-1)} I_S - W_\lambda^{(k)} B')\|_F^2 + \frac{1}{2} \|P_\Omega^\perp(W_\lambda^{(k-1)} B' - W_\lambda^{(k)} B')\|_F^2 \ge \frac{1}{2} \|P_\Omega(Y - \mu_\lambda^{(k-1)} I_S - W_\lambda^{(k)} B')\|_F^2.$$
(A.3) follows from the fact that $W_\lambda^{(k)} = \arg\min_W Q_\lambda(W \mid W_\lambda^{(k-1)}, \mu_\lambda^{(k-1)})$.
Proof of Lemma 4. First we analyze the behavior of $\{\mu_\lambda^{(k)}\}$:
$$f_\lambda(W_\lambda^{(k)}, \mu_\lambda^{(k-1)}) - f_\lambda(W_\lambda^{(k)}, \mu_\lambda^{(k)}) = \frac{1}{2} \|P_\Omega(Y - W_\lambda^{(k)} B' - \mu_\lambda^{(k-1)} I_S)\|_F^2 - \frac{1}{2} \|P_\Omega(Y - W_\lambda^{(k)} B' - \mu_\lambda^{(k)} I_S)\|_F^2 = \frac{|\Omega \cap \Omega_S|}{2} (\mu_\lambda^{(k)} - \mu_\lambda^{(k-1)})^2.$$
Meanwhile, the sequence $(\cdots, f_\lambda(W_\lambda^{(k-1)}, \mu_\lambda^{(k-1)}), f_\lambda(W_\lambda^{(k)}, \mu_\lambda^{(k-1)}), f_\lambda(W_\lambda^{(k)}, \mu_\lambda^{(k)}), \cdots)$ is decreasing and lower bounded by 0, and therefore converges to a non-negative number, yielding $f_\lambda(W_\lambda^{(k)}, \mu_\lambda^{(k-1)}) - f_\lambda(W_\lambda^{(k)}, \mu_\lambda^{(k)}) \to 0$ as $k \to \infty$. Hence
$$\mu_\lambda^{(k)} - \mu_\lambda^{(k-1)} \to 0, \qquad (\mathrm{A.4})$$
as desired.
The sequence $\{W_\lambda^{(k)}\}$ is slightly more complicated; a direct calculation gives
$$\|W_\lambda^{(k)} - W_\lambda^{(k-1)}\|_F^2 = \|S_\lambda\big((P_\Omega(Y - \mu_\lambda^{(k-1)} I_S) + P_\Omega^\perp(W_\lambda^{(k-1)} B'))B\big) - S_\lambda\big((P_\Omega(Y - \mu_\lambda^{(k-2)} I_S) + P_\Omega^\perp(W_\lambda^{(k-2)} B'))B\big)\|_F^2$$
$$\le \|P_\Omega(Y - \mu_\lambda^{(k-1)} I_S) + P_\Omega^\perp(W_\lambda^{(k-1)} B') - P_\Omega(Y - \mu_\lambda^{(k-2)} I_S) - P_\Omega^\perp(W_\lambda^{(k-2)} B')\|_F^2 \qquad (\mathrm{A.5})$$
$$= |\Omega \cap \Omega_S|\, (\mu_\lambda^{(k-1)} - \mu_\lambda^{(k-2)})^2 + \|P_\Omega^\perp(W_\lambda^{(k-1)} B' - W_\lambda^{(k-2)} B')\|_F^2, \qquad (\mathrm{A.6})$$
where (A.5) follows from Lemma 2, and (A.6) can be derived by pairing the four terms according to $P_\Omega$ and $P_\Omega^\perp$.
By the definition of $\mu_\lambda^{(k)}$, we have
$$|\Omega \cap \Omega_S|\, (\mu_\lambda^{(k-1)} - \mu_\lambda^{(k-2)})^2 = \frac{1}{|\Omega \cap \Omega_S|} \Big( \sum_{(i,j) \in \Omega \cap \Omega_S} (W_\lambda^{(k-1)} B' - W_\lambda^{(k-2)} B')_{i,j} \Big)^2 \le \|P_\Omega(W_\lambda^{(k-1)} B' - W_\lambda^{(k-2)} B')\|_F^2, \qquad (\mathrm{A.7})$$
where (A.7) follows from the Cauchy–Schwarz inequality.
Combining (A.6) with (A.7), we get
$$\|W_\lambda^{(k)} - W_\lambda^{(k-1)}\|_F^2 \le \|W_\lambda^{(k-1)} B' - W_\lambda^{(k-2)} B'\|_F^2 = \|W_\lambda^{(k-1)} - W_\lambda^{(k-2)}\|_F^2.$$
Now we are left to prove that the difference sequence $\{W_\lambda^{(k)} - W_\lambda^{(k-1)}\}$ converges to zero. Combining (A.4) and (A.7), it suffices to prove that $\|P_\Omega^\perp(W_\lambda^{(k-1)} B' - W_\lambda^{(k-2)} B')\|_F^2 \to 0$. We have
$$f_\lambda(W_\lambda^{(k-1)}, \mu_\lambda^{(k-2)}) - Q_\lambda(W_\lambda^{(k-1)} \mid W_\lambda^{(k-2)}, \mu_\lambda^{(k-2)}) = -\|P_\Omega^\perp(W_\lambda^{(k-1)} B' - W_\lambda^{(k-2)} B')\|_F^2,$$
and the left-hand side converges to 0 because
$$0 \ge f_\lambda(W_\lambda^{(k-1)}, \mu_\lambda^{(k-2)}) - Q_\lambda(W_\lambda^{(k-1)} \mid W_\lambda^{(k-2)}, \mu_\lambda^{(k-2)}) \ge f_\lambda(W_\lambda^{(k-1)}, \mu_\lambda^{(k-2)}) - f_\lambda(W_\lambda^{(k-2)}, \mu_\lambda^{(k-2)}) \to 0,$$
which completes the proof.
Proof of Lemma 5. Let $(\mu_\lambda^{(m_k)}, W_\lambda^{(m_k)}) \to (\hat{\mu}_\lambda, \hat{W}_\lambda)$; then Lemma 4 gives $(\mu_\lambda^{(m_k - 1)}, W_\lambda^{(m_k - 1)}) \to (\hat{\mu}_\lambda, \hat{W}_\lambda)$. Since we have
$$W_\lambda^{(m_k)} = S_\lambda\big((P_\Omega(Y - \mu_\lambda^{(m_k - 1)} I_S) + P_\Omega^\perp(W_\lambda^{(m_k - 1)} B'))B\big), \qquad \mu_\lambda^{(m_k)} = \frac{\sum_{(i,j) \in \Omega \cap \Omega_S} (Y - W_\lambda^{(m_k)} B')_{i,j}}{|\Omega \cap \Omega_S|},$$
taking limits on both sides gives the desired result.
Proof of Theorem 1. Let $(\hat{\mu}_\lambda, \hat{W}_\lambda)$ be one limit point; then we have
$$\|\hat{W}_\lambda - W_\lambda^{(k)}\|_F^2 = \|S_\lambda\big((P_\Omega(Y - \hat{\mu}_\lambda I_S) + P_\Omega^\perp(\hat{W}_\lambda B'))B\big) - S_\lambda\big((P_\Omega(Y - \mu_\lambda^{(k-1)} I_S) + P_\Omega^\perp(W_\lambda^{(k-1)} B'))B\big)\|_F^2 \qquad (\mathrm{A.8})$$
$$\le |\Omega \cap \Omega_S|\, (\hat{\mu}_\lambda - \mu_\lambda^{(k-1)})^2 + \|P_\Omega^\perp((\hat{W}_\lambda - W_\lambda^{(k-1)}) B')\|_F^2, \qquad (\mathrm{A.9})$$
where (A.8) uses Lemma 5 and (A.9) uses Lemma 2. Meanwhile,
$$|\Omega \cap \Omega_S|\, (\hat{\mu}_\lambda - \mu_\lambda^{(k-1)})^2 = \frac{1}{|\Omega \cap \Omega_S|} \Big( \sum_{(i,j) \in \Omega \cap \Omega_S} ((\hat{W}_\lambda - W_\lambda^{(k-1)}) B')_{i,j} \Big)^2 \le \|P_\Omega((\hat{W}_\lambda - W_\lambda^{(k-1)}) B')\|_F^2. \qquad (\mathrm{A.10})$$
Combining (A.9) and (A.10), we have
$$\|\hat{W}_\lambda - W_\lambda^{(k)}\|_F \le \|\hat{W}_\lambda - W_\lambda^{(k-1)}\|_F.$$
Hence the sequence $\|\hat{W}_\lambda - W_\lambda^{(k)}\|_F$ is monotonically decreasing and has a limit. But since there exists a subsequence $W_\lambda^{(m_k)}$ converging to $\hat{W}_\lambda$, the limit equals 0, which proves $W_\lambda^{(k)} \to \hat{W}_\lambda$, $\mu_\lambda^{(k)} \to \hat{\mu}_\lambda$. Therefore we have proved that the sequence $(\mu_\lambda^{(k)}, W_\lambda^{(k)})$ always converges.
Meanwhile, the first part of (4.1) and Lemma 5 of Mazumder et al. (2010) guarantee that $\vec{0} \in \partial_W f_\lambda(\hat{W}_\lambda, \hat{\mu}_\lambda)$. By taking the derivative directly, we have $0 = \partial_\mu f_\lambda(\hat{W}_\lambda, \hat{\mu}_\lambda)$. Therefore $(\hat{W}_\lambda, \hat{\mu}_\lambda)$ is a stationary point of $f_\lambda(W, \mu)$. Notice that the loss function $f_\lambda(W, \mu)$ is a convex function with respect to $(W, \mu)$. Thus we have proved that the limit point $(\hat{W}_\lambda, \hat{\mu}_\lambda)$ minimizes the function $f_\lambda(W, \mu)$.
B DATA GENERATION
Let G be the grid of T equidistributed points and let B be the basis of K spline functions evaluated on the grid G. We simulate the N × T observation matrix Y with three parts,
Y = WB + µIS + E,
where W follows a mixture-Gaussian distribution with a low-rank structure, IS is the treatment matrix with uniformly distributed starting times, and E represents the i.i.d. measurement error. The specific procedure is described below.
1. Generating W given parameters κ ∈ (0, 1), r1, r2 ∈ R, s1, s2 ∈ RK≥0:
   (a) Sample two K × K orthogonal matrices V1, V2 via singular value decomposition of two random matrices.
   (b) Sample two unit-length K-vectors γ1, γ2 by normalizing i.i.d. normal samples.
   (c) Draw a vector t ∈ RN from i.i.d. Bernoulli(κ) samples. Denote the all-one vector by 1.
   (d) Draw N × K matrices U1, U2 from i.i.d. standard normal random variables.
   (e) Set
       W ← t · [ r1γ1 + U1 diag[√s1] V1 ] + (1 − t) · [ r2γ2 + U2 diag[√s2] V2 ],
       where diag[s] is the diagonal matrix with diagonal elements s, "·" represents coordinate-wise multiplication, and we recycle t, 1 − t and riγi to match the dimensions.
2. Generating IS given parameter ptr ∈ (0, 1):
   (a) For each i = 1, . . . , N, sample Ti uniformly at random from {1, . . . , ⌊T/ptr⌋}.
   (b) Set IS ← (1{j ≥ Ti})1≤i≤N,1≤j≤T.
3. Given parameter ϵ ∈ R≥0, E is drawn from i.i.d. Normal(0, ϵ²) samples.
4. Given parameter µ ∈ R, let Y0 ← WB + µIS + E.
5. Given parameter ρ ∈ (0, 1), draw a 0-1 matrix IΩ from i.i.d. Bernoulli(ρ) samples. Let Ω denote the set of non-zero entries of IΩ, namely, the set of observed data. Set Y ← (Yij)1≤i≤N,1≤j≤T, where Yij = (Y0)ij if (IΩ)ij = 1 and Yij = NA otherwise.
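For concreteness, here is a NumPy sketch of the generator above; parameter defaults follow the values listed below where available, and everything else (function name, RNG handling) is an illustrative choice rather than the authors' code:

```python
import numpy as np

def simulate_data(B, mu, rho, N=500, kappa=0.33, r=(1.0, 2.0), s1=None, s2=None,
                  p_tr=0.8, eps=0.5, rng=None):
    """Generate Y = W B + mu * I_S + E and mask entries with observation rate rho.

    B : (K, T) spline basis evaluated on the grid G.
    """
    rng = np.random.default_rng() if rng is None else rng
    K, T = B.shape
    s1 = np.ones(K) if s1 is None else np.asarray(s1)
    s2 = np.ones(K) if s2 is None else np.asarray(s2)
    V1 = np.linalg.svd(rng.standard_normal((K, K)))[0]      # random orthogonal matrices
    V2 = np.linalg.svd(rng.standard_normal((K, K)))[0]
    g1 = rng.standard_normal(K); g1 /= np.linalg.norm(g1)   # unit-length directions
    g2 = rng.standard_normal(K); g2 /= np.linalg.norm(g2)
    t = rng.binomial(1, kappa, size=(N, 1))                  # mixture assignment
    U1, U2 = rng.standard_normal((N, K)), rng.standard_normal((N, K))
    W = t * (r[0] * g1 + U1 @ np.diag(np.sqrt(s1)) @ V1) \
        + (1 - t) * (r[1] * g2 + U2 @ np.diag(np.sqrt(s2)) @ V2)
    start = rng.integers(1, int(T / p_tr) + 1, size=N)       # treatment start times
    I_S = (np.arange(1, T + 1)[None, :] >= start[:, None]).astype(float)
    E = eps * rng.standard_normal((N, T))
    Y0 = W @ B + mu * I_S + E
    observed = rng.binomial(1, rho, size=(N, T)).astype(bool)
    return np.where(observed, Y0, np.nan), W, I_S
```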
In the actual simulation, we fix the auxiliary parameters as follows:
K = 7, T = 51, N = 500,
κ = 0.33, r1 = 1, r2 = 2,
s1 = [1, 0.4, 0.005, 0.1 exp(−3), ..., 0.1 exp(−K + 1)],
s2 = [1.3, 0.2, 0.005, 0.1 exp(−3), ..., 0.1 exp(−K + 1)],
ptr = 0.8, ϵ = 0.5.
The remaining parameters are treatment effect µ and observation rate ρ, which we allow to vary across different trials. | 1. What is the focus of the paper regarding disease progression modeling?
2. What are the strengths and weaknesses of the proposed method compared to prior works, particularly in terms of novelty and performance?
3. How does the reviewer assess the significance of the added parameter in the model?
4. What are the limitations of the method regarding its application to real data? | Review | Review
This paper proposes to incorporate the treatment effect in disease progression modeling for individual patients. It views the treatment effect as an exogenous variable and represents it with one additional parameter in the model.
There is not enough novelty in the proposed method. It only extends Kidzinski and Hastie (2018) by adding one extra parameter that represents the average additive treatment effect. Moreover, this new model does not significantly outperform the previous model of Kidzinski and Hastie (2018).
The method also does not perform significantly better on real data. Given the real data as a 1-d vector of length 6k, the method learns a 9x9 latent matrix to represent the data. The method beats, but not significantly, the intuitive weak baseline rMean, which uses the mean of the same patient's previous visits for future prediction.
ICLR | Title
Modeling treatment events in disease progression
Abstract
Ability to quantify and predict progression of a disease is fundamental for selecting an appropriate treatment. Many clinical metrics cannot be acquired frequently either because of their cost (e.g. MRI, gait analysis) or because they are inconvenient or harmful to a patient (e.g. biopsy, x-ray). In such scenarios, in order to estimate individual trajectories of disease progression, it is advantageous to leverage similarities between patients, i.e. the covariance of trajectories, and find a latent representation of progression. However, to our knowledge, most of existing methods for estimating trajectories do not explicitly account for treatment events such as surgery in-between observations, which dramatically decreases their adequacy for clinical practice. In this study, we develop a first machine learning framework named Coordinatewise-Soft-Impute (CSI) for analyzing disease progression from sparse observations in the presence of confounding events. CSI is easy to implement and is guaranteed to converge to the global minimum of the corresponding optimization problem. Experimental results also demonstrate the effectiveness of CSI using both simulated and real datasets.
1 INTRODUCTION
The course of disease progression in individual patients is one of the biggest uncertainties in medical practice. In an ideal world, accurate, continuous assessment of a patient’s condition helps with prevention and treatment. However, many medical tests are either harmful or inconvenient to perform frequently, and practitioners have to infer the development of disease from sparse, noisy observations.
In its simplest form, the problem of modeling disease progression is to fit the curve of y(t), t ∈ [tmin, tmax] for each patient, given sparse observations y := (ỹ(t1), . . . , ỹ(tn)). Due to the high-dimensional nature of longitudinal data, existing results usually restrict solutions to a subspace of functions and utilize similarities between patients via enforcing low-rank structures. One popular approach is mixed-effect models, including Gaussian process approaches (Verbeke, 1997; Zeger et al., 1988) and functional principal components (James et al., 2000). While generative models are commonly used and have nice theoretical properties, their results could be sensitive to the underlying distributional assumptions on the observed data and hard to adapt to different applications. Another line of research poses the problem of disease progression estimation as an optimization problem. Kidziński & Hastie (2018) proposed a framework which formulates the problem as a matrix completion problem and solves it using matrix factorization techniques. This method is distribution-free and flexible to possible extensions.
Meanwhile, both types of solutions model the natural progression of disease using observations of the targeted variables only. They fail to incorporate the existence and effect of human interference: medications, therapies, surgeries, etc. Two patients with similar symptoms initially may have different futures if they choose different treatments. Without that information, predictions can be way-off.
To the best of our knowledge, the existing literature says little about modeling treatment effects on disease progression. In Kidziński & Hastie (2018), the authors use concurrent observations of auxiliary variables (e.g. oxygen consumption to motor functions) to help estimate the target one, under the assumption that both variables reflect the intrinsic latent feature of the disease and are thus correlated. Treatments of various types, however, rely on human decisions and are, to some extent, exogenous to the development of the disease. Thus they need to be modeled differently.
In this work, we propose a model for tracking disease progression that includes the effects of treatments. We introduce the Coordinatewise-Soft-Impute (CSI) algorithm for fitting the model and investigate its theoretical and practical properties. The contribution of our work is threefold: First, we propose a model and an algorithm, CSI, to estimate the progression of disease while incorporating the effect of treatment events. The framework is flexible, distribution-free, simple to implement and generalizable. Second, we prove that CSI converges to the global solution regardless of the initialization. Third, we compare the performance of CSI with various other existing methods on both simulated data and a dataset of Gillette Children’s Hospital with patients diagnosed with Cerebral Palsy, and demonstrate the superior performance of CSI.
The rest of the paper is organized as follows. In Section 2 we state the problem and review existing methods. Next, in Section 3 we describe the model and the algorithm. Theoretical properties of the algorithm are derived in Section 4. Finally, in Sections 5 and 6 we provide empirical results of CSI on the simulated and real datasets, respectively. We discuss some future directions in Section 7.
2 PROBLEM STATEMENT AND RELATED WORK
Let y(t) be the trajectory of our objective variable, such as the size of a tumor, over a fixed time range t ∈ [tmin, tmax], and N be the number of patients. For each patient 1 ≤ i ≤ N, we measure its trajectory yi(t) at ni irregularly spaced time points ti = [ti,1, ti,2, ..., ti,ni]′ and denote the results as yi = [yi,1, ..., yi,ni]′ = [yi(ti,1), ..., yi(ti,ni)]′. We are primarily interested in estimating the disease progression trajectories {yi(t)}Ni=1 of all N patients, based on observation data {(ti, yi)}Ni=1. To fit a continuous curve based on discrete observations, we restrict our estimations to a finite-dimensional space of functions. Let {bi, i ∈ N} be a fixed basis of L2([tmin, tmax]) (e.g. splines, Fourier basis) and b = {bi : 1 ≤ i ≤ K} be the first K dimensions of it. The problem of estimating yi(t) can then be reduced to the problem of estimating the coefficients wi = [wi,1, wi,2, · · · , wi,K]′ such that wi′b(t) is close to yi(t) at the times t ∈ ti. Though intuitive, the above method has two main drawbacks. First, when the number of observations per patient is less than or equal to the number of basis functions K, we can perfectly fit any curve without error, leading to overfitting. Moreover, this direct approach ignores the similarities between curves. Different patients may share similar trends in their trajectories, which could potentially improve the prediction. Below we describe two main lines of research improving on this, the mixed-effect model and the matrix completion model.
2.1 LINEAR MIXED-EFFECT MODEL
In mixed-effect models, every trajectory yi(t) is assumed to be composed of two parts: the fixed effect µ(t) = m′b(t) for some m ∈ RK that remains the same among all patients and a random effect wi ∈ RK that differs for each i ∈ {1, . . . , N}. In its simplest form, we assume
wi ∼ N(0, Σ) and yi | wi ∼ N(µi + Bi wi, σ²I_{ni}), where Σ is the K × K covariance matrix, σ is the standard deviation, and µi = [µ(ti,1), µ(ti,2), · · · , µ(ti,ni)]′, Bi = [b(ti,1), b(ti,2), · · · , b(ti,ni)]′ are the functions µ(t) and b(t) evaluated at the times ti, respectively. Estimation of the model parameters µ, Σ can be made via the expectation maximization (EM) algorithm (Laird & Ware, 1982). Individual coefficients wi can be estimated using the best unbiased linear predictor (BLUP) (Henderson, 1975).
In the linear mixed-effect model, each trajectory is estimated with |wi| = K degrees of freedom, which can still be too complex when observations are sparse. One typical solution is to assume a low-rank structure of the covariance matrix Σ by introducing a contraction mapping A from the functional basis to a low-dimensional latent space. More specifically, one may rewrite the LMM model as
yi | w̃i ∼ N(µi + Bi A w̃i, σ²I_{ni}), where A is a K × q matrix with q < K and w̃i ∈ Rq is the new, shorter random effect to be estimated. Methods based on low-rank approximations are widely adopted and applied in practice, and different algorithms for fitting the model have been proposed (James et al., 2000; Lawrence, 2004; Schulam & Arora, 2016). In the later sections, we will compare our algorithm with one specific implementation named functional-Principal-Component-Analysis (fPCA) (James et al., 2000), which uses the EM algorithm for estimating model parameters and latent variables wi.
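As a concrete illustration of the low-rank mixed-effect model above, the following minimal NumPy sketch samples trajectories yi | w̃i ∼ N(µi + Bi A w̃i, σ²I); the polynomial basis, the dimensions, and the noise level are illustrative choices and not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)
K, q, N, sigma = 7, 3, 100, 0.5        # illustrative sizes, not the paper's settings
m = rng.normal(size=K)                 # fixed-effect coefficients, mu(t) = m' b(t)
A = rng.normal(size=(K, q))            # contraction from the K-dim basis to q latent dims

def sample_patient(n_obs, t_min=0.0, t_max=1.0):
    """Sample one trajectory: y_i | w~_i ~ N(mu_i + B_i A w~_i, sigma^2 I)."""
    t = np.sort(rng.uniform(t_min, t_max, n_obs))
    B_i = np.column_stack([t ** k for k in range(K)])   # toy polynomial basis b(t)
    w_tilde = rng.normal(size=q)                        # low-dimensional random effect
    y = B_i @ m + B_i @ (A @ w_tilde) + sigma * rng.normal(size=n_obs)
    return t, y

t1, y1 = sample_patient(n_obs=5)   # one sparsely observed patient
```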
2.2 MATRIX COMPLETION MODEL
While the probabilistic approach of mixed-effect models offers many theoretical advantages including convergence rates and inference testing, it is often sensitive to the assumptions on distributions, some of which are hard to verify in practice. To avoid the potential bias of distributional assumptions in mixed-effect models, Kidzinski and Hastie (Kidziński & Hastie, 2018) formulate the problem as a sparse matrix completion problem. We will review this approach in the current section.
To reduce the continuous-time trajectories into matrices, we discretize the time range [tmin, tmax] into T equi-distributed points G = [τ1, . . . , τT] with τ1 = tmin, τT = tmax and let B = [b(τ1), b(τ2), · · · , b(τT)]′ ∈ R^{T×K} be the projection of the K-truncated basis b onto the grid G. The N × T observation matrix Y is constructed from the data {(ti, yi)}Ni=1 by rounding the time ti,j of every observation yi(ti,j) to the nearest grid point and regarding all other entries as missing values. Due to sparsity, we assume that no two observations yi(ti,j) are mapped to the same entry of Y.
Let Ω denote the set of all observed entries of Y. For any matrix A, let PΩ(A) be the projection of A onto Ω, i.e. PΩ(A) = M where Mi,j = Ai,j for (i, j) ∈ Ω and Mi,j = 0 otherwise. Similarly, we define P⊥Ω(A) = A − PΩ(A) to be the projection on the complement of Ω. Under this setting, the trajectory prediction problem is reduced to the problem of fitting an N × K matrix W such that WB′ ≈ Y on the observed indices Ω. The direct way of estimating W is to solve the optimization problem
arg min_W (1/2) ‖PΩ(Y − WB′)‖²_F,    (2.1)
where ‖ · ‖F is the Frobenius norm. Again, if K is larger than the number of observations for some subject we will overfit. To avoid this problem we need some additional constraints on W. A typical approach in the matrix completion community is to introduce a nuclear norm penalty, a relaxed version of the rank penalty that preserves convexity (Rennie & Srebro, 2005; Candès & Recht, 2009). The optimization problem with the nuclear norm penalty takes the form
arg min_W (1/2) ‖PΩ(Y − WB′)‖²_F + λ‖W‖∗,    (2.2)
where λ > 0 is the regularization parameter, ‖ · ‖F is the Frobenius norm, and ‖ · ‖∗ is the nuclear norm, i.e. the sum of singular values. In Kidziński & Hastie (2018), a Soft-Longitudinal-Impute (SLI) algorithm is proposed to solve (2.2) efficiently. We refer the readers to Kidziński & Hastie (2018) for a detailed description of SLI, while noting that it is also a special case of our Algorithm 1 defined in the next section with µ fixed to be 0.
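For concreteness, the projections PΩ, P⊥Ω and the objective (2.2) can be written in a few lines of NumPy. This is only a sketch of the quantities defined above (the boolean mask, variable names, and shapes are our own assumptions), not the SLI implementation of Kidziński & Hastie (2018).

```python
import numpy as np

def P_Omega(A, mask):
    """Projection onto observed entries: keep A[i, j] where mask[i, j] is True, zero elsewhere."""
    return np.where(mask, A, 0.0)

def P_Omega_perp(A, mask):
    """Complementary projection onto the unobserved entries."""
    return np.where(mask, 0.0, A)

def objective_2_2(Y, W, B, mask, lam):
    """Value of (2.2): 0.5 * ||P_Omega(Y - W B')||_F^2 + lam * ||W||_*.
    Missing entries of Y may hold any placeholder value since they are masked out."""
    resid = P_Omega(Y - W @ B.T, mask)
    return 0.5 * np.sum(resid ** 2) + lam * np.linalg.norm(W, ord="nuc")
```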
3 MODELING TREATMENT IN DISEASE PROGRESSION
In this section, we introduce our model for the effect of treatments on disease progression.
A wide variety of treatments with different effects and durations exist in medical practice and it is impossible to build a single model to encompass them all. In this study we take the simplified approach and regard treatment, with the example of one-time surgery in mind, as a non-recurring event with an additive effect on the targeted variable afterward. Due to the flexibility of the optimization formulation (2.1), we build our model on the matrix completion framework of Section 2.2.
More specifically, let s(i) ∈ G be the time of treatment of the i’th patient, rounded to the closest τk ∈ G (s(i) = ∞ if no treatment is performed). We encode the treatment information as an N × T zero-one matrix IS, where (IS)i,j = 1 if and only if τj ≥ s(i), i.e. patient i has already taken the treatment by time τj. Each row of IS takes the form of (0, · · · , 0, 1, · · · , 1). Let µ denote the average additive effect of treatment among all patients. In practice, we have access to the sparse observation matrix Y and surgery matrix IS and aim to estimate the treatment effect µ and individual coefficient matrix W based on Y, IS and the fixed basis matrix B such that WB′ + µIS ≈ Y. Again, to avoid overfitting and exploit the similarities between individuals, we add a penalty term on the nuclear norm of W. The optimization problem is thus expressed as:
arg min_{µ,W} (1/2) ‖PΩ(Y − WB′ − µIS)‖²_F + λ‖W‖∗,    (3.1)
for some λ > 0.
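A short sketch of how the treatment matrix IS and the objective (3.1) could be assembled; the helper names and the use of a boolean observation mask are assumptions for illustration, not part of the paper.

```python
import numpy as np

def treatment_matrix(surgery_times, grid):
    """Build the N x T zero-one matrix I_S with (I_S)[i, j] = 1 iff grid[j] >= s(i).
    surgery_times: length-N array; np.inf encodes 'no treatment performed'."""
    s = np.asarray(surgery_times, dtype=float)[:, None]          # shape (N, 1)
    return (np.asarray(grid, dtype=float)[None, :] >= s).astype(float)

def objective_3_1(Y, W, B, mu, I_S, mask, lam):
    """Value of (3.1): 0.5 * ||P_Omega(Y - W B' - mu I_S)||_F^2 + lam * ||W||_*."""
    resid = np.where(mask, Y - W @ B.T - mu * I_S, 0.0)
    return 0.5 * np.sum(resid ** 2) + lam * np.linalg.norm(W, ord="nuc")

# example: 3 patients on a grid of 5 time points; the third patient never has surgery
I_S = treatment_matrix([0.2, 0.6, np.inf], np.linspace(0.0, 1.0, 5))
```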
3.1 COORDINATEWISE-SOFT-IMPUTE (CSI) ALGORITHM
Though the optimization problem (3.1) above does not admit an explicit analytical solution, it is not hard to solve for one of µ or W given the other one. For fixed µ, the problem reduces to the optimization problem (2.2) with Ỹ = Y − µIS and can be solved iteratively by the SLI algorithm Kidziński & Hastie (2018), which we will also specify later in Algorithm 1. For fixed W , we have
arg min_µ (1/2) ‖PΩ(Y − WB′ − µIS)‖²_F + λ‖W‖∗ = arg min_µ (1/2) ‖PΩ(Y − WB′ − µIS)‖²_F
  = arg min_µ (1/2) Σ_{(i,j)∈Ω∩ΩS} ((Y − WB′)i,j − µ)²,    (3.2)
where ΩS is the set of non-zero indices of IS. Optimization problem (3.2) can be solved by taking the derivative with respect to µ directly, which yields
µ̂ = ( Σ_{(i,j)∈Ω∩ΩS} (Y − WB′)i,j ) / |Ω ∩ ΩS|.    (3.3)
The clean formulation of (3.3) motivates the following Coordinatewise-Soft-Impute (CSI) algorithm (Algorithm 1): At each iteration, CSI updates Wnew from (Wold, µold) via soft singular value thresholding, then updates µnew from (Wnew, µold) via (3.3), and finally replaces the missing values of Y based on (Wnew, µnew). In the definition, we define the operator Sλ as follows: for any matrix X, Sλ(X) := UDλV′, where X = UDV′ is the SVD of X and Dλ = diag((max{di − λ, 0})Ki=1) is derived from the diagonal matrix D = diag((di)Ki=1). Note that if we set µ ≡ 0 throughout the updates, then we get back to our base model SLI without treatment effect.
Algorithm 1: COORDINATEWISE-SOFT-IMPUTE
1. Initialize Wold ← all-zero matrix, µold ← 0.
2. Repeat:
   (a) Compute Wnew ← Sλ((PΩ(Y − µold IS) + P⊥Ω(Wold B′)) B);
   (b) Compute µnew ← ( Σ_{(i,j)∈Ω∩ΩS} (Y − Wnew B′)i,j ) / |Ω ∩ ΩS|;
   (c) If max{ (µnew − µold)² / µold², ‖Wnew − Wold‖²_F / ‖Wold‖²_F } < ε, exit;
   (d) Assign Wold ← Wnew, µold ← µnew.
3. Output Ŵλ ← Wnew, µ̂λ ← µnew.
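The following NumPy sketch mirrors Algorithm 1 step by step: soft singular value thresholding for Wnew, the closed-form update (3.3) for µnew, and the relative-change stopping rule. It is a minimal reference implementation under our own conventions (a boolean mask for Ω, values outside the mask ignored), not the authors' code.

```python
import numpy as np

def soft_threshold_svd(X, lam):
    """S_lambda(X) = U diag(max(d_i - lam, 0)) V' from the SVD X = U diag(d) V'."""
    U, d, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(d - lam, 0.0)) @ Vt

def csi_fit(Y, B, I_S, mask, lam, eps=1e-6, max_iter=500):
    """Coordinatewise-Soft-Impute (Algorithm 1).
    Y: N x T (entries outside `mask` are ignored), B: T x K basis, I_S: N x T treatment indicators."""
    N, K = Y.shape[0], B.shape[1]
    W_old, mu_old = np.zeros((N, K)), 0.0
    treated = mask & (I_S > 0)                               # Omega intersect Omega_S
    for _ in range(max_iter):
        # (a) fill unobserved entries with the current fit, then soft-threshold
        filled = np.where(mask, Y - mu_old * I_S, W_old @ B.T)
        W_new = soft_threshold_svd(filled @ B, lam)
        # (b) closed-form treatment-effect update, eq. (3.3)
        resid = Y - W_new @ B.T
        mu_new = resid[treated].mean() if treated.any() else 0.0
        # (c) relative-change stopping rule
        d_mu = (mu_new - mu_old) ** 2 / max(mu_old ** 2, 1e-12)
        d_W = np.sum((W_new - W_old) ** 2) / max(np.sum(W_old ** 2), 1e-12)
        W_old, mu_old = W_new, mu_new
        if max(d_mu, d_W) < eps:
            break
    return W_old, mu_old
```

In practice one would sweep λ over a grid and select it by cross-validation, as described in Section 5.1.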
4 CONVERGENCE ANALYSIS
In this section we study the convergence properties of Algorithm 1. Fix the regularization parameter λ > 0 and let (µ_λ^(k), W_λ^(k)) be the value of (µ, W) in the k’th iteration of the algorithm, the exact definition of which is provided below in (4.4). We prove that Algorithm 1 reduces the loss function at each iteration and eventually converges to the global minimizer.
Theorem 1. The sequence (µ_λ^(k), W_λ^(k)) converges to a limit point (µ̂λ, Ŵλ) which solves the optimization problem:
(µ̂λ, Ŵλ) = arg min_{µ,W} (1/2) ‖PΩ(Y − WB′ − µIS)‖²_F + λ‖W‖∗.
Moreover, (µ̂λ, Ŵλ) satisfies
Ŵλ = Sλ((PΩ(Y − µ̂λ IS) + P⊥Ω(Ŵλ B′)) B),   µ̂λ = ( Σ_{(i,j)∈Ω∩ΩS} (Y − Ŵλ B′)i,j ) / |Ω ∩ ΩS|.    (4.1)
The proof of Theorem 1 relies on five technical lemmas stated below. The detailed proofs of the lemmas and the proof of Theorem 1 are provided in Appendix A. The first two lemmas are on properties of the nuclear norm shrinkage operator Sλ defined in Section 3.1.
Lemma 1. Let W be an N × K matrix and B an orthogonal T × K matrix of rank K. The solution to the optimization problem min_W (1/2) ‖Y − WB′‖²_F + λ‖W‖∗ is given by Ŵ = Sλ(Y B), where Sλ is defined in Section 3.1.
Lemma 2. Operator Sλ(·) satisfies the following inequality for any two matrices W1, W2 with matching dimensions:
‖Sλ(W1) − Sλ(W2)‖²_F ≤ ‖W1 − W2‖²_F.
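Lemma 2 (non-expansiveness of Sλ) is easy to sanity-check numerically; the self-contained snippet below verifies the inequality on a random instance (the matrix sizes and λ are arbitrary).

```python
import numpy as np

rng = np.random.default_rng(1)

def S_lam(X, lam):
    """Soft-threshold the singular values of X by lam."""
    U, d, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(d - lam, 0.0)) @ Vt

W1, W2, lam = rng.normal(size=(8, 5)), rng.normal(size=(8, 5)), 0.7
lhs = np.sum((S_lam(W1, lam) - S_lam(W2, lam)) ** 2)   # ||S(W1) - S(W2)||_F^2
rhs = np.sum((W1 - W2) ** 2)                           # ||W1 - W2||_F^2
assert lhs <= rhs + 1e-10                              # Lemma 2 holds on this instance
```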
Define
fλ(W, µ) = (1/2) ‖PΩ(Y − WB′ − µIS)‖²_F + λ‖W‖∗,    (4.2)
Qλ(W | W̃, µ) = (1/2) ‖PΩ(Y − µIS) + P⊥Ω(W̃B′) − WB′‖²_F + λ‖W‖∗.    (4.3)
Lemma 1 shows that in the k-th step of Algorithm 1, W_λ^(k) is the minimizer of the function Qλ(· | W_λ^(k−1), µ_λ^(k−1)). The next lemma proves that the sequence of loss values fλ(W_λ^(k), µ_λ^(k)) is monotonically decreasing at each iteration.
Lemma 3. For every fixed λ ≥ 0, the k’th step of the algorithm (µ_λ^(k), W_λ^(k)) is given by
W_λ^(k) = arg min_W Qλ(W | W_λ^(k−1), µ_λ^(k−1)),   µ_λ^(k) = ( Σ_{(i,j)∈Ω∩ΩS} (Y − W_λ^(k) B′)i,j ) / |Ω ∩ ΩS|.    (4.4)
Then with any starting point (µ_λ^(0), W_λ^(0)), the sequence {(µ_λ^(k), W_λ^(k))}_k satisfies
fλ(W_λ^(k), µ_λ^(k)) ≤ fλ(W_λ^(k), µ_λ^(k−1)) ≤ Qλ(W_λ^(k) | W_λ^(k−1), µ_λ^(k−1)) ≤ fλ(W_λ^(k−1), µ_λ^(k−1)).
The next lemma proves that the differences (µ_λ^(k) − µ_λ^(k−1))² and ‖W_λ^(k) − W_λ^(k−1)‖²_F both converge to 0.
Lemma 4. For any positive integer k, we have ‖W_λ^(k+1) − W_λ^(k)‖²_F ≤ ‖W_λ^(k) − W_λ^(k−1)‖²_F. Moreover,
µ_λ^(k+1) − µ_λ^(k) → 0,   W_λ^(k+1) − W_λ^(k) → 0   as k → ∞.
Finally, we show that if the sequence {(µ_λ^(k), W_λ^(k))}_k converges, it has to converge to a solution of (4.1).
Lemma 5. Any limit point (µ̂λ, Ŵλ) of the sequence {(µ_λ^(k), W_λ^(k))}_k satisfies (4.1).
5 SIMULATION STUDY
In this section we illustrate properties of our Coordinatewise-Soft-Impute (CSI) algorithm via simulation study. The simulated data are generated from a mixed-effect model with low-rank covariance structure on W : Y = WB + µIS + E , for which the specific construction is deferred to Appendix B. Below we discuss the evaluation methods as well as the results from simulation study.
5.1 METHODS
We compare the Coordinatewise-Soft-Impute (CSI) algorithm specified in Algorithm 1 with the vanilla algorithm SLI (corresponding to µ̂ = 0 in our notation) defined in Kidziński & Hastie (2018) and the fPCA algorithm defined in James et al. (2000) based on mixed-effect model. We train all three algorithms on the same set of basis functions and choose the tuning parameters λ (for CSI and
SLI) and R (for fPCA) using a 5-fold cross-validation. Each model is then re-trained using the whole training set and tested on a held-out test set Ωtest consisting 10% of all data.
The performance is evaluated in two aspects. First, for different combinations of the treatment effect µ and observation density ρ, we train each of the three algorithms on the simulated data set, and compute the relative squared error between the ground truth µ and the estimate µ̂, i.e., RSE(µ̂) = (µ̂ − µ)²/µ². Meanwhile, for different algorithms applied to the same data set, we compare the mean square error between the observations Y and the estimate Ŷ over the test set Ωtest, namely,
MSE(Ŷ) = (1/|Ωtest|) Σ_{(i,j)∈Ωtest} (Yij − Ŷij)² = (1/|Ωtest|) ‖PΩtest(Y) − PΩtest(Ŷ)‖²_F.    (5.1)
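The two evaluation metrics translate directly into code; a small sketch (the function names are ours) computing RSE(µ̂) and the held-out MSE of (5.1):

```python
import numpy as np

def rse_mu(mu_hat, mu_true):
    """Relative squared error (mu_hat - mu)^2 / mu^2 for the treatment effect."""
    return (mu_hat - mu_true) ** 2 / mu_true ** 2

def mse_test(Y, Y_hat, test_mask):
    """Mean squared error over the held-out entries Omega_test, as in (5.1)."""
    diff = (Y - Y_hat)[test_mask]
    return np.mean(diff ** 2)
```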
We train our algorithms with all combinations of treatment effect µ ∈ {0, 0.2, 0.4, · · · , 5}, observation rate ρ ∈ {0.1, 0.3, 0.5}, and thresholding parameter λ ∈ {0, 1, · · · , 4} (for CSI or SLI) or rank R ∈ {2, 3, · · · , 6} (for fPCA). For each fixed combination of parameters, we ran each algorithm 10 times and averaged the test errors.
5.2 RESULTS
The results are presented in Table 1 and Figure 1. From Table 1 and the left plot of Figure 1, we have the following findings:
1. CSI achieves better performance than SLI and fPCA, regardless of the treatment effect µ and observation rate ρ. Meanwhile SLI performs better than fPCA.
2. All three methods give comparable errors for smaller values of µ. In particular, our introduction of treatment effect µ does not over-fit the model in the case of µ = 0.
3. As the treatment effect µ increases, the performance of CSI remains the same whereas the performances of SLI and fPCA deteriorate rapidly. As a result, CSI outperforms SLI and fPCA by a significant margin for large values of µ. For example, when ρ = 0.1, the MSE(Ŷ ) of CSI decreases from 72.3% of SLI and 59.6% of fPCA at µ = 1 to 12.4% of SLI and 5.8% of fPCA at µ = 5.
4. All three algorithms suffer a higher MSE(Ŷ) with a smaller observation rate ρ. The largest degradation comes from SLI, with an average 118% increase in test error from ρ = 0.5 to ρ = 0.1. The performances of fPCA and CSI remain comparatively stable across different observation rates, with 6% and 12% increases respectively. This implies that our algorithm is tolerant to low observation rates.
To further investigate CSI’s ability to estimate µ, we plot the relative squared error of µ̂ using CSI with different observation rates in the right plot of Figure 1. As shown in Figure 1, regardless of the choice of observation rate ρ and treatment effect µ, RSE(µ̂) is always smaller than 1% and most of the estimates achieve an error less than 0.1%. Therefore we can conclude that, even for a sparse matrix Y, the CSI algorithm can still give a very accurate estimate of the treatment effect µ.
6 DATA STUDY
In this section, we apply our methods to a real dataset on the progression of motor impairment and gait pathology among children with Cerebral Palsy (CP) and evaluate the effect of orthopaedic surgeries.
Cerebral palsy is a group of permanent movement disorders that appear in early childhood. Orthopaedic surgery plays a major role in minimizing gait impairments related to CP (McGinley et al.,
2012). However, it can be hard to correctly evaluate the outcome of a surgery. For example, the seemingly positive outcome of a surgery may actually be due to the natural improvement during puberty. Our objective is to single out the effect of surgeries from the natural progression of disease and use that extra piece of information for better predictions.
6.1 DATA AND METHOD
We analyze a data set of Gillette Children’s Hospital patients, visiting the clinic between 1994 and 2014, age ranging between 4 and 19 years, mostly diagnosed with Cerebral Palsy. The data set contains 84 visits of 36 patients without gait disorders and 6066 visits of 2898 patients with gait pathologies. Gait Deviation Index (GDI), one of the most commonly adopted metrics for gait functionalities (Schwartz & Rozumalski, 2008), was measured and recorded at each clinic visit along with other data such as birthday, subtype of CP, date and type of previous surgery and other medical results.
Our main objective is to model individual disease progression quantified as GDI values. Due to insufficiency of data, we model surgeries of different types and multiple surgeries as a single additive effect on GDI measurements following the methodology from Section 3. We test the same three methods CSI, SLI and fPCA as in Section 5, and compare them to two benchmarks—the population mean of all patients (pMean) and the average GDI from previous visits of the same patient (rMean).
All three algorithms were trained on a spline basis of K = 9 dimensions evaluated at a grid of T = 51 points, with regularization parameters λ ∈ {20, 25, ..., 40} for CSI and SLI and rank constraints r ∈ {2, . . . , 6} for fPCA. To ensure sufficient observations for training, we cross-validate and test our models on patients with at least 4 visits and use the rest of the data as a common training set. The effective sizes of the 2-fold validation sets and the test set are 5% each. We compare the result of each method/combination of parameters using the mean square error of GDI estimations on held-out entries as defined in (5.1).
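The paper does not pin down the spline family used for the K = 9 basis; as one plausible construction, the sketch below builds a degree-3 truncated-power spline basis on a T = 51 grid and orthonormalizes its columns so that B′B = I_K, as assumed in Lemma 1.

```python
import numpy as np

def truncated_power_spline_basis(grid, K, degree=3):
    """Evaluate a K-dimensional spline-type basis on `grid` (T points); returns a T x K matrix.
    Uses a truncated-power basis with interior knots at quantiles of the grid, then
    orthonormalizes the columns via QR. The specific spline family is an assumption."""
    grid = np.asarray(grid, dtype=float)
    n_knots = K - (degree + 1)
    knots = np.quantile(grid, np.linspace(0.0, 1.0, n_knots + 2)[1:-1])
    cols = [grid ** p for p in range(degree + 1)]                     # 1, t, t^2, t^3
    cols += [np.maximum(grid - kn, 0.0) ** degree for kn in knots]    # (t - knot)_+^3
    Q, _ = np.linalg.qr(np.column_stack(cols))
    return Q

grid = np.linspace(4.0, 19.0, 51)                 # ages 4-19 on T = 51 grid points
B = truncated_power_spline_basis(grid, K=9)       # T x K, orthonormal columns
```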
6.2 RESULTS
We run all five methods on the same training/validation/test split 40 times and compare the mean and sd of test errors. The results are presented in Table 2 and Figure 2. Compared with the null model pMean (Column 2 of Table 2), fPCA gives roughly the same order of error; CSI, SLI and rMean provide better predictions, achieving 62%, 66% and 73% of the test errors respectively. In particular, our algorithm CSI improves on the result of the vanilla model SLI by 7%; it also provides a stable estimation with the smallest sd across multiple selections of test sets.
We take a closer look at the low-rank decomposition of disease progression curves provided by the algorithms. Fixing one run of the CSI algorithm with λ⋆ = 30, there are 6 non-zero singular value vectors, which we will refer to as principal components. We illustrate the top 3 PCs scaled with the corresponding singular values in Figure 3a. The first PC recovers the general trend that gait disorder develops
through age 1-10 and partially recovers during puberty. The second and third PCs reflect fluctuations during different periods of child growth. By visual inspection, similar trends can be found in the top components of SLI and fPCA as well.
An example of a predicted curve for patient ID 5416 is illustrated in Figure 3b, where the blue curve represents the prediction without the estimated treatment effect µ̂ = 4.33, the green curve the final prediction, and the red dots the actual observations. It can be seen that the additive treatment effect helps to model the sharp difference between the exam before the surgery (the first observation) and the later exams.
7 CONCLUSION AND FUTURE WORK
In this paper, we propose a new framework for modeling the effect of treatment events in disease progression, together with a corresponding algorithm, CSI, whose convergence to the global optimum we prove. To the best of our knowledge, it is the first comprehensive model that explicitly incorporates the effect of treatment events. We would also like to mention that, although we focus on the case of disease progression in this paper, our framework is quite general and can be used to analyze data in any discipline with sparse observations as well as external effects.
There are several potential extensions to our current framework. Firstly, our framework could be extended to more complicated settings. In our model, treatments have been characterized as the binary matrix IS with a single parameter µ. In practice, each individual may undergo different types of surgeries, once or multiple times. Secondly, the treatment effect may be correlated with the latent variables of disease type, and can be estimated together with the random effect wi. Finally, our framework could be used to evaluate the true effect of a surgery. A natural question is: does surgery really help? CSI provides an estimate of the surgery effect µ; it would be interesting to design a statistical hypothesis testing/causal inference procedure to answer the proposed question.
Though we are convinced that our work will not be the last word in estimating the disease progression, we hope our idea is useful for further research and we hope the readers could help to take it further.
A PROOFS
Proof of Lemma 1. Note that the solution of the optimization problem
min_A (1/2) ‖Z − A‖²_F + λ‖A‖∗    (A.1)
is given by Â = Sλ(Z) (see Cai et al. (2010) for a proof). Therefore it suffices to show that the minimizer of the optimization problem in Lemma 1 is the same as the minimizer of the following problem:
min_W (1/2) ‖Y B − W‖²_F + λ‖W‖∗.
Using the fact that ‖A‖²_F = Tr(AA′) and B′B = I_K, we have
arg min_W (1/2) ‖Y B − W‖²_F + λ‖W‖∗ = arg min_W (1/2) (Tr(Y BB′Y′) + Tr(WW′) − 2 Tr(Y BW′)) + λ‖W‖∗
  = arg min_W (1/2) (Tr(WW′) − 2 Tr(Y BW′)) + λ‖W‖∗.
On the other hand,
arg min_W (1/2) ‖Y − WB′‖²_F + λ‖W‖∗ = arg min_W (1/2) (Tr(Y Y′) + Tr(WW′) − 2 Tr(Y BW′)) + λ‖W‖∗
  = arg min_W (1/2) (Tr(WW′) − 2 Tr(Y BW′)) + λ‖W‖∗
  = arg min_W (1/2) ‖Y B − W‖²_F + λ‖W‖∗
  = Sλ(Y B),
as desired.
Proof of Lemma 2. We refer the readers to the proof in Mazumder et al. (2010, Section 4, Lemma 3).
Proof of Lemma 3. First we argue that µ_λ^(k) = arg min_µ fλ(W_λ^(k), µ), and the first inequality immediately follows. We have
arg min_µ fλ(W_λ^(k), µ) = arg min_µ ‖PΩ(Y − W_λ^(k) B′ − µIS)‖²_F = arg min_µ Σ_{(i,j)∈Ω∩ΩS} ((Y − W_λ^(k) B′)i,j − µ)².
Taking the derivative with respect to µ directly gives µ_λ^(k) = arg min_µ fλ(W_λ^(k), µ), as desired.
For the remaining two inequalities, notice that
fλ(W_λ^(k), µ_λ^(k−1)) = (1/2) ‖PΩ(Y − W_λ^(k) B′ − µ_λ^(k−1) IS)‖²_F + λ‖W_λ^(k)‖∗
  ≤ (1/2) ‖PΩ(Y − µ_λ^(k−1) IS) + P⊥Ω(W_λ^(k−1) B′) − W_λ^(k) B′‖²_F + λ‖W_λ^(k)‖∗    (A.2)
  = Qλ(W_λ^(k) | W_λ^(k−1), µ_λ^(k−1)) ≤ Qλ(W_λ^(k−1) | W_λ^(k−1), µ_λ^(k−1))    (A.3)
  = (1/2) ‖PΩ(Y − W_λ^(k−1) B′ − µ_λ^(k−1) IS)‖²_F + λ‖W_λ^(k−1)‖∗ = fλ(W_λ^(k−1), µ_λ^(k−1)).
Here (A.2) holds because we have
(1/2) ‖PΩ(Y − µ_λ^(k−1) IS) + P⊥Ω(W_λ^(k−1) B′) − W_λ^(k) B′‖²_F
  = (1/2) ‖PΩ(Y − µ_λ^(k−1) IS − W_λ^(k) B′) + P⊥Ω(W_λ^(k−1) B′ − W_λ^(k) B′)‖²_F
  = (1/2) ‖PΩ(Y − µ_λ^(k−1) IS − W_λ^(k) B′)‖²_F + (1/2) ‖P⊥Ω(W_λ^(k−1) B′ − W_λ^(k) B′)‖²_F
  ≥ (1/2) ‖PΩ(Y − µ_λ^(k−1) IS − W_λ^(k) B′)‖²_F.
(A.3) follows from the fact that W_λ^(k) = arg min_W Qλ(W | W_λ^(k−1), µ_λ^(k−1)).
Proof of Lemma 4. First we analyze the behavior of {µ_λ^(k)}:
fλ(W_λ^(k), µ_λ^(k−1)) − fλ(W_λ^(k), µ_λ^(k)) = (1/2) ‖PΩ(Y − W_λ^(k) B′ − µ_λ^(k−1) IS)‖²_F − (1/2) ‖PΩ(Y − W_λ^(k) B′ − µ_λ^(k) IS)‖²_F = (|Ω ∩ ΩS| / 2) (µ_λ^(k) − µ_λ^(k−1))².
Meanwhile, the sequence (· · · , fλ(W_λ^(k−1), µ_λ^(k−1)), fλ(W_λ^(k), µ_λ^(k−1)), fλ(W_λ^(k), µ_λ^(k)), · · · ) is decreasing and lower bounded by 0 and therefore converges to a non-negative number, so the differences fλ(W_λ^(k), µ_λ^(k−1)) − fλ(W_λ^(k), µ_λ^(k)) → 0 as k → ∞. Hence
µ_λ^(k) − µ_λ^(k−1) → 0,    (A.4)
as desired.
The sequence {W_λ^(k)} is slightly more complicated; a direct calculation gives
‖W_λ^(k) − W_λ^(k−1)‖²_F = ‖Sλ(PΩ(Y − µ_λ^(k−1) IS) + P⊥Ω(W_λ^(k−1) B′)) − Sλ(PΩ(Y − µ_λ^(k−2) IS) + P⊥Ω(W_λ^(k−2) B′))‖²_F
  ≤ ‖PΩ(Y − µ_λ^(k−1) IS) + P⊥Ω(W_λ^(k−1) B′) − PΩ(Y − µ_λ^(k−2) IS) − P⊥Ω(W_λ^(k−2) B′)‖²_F    (A.5)
  = |Ω ∩ ΩS| (µ_λ^(k−1) − µ_λ^(k−2))² + ‖P⊥Ω(W_λ^(k−1) B′ − W_λ^(k−2) B′)‖²_F,    (A.6)
where (A.5) follows from Lemma 2 and (A.6) can be derived by pairing the four terms according to PΩ and P⊥Ω.
By the definition of µ_λ^(k), we have
|Ω ∩ ΩS| (µ_λ^(k−1) − µ_λ^(k−2))² = (1 / |Ω ∩ ΩS|) ( Σ_{(i,j)∈Ω∩ΩS} (W_λ^(k−1) B′ − W_λ^(k−2) B′)i,j )²
  ≤ ‖PΩ(W_λ^(k−1) B′ − W_λ^(k−2) B′)‖²_F,    (A.7)
where (A.7) follows from the Cauchy-Schwarz inequality.
Combining (A.6) with (A.7), we get
‖W_λ^(k) − W_λ^(k−1)‖²_F ≤ ‖W_λ^(k−1) B′ − W_λ^(k−2) B′‖²_F = ‖W_λ^(k−1) − W_λ^(k−2)‖²_F.
Now we are left to prove that the difference sequence {W_λ^(k) − W_λ^(k−1)} converges to zero. Combining (A.4) and (A.6), it suffices to prove that ‖P⊥Ω(W_λ^(k−1) B′ − W_λ^(k−2) B′)‖²_F → 0. We have
fλ(W_λ^(k−1), µ_λ^(k−2)) − Qλ(W_λ^(k−1) | W_λ^(k−2), µ_λ^(k−2)) = −‖P⊥Ω(W_λ^(k−1) B′ − W_λ^(k−2) B′)‖²_F,
and the left hand side converges to 0 because
0 ≥ fλ(W_λ^(k−1), µ_λ^(k−2)) − Qλ(W_λ^(k−1) | W_λ^(k−2), µ_λ^(k−2)) ≥ fλ(W_λ^(k−1), µ_λ^(k−2)) − fλ(W_λ^(k−2), µ_λ^(k−2)) → 0,
which completes the proof.
Proof of Lemma 5. Let (µ_λ^(m_k), W_λ^(m_k)) → (µ̂λ, Ŵλ); then Lemma 4 gives (µ_λ^(m_k−1), W_λ^(m_k−1)) → (µ̂λ, Ŵλ). Since we have
W_λ^(m_k) = Sλ((PΩ(Y − µ_λ^(m_k−1) IS) + P⊥Ω(W_λ^(m_k−1) B′)) B),
µ_λ^(m_k) = ( Σ_{(i,j)∈Ω∩ΩS} (Y − W_λ^(m_k) B′)i,j ) / |Ω ∩ ΩS|,
taking limits on both sides gives us the desired result.
Proof of Theorem 1. Let (µ̂λ, Ŵλ) be one limit point; then we have
‖Ŵλ − W_λ^(k)‖²_F = ‖Sλ((PΩ(Y − µ̂λ IS) + P⊥Ω(Ŵλ B′)) B) − Sλ((PΩ(Y − µ_λ^(k−1) IS) + P⊥Ω(W_λ^(k−1) B′)) B)‖²_F    (A.8)
  ≤ |Ω ∩ ΩS| (µ̂λ − µ_λ^(k−1))² + ‖P⊥Ω((Ŵλ − W_λ^(k−1)) B′)‖²_F,    (A.9)
where (A.8) uses Lemma 5 and (A.9) uses Lemma 2. Meanwhile,
|Ω ∩ ΩS| (µ̂λ − µ_λ^(k−1))² = (1 / |Ω ∩ ΩS|) ( Σ_{(i,j)∈Ω∩ΩS} ((Ŵλ − W_λ^(k−1)) B′)i,j )² ≤ ‖PΩ((Ŵλ − W_λ^(k−1)) B′)‖²_F.    (A.10)
Combining (A.9) and (A.10), we have
‖Ŵλ − W_λ^(k)‖_F ≤ ‖Ŵλ − W_λ^(k−1)‖_F.
Hence the sequence ‖Ŵλ − W_λ^(k)‖_F is non-increasing and has a limit. But since there exists a subsequence W_λ^(m_k) converging to Ŵλ, the limit equals 0, which proves W_λ^(k) → Ŵλ, µ_λ^(k) → µ̂λ.
Therefore we have proved that the sequence (µ_λ^(k), W_λ^(k)) always converges.
Meanwhile, the first part of equation (4.1) and Lemma 5 in Mazumder et al. (2010) guarantee 0 ∈ ∂_W fλ(Ŵλ, µ̂λ). By taking the derivative directly we have 0 = ∂_µ fλ(Ŵλ, µ̂λ). Therefore (Ŵλ, µ̂λ) is a stationary point of fλ(W, µ). Notice that the loss function fλ(W, µ) is a convex function with respect to (W, µ). Thus we have proved that the limit point (Ŵλ, µ̂λ) minimizes the function fλ(W, µ).
B DATA GENERATION
Let G be the grid of T equi-distributed points and let B be the basis of K spline functions evaluated on the grid G. We simulate the N × T observation matrix Y as the sum of three parts,
Y = WB + µIS + E,
where W follows a mixture-Gaussian distribution with a low-rank structure, IS is the treatment matrix with uniformly distributed starting times, and E represents the i.i.d. measurement error. The specific procedure is described below.
1. Generating W given parameters κ ∈ (0, 1), r1, r2 ∈ R, s1, s2 ∈ R^K_{≥0}:
   (a) Sample two K × K orthogonal matrices V1, V2 via singular-value-decomposing two random matrices.
   (b) Sample two unit-length K-vectors γ1, γ2 by normalizing i.i.d. normal samples.
   (c) Draw a vector t ∈ R^N from i.i.d. Bernoulli(κ) samples. Denote the all-one vector by 1.
   (d) Draw N × K matrices U1, U2 from i.i.d. standard normal random variables.
   (e) Set
       W ← t · [ r1 γ1 + U1 diag[√s1] V1 ] + (1 − t) · [ r2 γ2 + U2 diag[√s2] V2 ],
       where diag[s] is the diagonal matrix with diagonal elements s, “·” represents coordinatewise multiplication, and we are recycling t, 1 − t and ri γi to match the dimensions.
2. Generating IS given parameter ptr ∈ (0, 1):
   (a) For each i = 1, . . . , N, sample Ti uniformly at random from {1, . . . , ⌊T/ptr⌋}.
   (b) Set IS ← (1{j ≥ Ti})_{1≤i≤N, 1≤j≤T}.
3. Given a noise level ε ∈ R≥0, E is drawn from i.i.d. Normal(0, ε²) samples.
4. Given parameter µ ∈ R, let Y0 ← WB + µIS + E.
5. Given parameter ρ ∈ (0, 1), draw a 0-1 matrix IΩ from i.i.d. Bernoulli(ρ) samples. Let Ω denote the set of non-zero entries of IΩ, namely, the set of observed data. Set Y ← (Yij)_{1≤i≤N, 1≤j≤T}, where Yij = (Y0)ij if (IΩ)ij = 1 and Yij = NA otherwise.
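A NumPy sketch of this generator is given below, using the auxiliary parameter values listed after it where the paper specifies them; the random seed, the default values of µ and ρ, and the placeholder orthonormal basis standing in for the spline basis B are our own choices.

```python
import numpy as np

def simulate_data(N=500, T=51, K=7, kappa=0.33, r1=1.0, r2=2.0,
                  p_tr=0.8, eps=0.5, mu=2.0, rho=0.3, seed=0):
    """Sketch of the Appendix B generator: Y = W B + mu * I_S + noise, observed at rate rho.
    B is a placeholder K x T orthonormal basis standing in for the spline basis."""
    rng = np.random.default_rng(seed)
    s1 = np.array([1, 0.4, 0.005] + [0.1 * np.exp(-k) for k in range(3, K)])
    s2 = np.array([1.3, 0.2, 0.005] + [0.1 * np.exp(-k) for k in range(3, K)])
    # step 1: mixture-Gaussian W with low-rank structure
    V1 = np.linalg.svd(rng.normal(size=(K, K)))[0]
    V2 = np.linalg.svd(rng.normal(size=(K, K)))[0]
    g1 = rng.normal(size=K); g1 /= np.linalg.norm(g1)
    g2 = rng.normal(size=K); g2 /= np.linalg.norm(g2)
    t = rng.binomial(1, kappa, size=(N, 1)).astype(float)
    U1, U2 = rng.normal(size=(N, K)), rng.normal(size=(N, K))
    W = t * (r1 * g1 + U1 @ np.diag(np.sqrt(s1)) @ V1) \
        + (1.0 - t) * (r2 * g2 + U2 @ np.diag(np.sqrt(s2)) @ V2)
    # step 2: treatment indicators with uniformly distributed starting times
    start = rng.integers(1, int(T / p_tr) + 1, size=N)
    I_S = (np.arange(1, T + 1)[None, :] >= start[:, None]).astype(float)
    # steps 3-5: noise, signal, and Bernoulli(rho) observation mask
    B = np.linalg.qr(rng.normal(size=(T, K)))[0].T        # placeholder K x T basis
    Y0 = W @ B + mu * I_S + eps * rng.normal(size=(N, T))
    mask = rng.binomial(1, rho, size=(N, T)).astype(bool)
    Y = np.where(mask, Y0, np.nan)
    return Y, mask, I_S, B, W

Y, mask, I_S, B, W = simulate_data()
```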
In the actual simulation, we fix the auxiliary parameters as follows:
K = 7, T = 51, N = 500,
κ = 0.33, r1 = 1, r2 = 2,
s1 = [1, 0.4, 0.005, 0.1 exp(−3), ..., 0.1 exp(−K + 1)], s2 = [1.3, 0.2, 0.005, 0.1 exp(−3), ..., 0.1 exp(−K + 1)], ptr = 0.8, ε = 0.5.
The remaining parameters are treatment effect µ and observation rate ρ, which we allow to vary across different trials. | 1. What are the main contributions and novelties of the proposed Coordinatewise-Soft-Impute (CSI) algorithm?
2. How does the reviewer assess the significance of the existing literature on modeling treatment effects on disease progression?
3. What are the weaknesses of the proposed method compared to prior works, particularly in terms of novelty and fairness in comparisons?
4. How does the reviewer evaluate the citations and formatting in the paper?
5. Are there any specific suggestions for improving the paper, such as providing a description or readme file for running experiments? | Review | Review
The paper introduces the Coordinatewise-Soft-Impute (CSI) algorithm that attempts to estimate the progression of a disease while taking into account the effect of a single binary treatment intervention. The authors also prove that CSI converges to the global solution regardless of the initialization. The paper is concluded with experimental results on synthetic and real-world data, comparing the proposed method with the contender methods functional Principal Component Analysis (fPCA) and Soft-Longitudinal-Impute (SLI).
This paper should be rejected due to the following arguments:
- The authors claim that the “existing literature talks little about modeling treatment effect on disease progression.” However, this is not true. Here are a few works and the references within, for instance:
Xu, Y., Xu, Y., & Saria, S. (2016, December). A Bayesian nonparametric approach for estimating individualized treatment-response curves. In Machine Learning for Healthcare Conference (pp. 282-300).
Lim, B. (2018). Forecasting treatment responses over time using recurrent marginal structural networks. In Advances in Neural Information Processing Systems (pp. 7483-7493).
- The proposed method is a simple extension of that of (Kidzinski & Hastie, 2018), by adding a term that accounts for treatment and its timing. It does not present enough novelty.
- CSI is empirically compared with fPCA and SLI, however, neither of these two methods take into account the effect of applying treatment. This comparison is unfair and therefore, cannot justify the superiority of CSI.
Things to improve the paper that did not impact the score:
- Citations throughout the paper had wrong formatting. Often \citep should have been used instead of \citet.
- Page 1, last par., last sentence: Thus, they need to “be” modeled differently.
- Page 2, last par., line -2: Principle >> Principal
- Page 3, par. 2, line 4: the observation matrix Y is N \times T (not N \times K).
- Page 4, line 2 of Eq. (3.2): first argument misses Y.
- Code was included but no description or readme file was provided for how to run the experiments. |
ICLR | Title
Modeling treatment events in disease progression
Abstract
Ability to quantify and predict progression of a disease is fundamental for selecting an appropriate treatment. Many clinical metrics cannot be acquired frequently either because of their cost (e.g. MRI, gait analysis) or because they are inconvenient or harmful to a patient (e.g. biopsy, x-ray). In such scenarios, in order to estimate individual trajectories of disease progression, it is advantageous to leverage similarities between patients, i.e. the covariance of trajectories, and find a latent representation of progression. However, to our knowledge, most of existing methods for estimating trajectories do not explicitly account for treatment events such as surgery in-between observations, which dramatically decreases their adequacy for clinical practice. In this study, we develop a first machine learning framework named Coordinatewise-Soft-Impute (CSI) for analyzing disease progression from sparse observations in the presence of confounding events. CSI is easy to implement and is guaranteed to converge to the global minimum of the corresponding optimization problem. Experimental results also demonstrate the effectiveness of CSI using both simulated and real datasets.
1 INTRODUCTION
The course of disease progression in individual patients is one of the biggest uncertainties in medical practice. In an ideal world, accurate, continuous assessment of a patient’s condition helps with prevention and treatment. However, many medical tests are either harmful or inconvenient to perform frequently, and practitioners have to infer the development of disease from sparse, noisy observations.
In its simplest form, the problem of modeling disease progressions is to fit the curve of y(t), t ∈ [tmin, tmax] for each patient, given sparse observations y := (ỹ(t1), . . . , ỹ(tn)). Due to the highdimensional nature of longitudinal data, existing results usually restrict solutions to subspace of functions and utilize similarities between patients via enforcing low-rank structures. One popular approach is the mixed effect models, including Gaussian process approaches (Verbeke, 1997; Zeger et al., 1988) and functional principal components (James et al., 2000). While generative models are commonly used and have nice theoretical properties, their result could be sensitive to the underlying distributional assumptions of observed data and hard to adapt to different applications. Another line of research is to pose the problem of disease progression estimation as an optimization problem. Kidzinski and Hastie. Kidziński & Hastie (2018) proposed a framework which formulates the problem as a matrix completion problem and solve it using matrix factorization techniques. This method is distribution-free and flexible to possible extensions.
Meanwhile, both types of solutions model the natural progression of disease using observations of the targeted variables only. They fail to incorporate the existence and effect of human interference: medications, therapies, surgeries, etc. Two patients with similar symptoms initially may have different futures if they choose different treatments. Without that information, predictions can be way-off.
To the best of our knowledge, existing literature talks little about modeling treatment effect on disease progression. In Kidziński & Hastie (2018), authors use concurrent observations of auxillary variables (e.g. oxygen consumption to motor functions) to help estimate the target one, under the assumption that both variables reflect the intrinsic latent feature of the disease and are thus correlated. Treatments of various types, however, rely on human decisions and to some extent, an exogenous variable to the development of disease. Thus they need to modeled differently.
In this work, we propose a model for tracking disease progression that includes the effects of treatments. We introduce the Coordinatewise-Soft-Impute (CSI) algorithm for fitting the model and investigate its theoretical and practical properties. The contribution of our work is threefold: First, we propose a model and an algorithm CSI, to estimate the progression of disease which incorporates the effect of treatment events. The framework is flexible, distribution-free, simple to implement and generalizable. Second, we prove that CSI converges to the global solution regardless of the initialization. Third, we compare the performance of CSI with various other existing methods on both simulated data and a dataset of Gillette Children’s Hospital with patients diagnosed with Cerebral Palsy, and demonstrate the superior performances of CSI.
The rest of the paper is organized as follows. In Section 2 we state the problem and review existing methods. Next, in Section 3 we describe the model and the algorithm. Theoretic properties of the algorithm are derived in Section 4. Finally, in Section 5 and 6 we provides empirical results of CSI on the simulated and the real datesets respectively. We discuss some future directions in Section 7.
2 PROBLEM STATEMENT AND RELATED WORK
Let y(t) be the trajectory of our objective variable, such as the size of tumor, over fixed time range t ∈ [tmin, tmax], and N be the number of patients. For each patient 1 ≤ i ≤ N , we measure its trajectory yi(t) at ni irregularly time points ti = [ti,1, ti,2, ..., ti,ni ]
′ and denote the results as yi = [yi,1, ..., yi,ni ] ′ = [yi(ti,1), ..., yi(ti,ni)] ′. We are primarily interested in estimating the disease progression trajectories {yi(t)}Ni=1 of all N patients, based on observation data {(ti,yi)}Ni=1. To fit a continuous curve based on discrete observations, we restrict our estimations to a finitedimensional space of functions. Let {bi, i ∈ N} be a fixed basis of L2([tmin, tmax]) (e.g. splines, Fourier basis) and b = {bi : 1 ≤ i ≤ K} be first K dimensions of it. The problem of estimating yi(t) can then be reduced to the problem of estimating the coefficients wi = [wi,1, wi,2, · · · , wi,K ]′ such that w′ib(t) is close to yi(t) at time t ∈ ti. Though intuitive, the above method has two main drawbacks. First, when the number of observations per patient is less than or equal to the number of basis functions K, we can perfectly fit any curve without error, leading to overfitting. Moreover, this direct approach ignores the similarities between curves. Different patients may share similar trend of the trajectories which could potentially imporve the prediction. Below we describe two main lines of research improving on this, the mixed-effect model and the matrix completion model.
2.1 LINEAR MIXED-EFFECT MODEL
In mixed-effect models, every trajectory yi(t) is assumed to be composed of two parts: the fixed effect µ(t) = m′b(t) for some m ∈ RK that remains the same among all patients and a random effect wi ∈ RK that differs for each i ∈ {1, . . . , N}. In its simplest form, we assume
wi ∼ N (0,Σ) and yi|wi ∼ N (µi +Biwi, σ2Ini), where Σ is the K × K covariance matrix, σ is the standard deviation and µi = [µ(ti,1), µ(ti,2), · · ·µ(ti,ni)]′, Bi = [b(ti,1),b(ti,2), · · · ,b(ti,ni)]′ are functions µ(t) and b(t) evaluated at the times ti, respectively. Estimations of model parameters µ,Σ can be made via expectation maximization (EM) algorithm (Laird & Ware, 1982). Individual coefficients wi can be estimated using the best unbiased linear predictor (BLUP) (Henderson, 1975).
In linear mixed-effect model, each trajectory is estimated with |wi| = K degrees of freedom, which can still be too complex when observations are sparse. One typical solution is to assume a low-rank structure of the covariance matrix Σ by introducing a contraction mapping A from the functional basis to a low-dimensional latent space. More specifically, one may rewrite the LMM model as
yi|w̃i ∼ N (µi +BiAw̃i, σ2Ini), where A is a K × q matrix with q < K and w̃i ∈ Rq is the new, shorter random effect to be estimated. Methods based on low-rank approximations are widely adopted and applied in practice and different algorithms on fitting the model have been proposed (James et al., 2000; Lawrence, 2004; Schulam & Arora, 2016). In the later sections, we will compare our algorithm with one specific implementation named functional-Principle-Component-Analysis (fPCA) (James et al., 2000), which uses EM algorithm for estimating model parameters and latent variables wi.
2.2 MATRIX COMPLETION MODEL
While the probabilistic approach of mixed-effect models offers many theoretical advantages including convergence rates and inference testing, it is often sensitive to the assumptions on distributions, some of which are hard to verify in practice. To avoid the potential bias of distributional assumptions in mixed-effect models, Kidzinski and Hastie (Kidziński & Hastie, 2018) formulate the problem as a sparse matrix completion problem. We will review this approach in the current section.
To reduce the continuous-time trajectories into matrices, we discretize the time range [tmin, tmax] into T equi-distributed points G = [τ1, . . . , τT ] with τ1 = tmin, τT = tmax and let B = [b(τ1),b(τ2), · · · ,b(τT )]′ ∈ RT×K be the projection of the K-truncated basis b onto grid G. The N ×K observation matrix Y is constructed from the data {(ti,yi)}Ni=1 by rounding the time ti,j of every observation yi(ti,j) to the nearest time grid and regarding all other entries as missing values. Due to sparsity, we assume that no two observation yi(ti,j)’s are mapped to the same entry of Y .
Let Ω denote the set of all observed entries of Y . For any matrix A, let PΩ(A) be the projection of A onto Ω, i.e. PΩ(A) = M where Mi,j = Ai,j for (i, j) ∈ Ω and Mi,j = 0 otherwise. Similarly, we define P⊥Ω (A) = A− PΩ(A) to be the projection on the complement of Ω. Under this setting, the trajectory prediction problem is reduced to the problem of fitting a N ×K matrix W such that WB′ ≈ Y on observed indices Ω. The direct way of estimating W is to solve the optimization problem
arg min W
1 2 ‖PΩ(Y −WB′)‖2F , (2.1)
where ‖ · ‖F is the Fröbenius norm. Again, if K is larger than the number of observations for some subject we will overfit. To avoid this problem we need some additional constraints on W . A typical approach in the matrix completion community is to introduce a nuclear norm penalty—a relaxed version of the rank penalty while preserving convexity (Rennie & Srebro, 2005; Candès & Recht, 2009). The optimization problem with the nuclear norm penalty takes form
arg min W
1 2 ‖PΩ(Y −WB′)‖2F + λ‖W‖∗, (2.2)
where λ > 0 is the regularization parameter, ‖ · ‖F is the Fröbenius norm, and ‖ · ‖∗ is the nuclear norm, i.e. the sum of singular values. In Kidziński & Hastie (2018), a Soft-Longitudinal-Impute (SLI) algorithm is proposed to solve (2.2) efficiently. We refer the readers to Kidziński & Hastie (2018) for detailed description of SLI while noting that it is also a special case of our algorithm 1 defined in the next section with µ fixed to be 0.
3 MODELING TREATMENT IN DISEASE PROGRESSION
In this section, we introduce our model on effect of treatments in disease progression.
A wide variety of treatments with different effects and durations exist in medical practice and it is impossible to build a single model to encompass them all. In this study we take the simplified approach and regard treatment, with the example of one-time surgery in mind, as a non-recurring event with an additive effect on the targeted variable afterward. Due to the flexibility of formulation of optimization problem (2.1), we build our model based on matrix completion framework of Section 2.2.
More specifically, let s(i) ∈ G be the time of treatment of the i’th patient, rounded to the closest τk ∈ G (s(i) =∞ if no treatment is performed). We encode the treatment information as a N × T zero-one matrix IS , where (IS)i,j = 1 if and only τj ≥ s(i), i.e. patient i has already taken the treatment by time τj . Each row of IS takes the form of (0, · · · , 0, 1, · · · , 1). Let µ denote the average additive effect of treatment among all patients. In practice, we have access to the sparse observation matrix Y and surgery matrix IS and aim to estimate the treatment effect µ and individual coefficient matrix W based on Y, IS and the fixed basis matrix B such that WB′ + µIS ≈ Y . Again, to avoid overfitting and exploit the similarities between individuals, we add a penalty term on the nuclear norm of W . The optimization problem is thus expressed as:
arg min µ,W
1 2 ‖PΩ(Y −WB′ − µIS)‖2F + λ‖W‖∗, (3.1)
for some λ > 0.
3.1 COORDINATEWISE-SOFT-IMPUTE (CSI) ALGORITHM
Though the optimization problem (3.1) above does not admit an explicit analytical solution, it is not hard to solve for one of µ or W given the other one. For fixed µ, the problem reduces to the optimization problem (2.2) with Ỹ = Y − µIS and can be solved iteratively by the SLI algorithm Kidziński & Hastie (2018), which we will also specify later in Algorithm 1. For fixed W , we have
arg min µ
1 2 ‖PΩ(Y −WB′ − µIS)‖2F + λ‖W‖∗
= arg min µ
1 2 ‖PΩ(−WB′ − µIS)‖2F = arg min
µ
1
2 ∑ (i,j)∈Ω∩ΩS ((Y −WB′)i,j − µ)2, (3.2)
where ΩS is the set of non-zero indices of IS . Optimization problem (3.2) can be solved by taking derivative with respect to µ directly, which yields
µ̂ =
∑ (i,j)∈Ω∩ΩS (Y −WB ′)i,j
|Ω ∩ ΩS | . (3.3)
The clean formulation of (3.3) motivates us to the following Coordinatewise-Soft-Impute (CSI) algorithm (Algorithm 1): At each iteration, CSI updates Wnew from (Wold, µold) via soft singular value thresholding and then updates µnew from (Wnew, µold) via (3.3), finally it replaces the missing values of Y based (Wnew, µnew). In the definition, we define operator Sλ as for any matrix X , Sλ(X) := UDλV , where X = UDV is the SVD of X and Dλ = diag((max{di − λ, 0})Ki=1) is derived from the diagonal matrix D = diag((di)Ki=1). Note that if we set µ ≡ 0 throughout the updates, then we get back to our base model SLI without treatment effect.
Algorithm 1: COORDINATEWISE-SOFT-IMPUTE 1. Initialize Wold ← all-zero matrix, µold ← 0. 2. Repeat:
(a) Compute Wnew ← Sλ((PΩ(Y − µoldIS) + P⊥Ω (WoldB′))B); (b) Compute µnew ← ∑ (i,j)∈Ω∩ΩS (Y−WnewB′)i,j
|Ω∩ΩS | ; (c) If max {
(µnew−µold)2 µ2old , ‖Wnew−Wold‖2F ‖Wold‖2F
} < ε, exit;
(d) Assign Wold ←Wnew, µold ← µnew. 3. Output Ŵλ ←Wnew, µ̂λ ← µnew.
4 CONVERGENCE ANALYSIS
In this section we study the convergence properties of Algorithm 1. Fix the regularization parameter λ > 0, let (µ(k)λ ,W (k) λ ) be the value of (µ,W ) in the k’th iteration of the algorithm, the exact definition of which is provided below in (4.4). We prove that Algorithm 1 reduces the loss function at each iteration and eventually converges to the global minimizer.
Theorem 1. The sequence (µ(k)λ ,W (k) λ ) converges to a limit point (µ̂λ, Ŵλ) which solves the optimization problem:
(µ̂λ, Ŵλ) = arg min µ,W
1 2 ‖PΩ(Y −WB′ − µIS)‖2F + λ‖W‖∗.
Moreover, (µ̂λ, Ŵλ) satisfies that Ŵλ = Sλ((PΩ(Y − µ̂λIS) + P⊥Ω (ŴλB′))B), µ̂λ = ∑ (i,j)∈Ω∩ΩS (Y − ŴλB ′)i,j
|Ω ∩ ΩS | . (4.1)
The proof of Theorem 1 relies on five technique Lemmas stated below. The detailed proofs of the lemmas and the proof to Theorem 1 are provided in Appendix A. The first two lemmas are on properties of the nuclear norm shrinkage operator Sλ defined in Section 3.1.
Lemma 1. Let W be an N × K matrix and B is an orthogonal T × K matrix of rank K. The solution to the optimization problem minW 12‖Y −WB
′‖2F + λ‖W‖∗ is given by Ŵ = Sλ(Y B) where Sλ(Y B) is defined in Section 3.1.
Lemma 2. Operator Sλ(·) satisfies the following inequality for any two matrices W1, W2 with matching dimensions:
‖Sλ(W1)− Sλ(W2)‖2F ≤ ‖W1 −W2‖2F .
Define
fλ(W,µ) = 1
2 ‖PΩ(Y −WB′ − µIS)‖2F + λ‖W‖∗, (4.2)
Qλ(W |W̃ , µ) = 1
2 ‖PΩ(Y − µIS) + P⊥Ω (W̃B′)−WB′‖2F + λ‖W‖∗. (4.3)
Lemma 1 shows that in the k-th step of Algorithm 1, W (k)λ is the minimizer for function Qλ(·|W (k−1), µ(k)). The next lemma proves the sequence of loss functions fλ(W (k)λ , µ (k) λ ) is monotonically decreasing at each iteration.
Lemma 3. For every fixed λ ≥ 0, the k’th step of the algorithm (µ(k)λ ,W (k) λ ) is given by
W (k) λ = arg minW Qλ(W |W (k−1)λ , µ (k−1) λ ) µ (k) λ =
∑ (i,j)∈Ω∩ΩS (Y −W (k) λ B ′)i,j
|Ω ∩ ΩS | . (4.4)
Then with any starting point (µ(0)λ ,W (0) λ ), the sequence {(µ (k) λ ,W (k) λ )}k satisfies
fλ(W (k) λ , µ (k) λ ) ≤ fλ(W (k) λ , µ (k−1) λ ) ≤ Qλ(W (k) λ |W (k−1) λ , µ (k−1) λ ) ≤ fλ(W (k−1) λ , µ (k−1) λ ).
The next lemma proves that differences (µk − µk−1)2 and ‖W (k)λ −W (k−1) λ ‖2F both converge to 0.
Lemma 4. For any positive integer k, we have ‖W (k+1)λ − W (k) λ ‖2F ≤ ‖W (k) λ − W (k−1) λ ‖2F . Moreover,
µ (k+1) λ − µ (k) λ → 0, W (k+1 λ −W (k) λ → 0 as k →∞.
Finally we show that if the sequence {(µ(k)λ ,W (k) λ )}k, it has to converge to a solution of (4.1).
Lemma 5. Any limit point (µ̂λ, Ŵλ) of sequences {(µ(k)λ ,W (k) λ )}k satisfies (4.1).
5 SIMULATION STUDY
In this section we illustrate properties of our Coordinatewise-Soft-Impute (CSI) algorithm via simulation study. The simulated data are generated from a mixed-effect model with low-rank covariance structure on W : Y = WB + µIS + E , for which the specific construction is deferred to Appendix B. Below we discuss the evaluation methods as well as the results from simulation study.
5.1 METHODS
We compare the Coordinatewise-Soft-Impute (CSI) algorithm specified in Algorithm 1 with the vanilla algorithm SLI (corresponding to µ̂ = 0 in our notation) defined in Kidziński & Hastie (2018) and the fPCA algorithm defined in James et al. (2000) based on mixed-effect model. We train all three algorithms on the same set of basis functions and choose the tuning parameters λ (for CSI and
SLI) and R (for fPCA) using a 5-fold cross-validation. Each model is then re-trained using the whole training set and tested on a held-out test set Ωtest consisting 10% of all data.
The performance is evaluated in two aspects. First, for different combinations of the treatment effect µ and observation density ρ, we train each of the three algorithms on the simulated data set, and compute the relative squared error between the ground truth µ and estimation µ̂., i.e., RSE(µ̂) = (µ̂− µ)2/µ2. Meanwhile, for different algorithms applied to the same data set, we compare the mean square error between observation Y and estimation Ŷ over test set Ωtest, namely,
MSE(Ŷ ) = 1 |Ωtest| ∑
(i,j)∈Ωtest
(Yij − Ŷij)2 = 1
|Ωtest| ‖PΩtest(Y )− PΩtest(Ŷ )‖2F (5.1)
We train our algorithms with all combinations of treatment effect µ ∈ {0, 0.2, 0.4, · · · , 5}, observation rate ρ ∈ {0.1, 0.3, 0.5}, and thresholding parameter λ ∈ {0, 1, · · · , 4} (for CSI or SLI) or rank R ∈ {2, 3, · · · , 6} (for fPCA). For each fixed combination of parameters, we implemented each algorithm 10 times and average the test error.
5.2 RESULTS
The results are presented in Table 1 and Figure 1. From Table 1 and the left plot of Figure 1, we have the following findings:
1. CSI achieves better performance than SLI and fPCA, regardless of the treatment effect µ and observation rate ρ. Meanwhile SLI performs better than fPCA.
2. All three methods give comparable errors for smaller values of µ. In particular, our introduction of treatment effect µ does not over-fit the model in the case of µ = 0.
3. As the treatment effect µ increases, the performance of CSI remains the same whereas the performances of SLI and fPCA deteriorate rapidly. As a result, CSI outperforms SLI and fPCA by a significant margin for large values of µ. For example, when ρ = 0.1, the MSE(Ŷ ) of CSI decreases from 72.3% of SLI and 59.6% of fPCA at µ = 1 to 12.4% of SLI and 5.8% of fPCA at µ = 5.
4. All three algorithms suffer a higher MSE(Ŷ ) with smaller observation rate ρ. The biggest decay comes from SLI with an average 118% increase in test error from ρ = 0.5 to ρ = 0.1. The performances of fPCA and CSI remains comparatively stable among different observation rate with a 6% and 12% increase respectively. This implies that our algorithm is tolerant to low observation rate.
To further investigate CSI’s ability to estimate µ, we plot the relative squared error of µ̂ using CSI with different observation rate in the right plot of Figure 1. As shown in Figure 1, regardless of the choice of observation rate ρ and treatment effect µ, RSE(µ̂) is always smaller than 1% and most of the estimations achieves error less than 0.1%. Therefore we could conclude that, even for sparse matrix Y , the CSI algorithm could still give very accurate estimate of the treatment effect µ.
6 DATA STUDY
In this section, we apply our methods to real dataset on the progression of motor impairment and gait pathology among children with Cerebral Palsy (CP) and evaluate the effect of orthopaedic surgeries.
Cerebral palsy is a group of permanent movement disorders that appear in early childhood. Orthopaedic surgery plays a major role in minimizing gait impairments related to CP (McGinley et al.,
2012). However, it could be hard to correctly evaluate the outcome of a surgery. For example, the seemingly positive outcome of a surgery may actually due to the natural improvement during puberty. Our objective is to single out the effect of surgeries from the natural progression of disease and use that extra piece of information for better predictions.
6.1 DATA AND METHOD
We analyze a data set of Gillette Children’s Hospital patients, visiting the clinic between 1994 and 2014, age ranging between 4 and 19 years, mostly diagnosed with Cerebral Palsy. The data set contains 84 visits of 36 patients without gait disorders and 6066 visits of 2898 patients with gait pathologies. Gait Deviation Index (GDI), one of the most commonly adopted metrics for gait functionalities (Schwartz & Rozumalski, 2008), was measured and recorded at each clinic visit along with other data such as birthday, subtype of CP, date and type of previous surgery and other medical results.
Our main objective is to model individual disease progression quantified as GDI values. Due to insufficiency of data, we model surgeries of different types and multiple surgeries as a single additive effect on GDI measurements following the methodology from Section 3. We test the same three methods CSI, SLI and fPCA as in Section 5, and compare them to two benchmarks—the population mean of all patients (pMean) and the average GDI from previous visits of the same patient (rMean).
All three algorithms was trained on the spline basis of K = 9 dimensions evaluated at a grid of T = 51 points, with regularization parameters λ ∈ {20, 25, ..., 40} for CSI and SLI and rank constraints r ∈ {2, . . . , 6} for fPCA. To ensure sufficient observations for training, we cross validate and test our models on patients with at least 4 visits and use the rest of the data as a common training set. The effective size of 2-fold validation sets and test set are 5% each. We compare the result of each method/combination of parameters using the mean square error of GDI estimations on held-out entries as defined in (5.1).
6.2 RESULTS
We run all five methods on the same training/validation/test set for 40 times and compare the mean and sd of test-errors. The results are presented in Table 2 and Figure 2. Compared with the null model pMean (Column 2 of Table 2), fPCA gives roughly the same order of error; CSI, SLI and rowMean provide better predictions, achieving 62%, 66% and 73% of the test errors respectively. In particular, our algorithm CSI improves the result of vanilla model SLI by 7%, it also provide a stable estimation with the smallest sd across multiple selections of test sets.
We take a closer look at the low-rank decomposition of disease progression curves provided by algorithms. Fix one run of algorithm CSI with λ? = 30, there are 6 non-zero singular value vectors, which we will refer as principal components. We illustrate the top 3 PCs scaled with corresponding singular values in Figure 3a. The first PC recovers the general trend that gait disorder develops
through age 1-10 and partially recovers during puberty. The second and third PC reflects fluctuations during different periods of child growth. By visual inspection, similar trends can be find in the top components of SLI and fPCA as well.
An example of a predicted curve from patient ID 5416 is illustrated in Figure 3b, where the blue curve represents the prediction without the estimated treatment effect µ̂ = 4.33, the green curve the final prediction, and the red dots the actual observations. It can be seen that the additive treatment effect helps to model the sharp difference between the exam before surgery (first observation) and later exams.
7 CONCLUSION AND FUTURE WORK
In this paper, we propose a new framework for modeling the effect of treatment events in disease progression and provide a corresponding algorithm, CSI. To the best of our knowledge, it is the first comprehensive model that explicitly incorporates the effect of treatment events. We would also like to mention that, although we focus on the case of disease progression in this paper, our framework is quite general and can be used to analyze data in any discipline with sparse observations as well as external effects.
There are several potential extensions to our current framework. Firstly, our framework could be extended to more complicated settings. In our model, treatments have been characterized as the binary matrix IS with a single parameter µ. In practice, each individual may undergo different types of surgeries one or multiple times. Secondly, the treatment effect may be correlated with the latent variables of disease type, and can be estimated together with the random effect wi. Finally, our framework could be used to evaluate the true effect of a surgery. A natural question is: does surgery really help? CSI provides an estimate of the surgery effect µ; it would be interesting to design statistical hypothesis testing/causal inference procedures to answer the proposed question.
Though we are convinced that our work will not be the last word in estimating disease progression, we hope our ideas are useful for further research and that readers will take them further.
A PROOFS
Proof of Lemma 1. Note that the solution of the optimization problem
min_A (1/2)‖Z − A‖²_F + λ‖A‖∗ (A.1)
is given by Â = Sλ(Z) (see Cai et al. (2010) for a proof). Therefore it suffices to show that the minimizer of the optimization problem (A.1) is the same as the minimizer of the following problem:
min_W (1/2)‖Y B − W‖²_F + λ‖W‖∗.
Using the fact that ‖A‖2F = Tr(AA′) and B′B = IK , we have
arg min_W (1/2)‖Y B − W‖²_F + λ‖W‖∗ = arg min_W (1/2)(Tr(Y BB′Y′) + Tr(WW′) − 2Tr(Y BW′)) + λ‖W‖∗
= arg min_W (1/2)(Tr(WW′) − 2Tr(Y BW′)) + λ‖W‖∗.
On the other hand
arg min_W (1/2)‖Y − WB′‖²_F + λ‖W‖∗ = arg min_W (1/2)(Tr(Y Y′) + Tr(WW′) − 2Tr(Y BW′)) + λ‖W‖∗
= arg min_W (1/2)(Tr(WW′) − 2Tr(Y BW′)) + λ‖W‖∗
= arg min_W (1/2)‖Y B − W‖²_F + λ‖W‖∗
= Sλ(Y B),
as desired.
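For concreteness, the following is a minimal NumPy sketch of the singular value soft-thresholding operator Sλ used throughout these proofs; the function name and the random test matrix are illustrative and not part of the paper's implementation.

```python
import numpy as np

def soft_threshold_singular_values(Z, lam):
    """S_lambda(Z): closed-form minimizer of (1/2)||Z - A||_F^2 + lam * ||A||_*
    (singular value thresholding, Cai et al., 2010)."""
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    s_shrunk = np.maximum(s - lam, 0.0)   # shrink each singular value toward zero
    return (U * s_shrunk) @ Vt            # reassemble with the shrunken spectrum

# Quick usage check on a random matrix.
Z = np.random.default_rng(0).standard_normal((20, 9))
A_hat = soft_threshold_singular_values(Z, lam=1.0)
```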
Proof of Lemma 2. We refer the readers to the proof in Mazumder et al. (2010, Section 4, Lemma 3).
Proof of Lemma 3. First we argue that µ(k)λ = arg minµ fλ(W (k) λ , µ) and the first inequality immediately follows. We have
arg min_µ fλ(W^(k)_λ, µ) = arg min_µ ‖P_Ω(Y − W^(k)_λ B′ − µ I_S)‖²_F = arg min_µ Σ_{(i,j)∈Ω∩Ω_S} ((Y − W^(k)_λ B′)_{i,j} − µ)².
Taking the derivative with respect to µ directly gives µ^(k)_λ = arg min_µ fλ(W^(k)_λ, µ), as desired.
For the remaining two inequalities, notice that
fλ(W^(k)_λ, µ^(k−1)_λ) = (1/2)‖P_Ω(Y − W^(k)_λ B′ − µ^(k−1)_λ I_S)‖²_F + λ‖W^(k)_λ‖∗
≤ (1/2)‖P_Ω(Y − µ^(k−1)_λ I_S) + P⊥_Ω(W^(k−1)_λ B′) − W^(k)_λ B′‖²_F + λ‖W^(k)_λ‖∗ (A.2)
= Qλ(W^(k)_λ | W^(k−1)_λ, µ^(k−1)_λ) ≤ Qλ(W^(k−1)_λ | W^(k−1)_λ, µ^(k−1)_λ) (A.3)
= (1/2)‖P_Ω(Y − W^(k−1)_λ B′ − µ^(k−1)_λ I_S)‖²_F + λ‖W^(k−1)_λ‖∗ = fλ(W^(k−1)_λ, µ^(k−1)_λ).
Here (A.2) holds because we have
(1/2)‖P_Ω(Y − µ^(k−1)_λ I_S) + P⊥_Ω(W^(k−1)_λ B′) − W^(k)_λ B′‖²_F
= (1/2)‖P_Ω(Y − µ^(k−1)_λ I_S − W^(k)_λ B′) + P⊥_Ω(W^(k−1)_λ B′ − W^(k)_λ B′)‖²_F
= (1/2)‖P_Ω(Y − µ^(k−1)_λ I_S − W^(k)_λ B′)‖²_F + (1/2)‖P⊥_Ω(W^(k−1)_λ B′ − W^(k)_λ B′)‖²_F
≥ (1/2)‖P_Ω(Y − µ^(k−1)_λ I_S − W^(k)_λ B′)‖²_F.
(A.3) follows from the fact that W^(k)_λ = arg min_W Qλ(W | W^(k−1)_λ, µ^(k−1)_λ).
Proof of Lemma 4. First we analyze the behavior of {µ(k)λ },
fλ(W^(k)_λ, µ^(k−1)_λ) − fλ(W^(k)_λ, µ^(k)_λ) = (1/2)‖P_Ω(Y − W^(k)_λ B′ − µ^(k−1)_λ I_S)‖²_F − (1/2)‖P_Ω(Y − W^(k)_λ B′ − µ^(k)_λ I_S)‖²_F = (|Ω ∩ Ω_S|/2) (µ^(k)_λ − µ^(k−1)_λ)².
Meanwhile, the sequence (· · · , fλ(W^(k−1)_λ, µ^(k−1)_λ), fλ(W^(k)_λ, µ^(k−1)_λ), fλ(W^(k)_λ, µ^(k)_λ), · · · ) is decreasing and lower bounded by 0, and therefore converges to a non-negative number, yielding the differences fλ(W^(k)_λ, µ^(k−1)_λ) − fλ(W^(k)_λ, µ^(k)_λ) → 0 as k → ∞. Hence
µ^(k)_λ − µ^(k−1)_λ → 0, (A.4)
as desired.
The sequence {W^(k)_λ} is slightly more complicated; a direct calculation gives
‖W^(k)_λ − W^(k−1)_λ‖²_F = ‖Sλ(P_Ω(Y − µ^(k−1)_λ I_S) + P⊥_Ω(W^(k−1)_λ B′)) − Sλ(P_Ω(Y − µ^(k−2)_λ I_S) + P⊥_Ω(W^(k−2)_λ B′))‖²_F
≤ ‖P_Ω(Y − µ^(k−1)_λ I_S) + P⊥_Ω(W^(k−1)_λ B′) − P_Ω(Y − µ^(k−2)_λ I_S) − P⊥_Ω(W^(k−2)_λ B′)‖²_F (A.5)
= |Ω ∩ Ω_S|(µ^(k−1)_λ − µ^(k−2)_λ)² + ‖P⊥_Ω(W^(k−1)_λ B′ − W^(k−2)_λ B′)‖²_F, (A.6)
where (A.5) follows from Lemma 2, and (A.6) can be derived by pairing the four terms according to P_Ω and P⊥_Ω.
By definition of µ(k)λ , we have
|Ω ∩ Ω_S|(µ^(k−1)_λ − µ^(k−2)_λ)² = (1/|Ω ∩ Ω_S|) ( Σ_{(i,j)∈Ω∩Ω_S} (W^(k−1)_λ B′ − W^(k−2)_λ B′)_{i,j} )²
≤ ‖P_Ω(W^(k−1)_λ B′ − W^(k−2)_λ B′)‖²_F, (A.7)
where (A.7) follows from the Cauchy-Schwarz inequality.
Combining (A.6) with (A.7), we get
‖W^(k)_λ − W^(k−1)_λ‖²_F ≤ ‖W^(k−1)_λ B′ − W^(k−2)_λ B′‖²_F = ‖W^(k−1)_λ − W^(k−2)_λ‖²_F.
Now we are left to prove that the difference sequence {W (k)λ −W (k−1) λ } converges to zero. Combining (A.4) and (A.7) it suffices to prove that ‖P⊥Ω (W (k−1) λ B ′ −W (k−2)λ B′)‖2F → 0. We have
fλ(W^(k−1)_λ, µ^(k−2)_λ) − Qλ(W^(k−1)_λ | W^(k−2)_λ, µ^(k−2)_λ) = −‖P⊥_Ω(W^(k−1)_λ B′ − W^(k−2)_λ B′)‖²_F,
and the left hand side converges to 0 because
0 ≥ fλ(W^(k−1)_λ, µ^(k−2)_λ) − Qλ(W^(k−1)_λ | W^(k−2)_λ, µ^(k−2)_λ)
≥ fλ(W^(k−2)_λ, µ^(k−2)_λ) − fλ(W^(k−1)_λ, µ^(k−2)_λ) → 0,
which completes the proof.
Proof of Lemma 5. Let (µ(mk)λ ,W (mk) λ )→ (µ̂λ, Ŵλ), then Lemma 4 gives (µ (mk−1) λ ,W (mk−1) λ )→ (µ̂λ, Ŵλ). Since we have
W^(mk)_λ = Sλ((P_Ω(Y − µ^(mk−1)_λ I_S) + P⊥_Ω(W^(mk−1)_λ B′))B),
µ^(mk)_λ = Σ_{(i,j)∈Ω∩Ω_S} (Y − W^(mk−1)_λ B′)_{i,j} / |Ω ∩ Ω_S|.
Taking limits on both sides gives us the desired result.
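To make the two fixed-point equations above concrete, here is a minimal NumPy sketch of one alternating update; the function names, the zero-filled representation of Y, and the mask conventions are our assumptions and not the authors' code.

```python
import numpy as np

def S_lam(Z, lam):
    # Singular value soft-thresholding (see the sketch after the proof of Lemma 1).
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    return (U * np.maximum(s - lam, 0.0)) @ Vt

def csi_step(Y, mask, B, I_S, W, mu, lam):
    """One alternating update. Y: N x T with zeros at unobserved entries,
    mask: N x T indicator of Omega, B: T x K basis with orthonormal columns,
    I_S: N x T treatment indicator."""
    fitted = W @ B.T
    # P_Omega(Y - mu * I_S) + P_Omega^perp(W B')
    filled = mask * (Y - mu * I_S) + (1.0 - mask) * fitted
    W_new = S_lam(filled @ B, lam)                           # W update, as in Lemma 5
    post = mask * I_S                                        # entries in Omega ∩ Omega_S
    mu_new = ((Y - W_new @ B.T) * post).sum() / post.sum()   # mu update, as in Lemma 3
    return W_new, mu_new
```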
Proof of Theorem 1. Let (µ̂λ, Ŵλ) be a limit point; then we have:
‖Ŵλ − W^(k)_λ‖²_F = ‖Sλ((P_Ω(Y − µ̂λ I_S) + P⊥_Ω(Ŵλ B′))B) − Sλ((P_Ω(Y − µ^(k−1)_λ I_S) + P⊥_Ω(W^(k−1)_λ B′))B)‖²_F (A.8)
≤ |Ω ∩ Ω_S|(µ̂λ − µ^(k−1)_λ)² + ‖P⊥_Ω((Ŵλ − W^(k−1)_λ)B′)‖²_F, (A.9)
here (A.8) uses Lemma 5 and (A.9) uses Lemma 2. Meanwhile,
|Ω ∩ Ω_S|(µ̂λ − µ^(k−1)_λ)² = (1/|Ω ∩ Ω_S|) ( Σ_{(i,j)∈Ω∩Ω_S} ((Ŵλ − W^(k−1)_λ)B′)_{i,j} )² ≤ ‖P_Ω((Ŵλ − W^(k−1)_λ)B′)‖²_F. (A.10)
Combining (A.9) and (A.10), we have
‖Ŵλ − W^(k)_λ‖_F ≤ ‖Ŵλ − W^(k−1)_λ‖_F.
Hence the sequence ‖Ŵλ −W (k)λ ‖F is monotonically decreasing and has a limit. But since there exists W (mk)λ converging to Ŵλ, the limit equals 0, which proves W (k) λ → Ŵλ, µ (k) λ → µ̂.
Therefore we have proved the sequence (µ(k)λ ,W (k) λ ) always converges.
Meanwhile, the first part of equation 4.1 and Lemma 5 in Mazumder et al. (2010) guarantees ~0 ∈ ∂W fλ(Ŵλ, µ̂). By taking derivative directly we have 0 = ∂µfλ(Ŵλ, µ̂). Therefore (Ŵλ, µ̂) is a stationary point for fλ(W,µ). Notice that the loss function fλ(W,µ) is a convex function with respect to (W,µ). Thus we have proved that the limit point (Ŵλ, µ̂) minimizes the function fλ(W,µ).
B DATA GENERATION
Let G be the grid of T equidistributed points and let B be the basis of K spline functions evaluated on grid G. We will simulate the N × T observation matrix Y with three parts
Y = WB + µIS + E ,
where W follows a mixture-Gaussian distribution with low rank structure, I_S is the treatment matrix with uniformly distributed starting times, and E represents the i.i.d. measurement error. The specific procedure is described below; a code sketch of the procedure follows the list.
1. Generating W given parameters κ ∈ (0, 1), r1, r2 ∈ R, s1, s2 ∈ R^K_{≥0}:
(a) Sample two K × K orthogonal matrices V1, V2 by singular-value-decomposing two random matrices.
(b) Sample two unit-length K-vectors ~γ1, ~γ2 by normalizing i.i.d. normal samples.
(c) Draw the vector ~t ∈ R^N from i.i.d. Bernoulli(κ) samples. Denote the all-one vector by ~1.
(d) Draw N × K matrices U1, U2 from i.i.d. standard normal random variables.
(e) Set W ← ~t · [ r1~γ1 + U1 diag[√s1]V1 ] + (~1 − ~t) · [ r2~γ2 + U2 diag[√s2]V2 ], where diag[s] is the diagonal matrix with diagonal elements s, “·” represents coordinate-wise multiplication, and we recycle ~t, ~1 − ~t, and ri~γi to match the dimensions.
2. Generating I_S given parameter ptr ∈ (0, 1):
(a) For each i = 1, . . . , N, sample Ti uniformly at random from {1, . . . , ⌊T/ptr⌋}.
(b) Set I_S ← (1{j ≥ Ti})_{1≤i≤N, 1≤j≤T}.
3. Given parameter ε ∈ R_{≥0}, draw E from i.i.d. Normal(0, ε²) samples.
4. Given parameter µ ∈ R, let Y0 ← WB + µI_S + E.
5. Given parameter ρ ∈ (0, 1), draw a 0-1 matrix IΩ from i.i.d. Bernoulli(ρ) samples. Let Ω denote the set of non-zero entries of IΩ, namely the set of observed data. Set Y ← (Yij)_{1≤i≤N, 1≤j≤T}, where Yij = (Y0)ij if (IΩ)ij = 1 and NA otherwise.
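A NumPy sketch of this procedure is given below; the spline basis B is assumed to be supplied as a K × T matrix, and the default values of µ, ρ, and the seed are placeholders rather than values from the paper.

```python
import numpy as np

def simulate_data(B, N=500, kappa=0.33, r1=1.0, r2=2.0, s1=None, s2=None,
                  p_tr=0.8, eps=0.5, mu=3.0, rho=0.4, seed=0):
    """Generate (Y, W, I_S, mask) following steps 1-5 above (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    K, T = B.shape
    if s1 is None:
        s1 = np.array([1, 0.4, 0.005] + [0.1 * np.exp(-k) for k in range(3, K)])
    if s2 is None:
        s2 = np.array([1.3, 0.2, 0.005] + [0.1 * np.exp(-k) for k in range(3, K)])
    # Step 1: mixture-Gaussian low-rank random effects W (N x K).
    V1 = np.linalg.svd(rng.standard_normal((K, K)))[0]
    V2 = np.linalg.svd(rng.standard_normal((K, K)))[0]
    g1 = rng.standard_normal(K); g1 /= np.linalg.norm(g1)
    g2 = rng.standard_normal(K); g2 /= np.linalg.norm(g2)
    t = rng.binomial(1, kappa, size=(N, 1))
    U1, U2 = rng.standard_normal((N, K)), rng.standard_normal((N, K))
    W = t * (r1 * g1 + U1 @ np.diag(np.sqrt(s1)) @ V1) \
        + (1 - t) * (r2 * g2 + U2 @ np.diag(np.sqrt(s2)) @ V2)
    # Step 2: treatment indicator I_S with uniformly random starting times.
    starts = rng.integers(1, int(T / p_tr) + 1, size=N)
    I_S = (np.arange(1, T + 1)[None, :] >= starts[:, None]).astype(float)
    # Steps 3-5: noise, full matrix, and Bernoulli(rho) observation mask.
    E = eps * rng.standard_normal((N, T))
    Y0 = W @ B + mu * I_S + E
    mask = rng.binomial(1, rho, size=(N, T))
    Y = np.where(mask == 1, Y0, np.nan)
    return Y, W, I_S, mask
```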
In the actual simulation, we fix the auxiliary parameters as follows:
K = 7, T = 51, N = 500, κ = 0.33, r1 = 1, r2 = 2,
s1 = [1, 0.4, 0.005, 0.1 exp(−3), ..., 0.1 exp(−K + 1)],
s2 = [1.3, 0.2, 0.005, 0.1 exp(−3), ..., 0.1 exp(−K + 1)],
ptr = 0.8, ε = 0.5.
The remaining parameters are treatment effect µ and observation rate ρ, which we allow to vary across different trials. | 1. What is the main contribution of the paper regarding modeling disease progression?
2. What are the limitations of the proposed framework, particularly in terms of practical applications?
3. Are there any concerns regarding the novelty of the proposed approach compared to prior works?
4. How does the reviewer assess the quality of the writing and clarity of the paper's content?
5. Are there any specific questions or areas where the reviewer would like further explanation or improvement in the paper? | Review | Review
This paper proposed a framework for modeling disease progression taking into account the treatment events.
Comments:
1. There is no significant theoretical innovation in this paper. Specifically, the learning approach for optimizing the parameter set W in the presence of \mu is not much different from the one proposed by (Kidzinski & Hastie, 2018) and the optimal value of \mu is simply the solution for L2 loss function. Besides, the narrative of section 2 shares too much similarity with the context of Kidzinski & Hastie, 2018.
2. The practical significance of the paper is rather limited considering that the framework can only take into account one specific treatment at a time when training a model. There are usually different treatments for a certain disease and it is hard to control enough patients to all take the same treatment. Sometimes, even the same patient could take multiple treatments at the same time or along his trajectory.
3. The paper is poorly written. There are too many grammatical errors. Some sentences are hard to understand. E.g. “Treatments of various types, however, rely on human decisions and to some extent, an exogenous variable to the development of disease. Thus they need to modeled differently.” should be “Treatments of various types, however, rely on human decisions and to some extent, are exogenous variables to the development of disease. Thus they need to be modeled differently.”. “we add a penalty term on the nuclear norm of W .”, Do you mean “we add a penalty term which is the nuclear norm of W .”? Cause I don’t see any additional penalty term in the nuclear norm compared with equation 2.2. “...where (I_S)_{i,j} = 1 if and only T_i ≥ s(i)...” should be “...where (I_S)_{i,j} = 1 if and only if T_i ≥ s(i)...”. “Finally we show that if the sequence {(\mu_{\lambda}^{(k)}, W_{\lambda}^{(k)} )}_k, it has to converge to a solution of (4.1).” Grammatically speaking, the if clause is not even a complete clause. |
ICLR | Title
Adaptive Stacked Graph Filter
Abstract
We study Graph Convolutional Networks (GCN) from the graph signal processing viewpoint by addressing a difference between learning graph filters with fully-connected weights versus trainable polynomial coefficients. We find that by stacking graph filters with learnable polynomial parameters, we can build a highly adaptive and robust vertex classification model. Our treatment here relaxes the low-frequency (or equivalently, high homophily) assumptions in existing vertex classification models, resulting in a more ubiquitous solution in terms of spectral properties. Empirically, by using only one hyper-parameter setting, our model achieves strong results on most benchmark datasets across the frequency spectrum.
1 INTRODUCTION
The semi-supervised vertex classification problem (Weston et al., 2012; Yang et al., 2016) in attributed graphs has become one of the most fundamental machine learning problems in recent years. This problem is often associated with its most popular recent solution, namely Graph Convolutional Networks (Kipf & Welling, 2017). Since the GCN proposal, there has been a vast amount of research to improve its scalability (Hamilton et al., 2017; Chen et al., 2018; Wu et al., 2019) as well as performance (Liao et al., 2019; Li et al., 2019; Pei et al., 2020).
Existing vertex classification models often (implicitly) assume that the graph has large vertex homophily (Pei et al., 2020), or equivalently, low-frequency property (Li et al., 2019; Wu et al., 2019); see Section 2.1 for graph frequency. However, this assumption is not true in general. For instance, let us take the Wisconsin dataset (Table 1), which captures a network of students, faculty, staff, courses, and projects. These categories naturally exhibit different frequency patterns1. Connections between people are often low-frequency, while connections between topics and projects are often midrange. This problem becomes apparent as GCN-like models show low accuracies on this dataset; for example, see (Pei et al., 2020; Chen et al., 2020b; Liu et al., 2020).
This paper aims at establishing a GCN model for the vertex classification problem (Definition 1) that does not rely on any frequency assumption. Such a model can be applied to ubiquitous datasets without any hyper-parameter tuning for the graph structure.
Contributions. By observing the relation between label frequency and the performance of existing GCN-like models, we propose to learn the graph filter coefficients directly rather than learning the MLP part of a GCN-like layer. We use filter stacking to implement a trainable graph filter, which is capable of learning any filter function. Our stacked filter construction with novel learnable filter parameters is easy to implement, sufficiently expressive, and less sensitive to the filters' degree. By using only one hyper-parameter setting, we show that our model is more adaptive than existing work on a wide range of benchmark datasets.
The rest of our paper is organized as follows. Section 2 introduces notations and analytical tools. Section 3 provides insights into the vertex classification problem and motivations to our model’s design. Section 4 presents an implementation of our model. Section 5 summarizes related literature with a focus on graph filters and state-of-the-art models. Section 6 compares our model and other existing methods empirically. We also provide additional experimental results in Appendix A.
1“Frequency” is an equivalent concept to “homophily” and will be explained in Section 2.
2 PRELIMINARIES
We consider a simple undirected graph G = (V,E), where V = {1, . . . , n} is a set of n vertices and E ⊆ V × V is a set of edges. A graph G is called an attributed graph, denoted by G(X), when it is associated with a vertex feature mapping X : V 7→ Rd, where d is the dimension of the features. We define the following vertex classification problem, also known in the literature as the semi-supervised vertex classification problem (Yang et al., 2016). Definition 1 (Vertex Classification Problem). We are given an attributed graph G(X), a set of training vertices Vtr ⊂ V , training labels Ytr : Vtr → C, and label set C. The task is to find a model h : V → C using the training data (Vtr, Ytr) that approximates the true labeling function Y : V → C.
Let A be the adjacency matrix of the graph G, i.e., Ai,j = 1 if (i, j) ∈ E and 0 otherwise. Let di = ∑ j Aij be the degree of vertex i ∈ V , and let D = diag(d1, . . . , dn) be the n × n diagonal matrix of degrees. Let L = D −A be the combinatorial graph Laplacian. Let L = D−1/2LD−1/2 be the symmetric normalized graph Laplacian. We mainly focus on the symmetric normalized graph Laplacian due to its interesting spectral properties: (1) its eigenvalues range from 0 to 2; and (2) the spectral properties can be compared between different graphs (Chung & Graham, 1997). In recent literature, the normalized adjacency matrix with added self-loops, Ã = I −L+ c, is often used as the propagation matrix, where c is some diagonal matrix.
2.1 GRAPH FREQUENCY
Graph signal processing (Shuman et al., 2012) extends “frequency” concepts in classical signal processing to graphs using the graph Laplacian. Let L = UΛU> be the eigendecomposition of the Laplacian, where U ∈ Rn×n is the orthogonal matrix consisting of the orthonormal eigenvectors of L and Λ is the diagonal matrix of eigenvalues. Then, we can regard each eigenvector uk as an “oscillation pattern” and its eigenvalue λk as the “frequency” of the oscillation. This intuition is supported by the Rayleigh quotient as follows.
r(L, x) := x>Lx / (x>x) = Σ_{u∼v} L_{u,v}(x(u) − x(v))² / Σ_{u∈V} x(u)². (1)
where ∑ u∼v sums over all unordered pairs for which u and v are adjacent, x(u) denotes the entry of vector x corresponding to vertex u, and Lu,v is the (u, v)-entry of L. From the definition we see that r(x) is non-negative and L is positive semi-definite. r(x) is also known as a variational characterization of eigenvalues of L (Horn & Johnson, 2012, Chapter 4), hence 0 ≤ r(x) ≤ 2 for any non-zero real vector x. We use the notation r(x) to denote the Rayleigh quotient when the normalized graph Laplacian is clear from context. The Rayleigh quotient r(x) measures how the data x is oscillating. Hence, in this study, we use the term “frequency” and the “Rayleigh quotient” interchangeably. By the definition, the eigenvector ui has the frequency of λi.
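As an illustrative aside (not part of the paper), the sketch below computes the Rayleigh quotient on a small, arbitrarily chosen graph and checks that an eigenvector's frequency equals its eigenvalue and that all frequencies lie in [0, 2].

```python
import numpy as np
import networkx as nx

G = nx.karate_club_graph()                       # arbitrary small graph for illustration
L = nx.normalized_laplacian_matrix(G).toarray()

def rayleigh(L, x):
    return float(x @ L @ x) / float(x @ x)

eigvals, eigvecs = np.linalg.eigh(L)
assert -1e-9 <= eigvals.min() and eigvals.max() <= 2.0 + 1e-9
k = 5
print(rayleigh(L, eigvecs[:, k]), eigvals[k])    # the two numbers coincide
```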
The labeling y of the vertices is low-frequency if the adjacent vertices are more likely to have the same label. This is a common assumption made by the spectral clustering algorithms (Shi & Malik, 2000; Ng et al., 2002; Shaham et al., 2018). Commonly used terms, homophily and heterophily, used in network science, correspond to low-frequency and high-frequency, respectively.
2.2 GRAPH FILTERING
In classical signal processing, a given signal is processed by filters in order to remove unwanted interference. Here, we first design a frequency response f(λ) of the filter, and then apply the filter to the signal in the sense that each frequency component x̂(λ) of the data is modulated as f(λ)x̂(λ). Graph signal processing extends this concept as follows. Same as in classical signal processing, we design a filter f(λ). Then, we represent a given graph signal x ∈ R|V | as a linear combination of the eigenvectors as x = ∑ i xiui. Then, we modulate each frequency component
by f(λ) as x̄ = Σ_i f(λi)xiui. An important fact is that this can be done without performing the eigendecomposition explicitly. Let f(L) be the matrix function induced from f(λ). Then, the filter is represented by f(L)x. As an extension of signal processing, graph signal processing deals with signals defined on graphs. In Definition 1, each column of the feature matrix X ∈ Rn×d is a “graph signal”. Let L = UΛU> be
the eigendecomposition where U ∈ Rn×n consists of orthonormal eigenvectors. Signal X is filtered by a function f of the eigenvalues as follows.
X̄ = Uf(Λ)U>X = f(L)X (2)
In general, different implementations of f(L) lead to different graph convolution models. For instance, GCN and SGC (Wu et al., 2019) are implemented by f(L) = (I−L+(D+ I)−1/2L(D+ I)−1/2)k, where the constant term stems from the fact that self-loops are added to vertices and k is the filter order. Generally, the underlying principle is to learn or construct the appropriate filter function f such that it transforms X into a more expressive representation. The filter in GCN is called a low-pass filter because it amplifies low-frequency components (Li et al., 2018; NT & Maehara, 2019).
3 SPECTRAL PROPERTIES OF FILTERS
Towards building a ubiquitous solution, we take an intermediate step to study the vertex classification problem. Similar to the unsupervised clustering problem, an (implicit) low-frequency assumption is commonly made. However, the semi-supervised vertex classification problem is more involved because vertex labels can have complicated non-local patterns. Table 1 shows three groups of datasets, each with different label frequency ranges. Notably, WebKB datasets (Wisconsin, Cornell, Texas) have mixed label frequencies; some labels have low frequencies while others have midrange frequencies. Therefore, in order to relax the frequency assumptions, we need to learn the filtering function f(λ) in a similar way as proposed by Defferrard et al. (2016).
The filtering function f(λ) is often approximated using a polynomial of the graph Laplacian as
f(L) ≈ poly(L) = Σ_{i=0}^{K} θi L^i. (3)
Because polynomials can uniformly approximate any real continuous function on a compact interval (see, e.g., (Brosowski & Deutsch, 1981)), such approximation scheme is well-justified.
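The following sketch (our illustration, with an arbitrary random graph and signal) checks that filtering through the eigendecomposition in equation 2 and applying the polynomial of equation 3 give the same result when f is a polynomial.

```python
import numpy as np
import networkx as nx

G = nx.erdos_renyi_graph(50, 0.1, seed=0)
L = nx.normalized_laplacian_matrix(G).toarray()
X = np.random.default_rng(0).standard_normal((50, 4))
theta = np.array([0.5, -1.0, 0.25])              # f(lam) = 0.5 - lam + 0.25 lam^2

lam, U = np.linalg.eigh(L)
X_spectral = U @ np.diag(np.polyval(theta[::-1], lam)) @ U.T @ X                   # equation 2
X_poly = sum(t * np.linalg.matrix_power(L, i) @ X for i, t in enumerate(theta))    # equation 3
print(np.allclose(X_spectral, X_poly))           # True
```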
Kipf & Welling (2017) derived their GCN formulation as follows. In their equation 5, they approximated a graph filter gθ by Chebyshev polynomials Tk as
gθ ∗ x ≈ Σ_{k=0}^{K} θk Tk(D^{−1/2}AD^{−1/2}) x. (4)
Then, they took the first two terms and shared the parameters as θ0 = −θ1 to obtain their equation 7: gθ ∗ x ≈ θ(I_N + D^{−1/2}AD^{−1/2}) x ≈ θ(2I_N − L) x. (5)
Finally, they extended a scalar θ to a matrix Θ to accommodate multiple feature dimensions as
Z = D̃−1/2ÃD̃−1/2XΘ (6)
Kipf & Welling (2017) claimed that the weight matrix Θ can learn different filters, and subsequent works (e.g., (Veličković et al., 2018; Spinelli et al., 2020; Chen et al., 2020b)) also learned filters by Θ. However, neither in theory nor in practice is this the case (Oono & Suzuki, 2020). As the construction suggests, a GCN layer only represents a filter of the form f(λ) ≈ 2 − λ. To properly learn different graph filters, we should learn the multiplying parameters θ0, θ1, . . . , θK in equation 3. In the next section, we propose a learning model which directly learns these multiplying parameters.
4 MODEL DESCRIPTION
The previous discussion provided several insights: (1) Vertex classification model’s frequency is decided by its filter, (2) a mechanism to match the frequencies of data is necessary, and (3) directly learning the polynomial filter’s coefficients is more desirable if we do not want to make any frequency assumption. Based on these observations, we implemented an adaptive Stacked Graph Filter (SGF) model. Figure 1 visually describes SGF.
Design decisions. The novelty of our model is the stacked filter, and we directly learn the filtering function via the filter coefficients α and β, which makes SGF work well universally without frequency hyper-parameters. The deep filter module consists of filters stacked on top of each other with skip-connections to implement the ideas in Proposition 2. Each filter layer has two learnable scalars, αℓ and βℓ, which control the shape of the linear filter (Figure 1). Two learnable linear layers Win and Wout with a non-linear activation serve as a non-linear classifier (NT & Maehara, 2019).
The input part of our architecture resembles APPNP (Klicpera et al., 2019) in the sense that the input signals (vertex features) are passed through a learning weight, then fed into filtering. The output part of our architecture resembles SGC (Wu et al., 2019) where we learn the vertex labels with filtered signals. This combination naturally takes advantages of both bottom-up (APPNP) and top-down (SGC) approaches. Compared to APPNP and SGC, besides the different in filter learning, our model performs filtering (propagation) on the latent representation and classifies the filtered representation, whereas APPNP propagates the predicted features and SGC classifies the filtered features.
From the spectral filtering viewpoint, our approach is most similar to ChebyNet (Defferrard et al., 2016) since both models aim to learn the filtering polynomial via its coefficients. The Chebyshev polynomial basis is often used in signal processing because it provides optimal interpolation points (Cheney, 1966; Hammond et al., 2011). However, since we are learning the coefficients of an unknown polynomial filter, all polynomial bases are equivalent. To demonstrate this point, we implement the Stacked Filter module (Figure 1) using ChebyNet's recursive formula in Section 6. We find that the Chebyshev polynomial basis approach has similar performance to the stacked approach, with one slight caveat on choosing λmax. We empirically show this problem by setting the scaling factor λmax = 1.5. Note that, as pointed out by Kipf & Welling (2017), such a problem can be mitigated simply by assuming λmax = 2 so all eigenvalues stay in [−1, 1].
Given an instance of Problem 1, let σ be an activation function (e.g., ReLU), Ã = I − (D + I)^{−1/2}L(D + I)^{−1/2} be the augmented adjacency matrix, and αℓ and βℓ be the filter parameters at layer ℓ; a K-layer SGF is given by:
SGF with input Ã: H_0 = σ(XWin); H_ℓ = αℓ ÃH_{ℓ−1} + βℓ H_0, ℓ = 1, . . . , K; ŷ = H_K Wout.
SGF with input L: H_0 = σ(XWin); H_ℓ = αℓ LH_{ℓ−1} + βℓ H_0, ℓ = 1, . . . , K; ŷ = H_K Wout.
SGF can be trained with conventional objectives (e.g., negative log-likelihood) to obtain a solution to Problem 1. We present our models using the augmented adjacency matrix to show its similarity to existing literature. However, as noted in Figure 1, we can replace à with L.
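For readers who prefer code, a minimal PyTorch sketch of the forward pass above is shown below; it assumes a dense propagation matrix (either Ã or L), a ReLU activation, and our own class and variable names rather than the authors' released implementation.

```python
import torch
import torch.nn as nn

class SGF(nn.Module):
    def __init__(self, in_dim, hidden, n_classes, K=16):
        super().__init__()
        self.W_in = nn.Linear(in_dim, hidden)
        self.W_out = nn.Linear(hidden, n_classes)
        self.alpha = nn.Parameter(0.5 * torch.ones(K))   # filter coefficients alpha_l
        self.beta = nn.Parameter(0.5 * torch.ones(K))    # skip-connection weights beta_l

    def forward(self, X, prop):
        H0 = torch.relu(self.W_in(X))                     # H_0 = sigma(X W_in)
        H = H0
        for a, b in zip(self.alpha, self.beta):
            H = a * (prop @ H) + b * H0                   # H_l = alpha_l P H_{l-1} + beta_l H_0
        return self.W_out(H)                              # y_hat = H_K W_out
```

Training then proceeds with a standard negative log-likelihood (cross-entropy) objective on the labeled vertices.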
The stacked filter is easy to implement. Moreover, it can learn any polynomial of order-K as follows. The closed-form of the stacked filter (Figure 1) is given by
β_K I + Σ_{i=1}^{K} (Π_{j=i}^{K} αj) β_{i−1} L^{K−i+1} (7)
where β0 = 1. Because each term of equation 7 contains a unique parameter, we obtain the following. Proposition 2. Any polynomial poly(L) of order K can be represented by the form equation 7.
Note that the same result holds if we replace L in equation 7 by Ã. In practice, we typically set the initial values of αi = 0.5 and update them via back-propagation. The learned αi is then likely to satisfy |αi| < 1, which yields a further property of the stacked filter: it prefers a low-degree filter, because the coefficients of the higher-order terms are higher-order in αi, which vanishes exponentially faster. This advantage is relevant when we compare with a trivial implementation of the polynomial filter that learns θi directly (this approach corresponds to horizontal stacking and ChebyNet (Defferrard et al., 2016)). In Appendix A.1, we compare these two implementations and confirm that the stacked filter is more robust in terms of filter degree than the trivial implementation.
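The closed form in equation 7 can also be checked numerically; the small script below (our sketch, with an arbitrary symmetric matrix standing in for L) unrolls the stacked recursion and compares it against the polynomial.

```python
import numpy as np

rng = np.random.default_rng(0)
K, n = 5, 8
L = rng.standard_normal((n, n)); L = (L + L.T) / 2        # any symmetric matrix works here
alpha = rng.uniform(-1, 1, K)                              # alpha_1, ..., alpha_K
beta = np.concatenate(([1.0], rng.uniform(-1, 1, K)))      # beta_0 = 1, then beta_1, ..., beta_K

H0 = rng.standard_normal((n, 3))
H = H0
for l in range(1, K + 1):                                  # stacked recursion
    H = alpha[l - 1] * (L @ H) + beta[l] * H0

poly = beta[K] * np.eye(n)                                 # closed form of equation 7
for i in range(1, K + 1):
    poly += np.prod(alpha[i - 1:K]) * beta[i - 1] * np.linalg.matrix_power(L, K - i + 1)
print(np.allclose(H, poly @ H0))                           # True
```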
5 RELATED WORK
GCN-like models cover a subset of an increasingly large literature on graph-structured data learning with graph neural networks (Gori et al., 2005; Scarselli et al., 2008). In general, vertex classification and graph classification are the two main benchmark problems. The principles for representation learning behind modern graph learning models can also be split into two views: graph propagation/diffusion and graph signal filtering. In this section, we briefly summarize recent advances in the vertex classification problem with a focus on propagation and filtering methods. For a more comprehensive view, readers can refer to review articles by Wu et al. (2020), Grohe (2020), and also recent workshops on graph representation learning2.
Feature Propagation. Feature propagation/message-passing and graph signal filtering are two equivalent views on graph representation learning (Defferrard et al., 2016; Kipf & Welling, 2017). From the viewpoint of feature propagation (Scarselli et al., 2008; Gilmer et al., 2017), researchers focus on novel ways to propagate and aggregate vertex features to their neighbors. Klicpera et al. (2019) proposed PPNP and APPNP models, which propagate the hidden representation of vertices. More importantly, they pioneered in the decoupling of the graph part (propagation) and the classifier part (prediction). Abu-El-Haija et al. (2019) also proposed to use skip-connections to distinguish between 1-hop and 2-hop neighbors. Zeng et al. (2020) later proposed GraphSAINT to aggregate features from random subgraphs to further improve their model’s expressivity. Pei et al. (2020) proposed a more involved geometric aggregation scheme named Geom-GCN to address weaknesses of GCN-like models. Most notably, they discussed the relation between network homophily and GCN’s performance, which is similar to label frequency r(Y ) in Table 1. Spinelli et al. (2020) introduced an adaptive model named AP-GCN, in which each vertex can learn the number of “hops” to propagate its feature via a trainable halting probability. Similar to our discussion in Section 3, they still use a fully-connected layer to implement the halting criteria, which controls feature propagation. AP-GCN’s architecture resembles horizontal stacking of graph filters where they learn coefficients θ directly. However their construction only allows for binary coefficients3. We later show that full horizontal stacking models (more expressive than AP-GCN) is less stable in terms of polynomial order than our approach (Appendix A.1). More recently, Liu et al. (2020) continued to address the difficulty of low homophily datasets and proposed a non-local aggregation based on 1D convolution and the attention mechanism, which has a “reconnecting” effect to increase homophily.
Graph Filtering. GCN-like models can also be viewed as graph signal filters where vertex feature vectors are signals and graph structure defines graph Fourier bases (Shuman et al., 2012; Defferrard et al., 2016; Li et al., 2018; Wu et al., 2019). This graph signal processing view addresses label efficiency (Li et al., 2019) and provides an analogue for understanding graph signal processing using
2See, e.g., https://grlplus.github.io/
3In the manuscript, they showed a construction using coefficients of the graph Laplacian, but the actual implementation used GCNConv (which is I − L + c) from pytorch-geometric.
traditional signal processing techniques. For example, the Lanczos algorithm is applied in learning graph filters by Liao et al. (2019). Bianchi et al. (2019) applies the ARMA filter to graph neural networks. Similar to (Klicpera et al., 2019), Wu et al. (2019) and NT & Maehara (2019) also follow the decoupling principle but in a reversed way (filter-then-classify). (Chen et al., 2020b) built a deep GCN named GCNII which holds the current best results for original splits of Cora, Citeseer, and Pubmed. They further showed that their model can estimate any filter function with an assumption that the fully-connected layers can learn filter coefficients (Chen et al., 2020b, Proof of Theorem 2).
6 EXPERIMENTAL RESULTS
We conduct experiments on benchmark and synthetic data to empirically evaluate our proposed models. First, we compare our models with several existing models in terms of average classification accuracy. Our experimental results show that our single model can perform well across all frequency ranges. Second, we plot the learned filter functions of our model to show that our model can learn the frequency range from the data — such visualization is difficult in existing works as the models’ filters are fixed before the training process.
6.1 DATASETS
We use three groups of datasets corresponding to three types of label frequency (low, midrange, high). The first group is low-frequency labeled data, which consists of citation networks: Cora, Citeseer, Pubmed (Sen et al., 2008); and co-purchase networks Amazon-Photo, Amazon-Computer (Shchur et al., 2018). The second group is network datasets with midrange label frequency (close to 1): Wisconsin, Cornell, Texas (Pei et al., 2020); and Chameleon (Rozemberczki et al., 2019). The last group consists of a synthetic dataset with high label frequency (close to 2). For the Bipartite dataset, we generate a connected bipartite graph on 2,000 vertices (1,000 on each part) with an edge density of 0.025. We then use the bipartite parts as binary vertex labels. Table 1 gives an overview of these datasets; see Appendix B.3 for more detail.
6.2 VERTEX CLASSIFICATION
We compare our method with some of the best models in the current literature. A two-layer MLP (our model without graph filters), GCN (Kipf & Welling, 2017), SGC (Wu et al., 2019), and APPNP (Klicpera et al., 2019) are used as baselines. Geom-GCN-(I,P,S) (Pei et al., 2020), JKNet+DE (Xu et al., 2018; Rong et al., 2019), and GCNII (Chen et al., 2020a) are currently among the best models. We implement the Chebyshev polynomial filter as in Defferrard et al. (2016) and set λmax = 1.5. The Literature sections of Tables 2 and 3 show the best results found in the literature, where these models are set at the recommended hyper-parameters and recommended variants for each dataset. In our experiments, we fix the graph-related hyper-parameters of each model and report the classification results. Our model contains 16 layers of stacked filters (Ã) and has 64 hidden dimensions. The learning rate is set at 0.01, weight decay is 5 × 10−4, and the dropout rate for linear
layers is 0.7. From the intuition that the filter should discover the required frequency pattern before the linear layers, we set the learning rate of the linear layers to one-fourth of the main learning rate. This experimental setup shows that SGF can adapt to the label frequency without setting specific hyper-parameters. In Table 2, SGF performs comparably with the current state of the art. On the other hand, in Table 3, SGF is not only better than the others in our experiments but also surpasses the best results in the literature. Note that we also use the exact same SGF model across all experiments.
Results in Table 3 also suggest that the adaptivity of the state-of-the-art model GCNII is sensitive to its parameters α and θ. In our experiment, we fix the θ parameter to 0.5 for all datasets, while in their manuscript the recommended values are around 1.5 depending on the dataset. With the recommended hyper-parameters, GCNII can achieve an average accuracy of 81.57% on the Wisconsin data. However, its performance drops by around 3 ∼ 10% with different θ values. This comparison highlights our model's ability to adapt to a wider range of datasets without any graph-related hyper-parameters.
The Chebyshev polynomial basis performs comparably to the stacking implementation, as we discussed in the previous sections. The value λmax = 1.5 is chosen because the typical maximum eigenvalue of real-world networks is often near this value. However, in practice, one should set λmax = 2, as discussed by Kipf & Welling (2017). Our experiments here intend to highlight the potential numerical instability due to the arbitrarily large leading coefficient of the Chebyshev polynomial basis. Since any polynomial basis is equivalent for vertex classification, numerically stable ones such as our implementation of SGF are certainly preferable in practice.
[Figure 2: learned filtering functions at initialization and after training for Cora, Wisconsin, and Bipartite with Ã and L inputs; the panels report test accuracies of 87.1, 89.0, 84.4, 79.2, and 100.]
6.3 FILTER VISUALIZATION
Another advantage of our model is the ability to visualize the filter function using an inversion of Proposition 2. The first row of Figure 2 shows the filtering functions at initialization and after training when input is the normalized augmented adjacency matrix. The second row shows the results when the input is the normalized Laplacian matrix. These two cases can be interpreted as starting with a low-pass filter (Ã) or starting with a high-pass filter (L). Figure 2 clearly shows that our method can learn the suitable filtering shapes from data regardless of the initialization. We expect the visualization here can be used as an effective exploratory tool and baseline method for future graph data.
6.4 ADAPTIVITY TO STRUCTURAL NOISE
Recently, Fox & Rajamanickam (2019) raised a problem regarding structural robustness of a graph neural network for graph classification. Zügner et al. (2018) posed a similar problem related to adversarial attack on graphs by perturbations of vertex feature or graph structure for the vertex classification setting (Dai et al., 2018; Bojchevski & Günnemann, 2019; Zügner & Günnemann, 2019). Here, we evaluate the robustness of the models against the structural noise, where we perturb a fraction of edges while preserving the degree sequence4. This structural noise collapses the relation between the features and the graph structure; hence, it makes the dataset to have the midrange frequency. This experimental setting shows that adaptive models like ours and GCNII are more robust to structural noise. In the worst-case scenario (90% edges are swapped), the adaptive models are at least as good as an MLP on vertex features. Figure 3 shows vertex classification results at each amount of edge perturbation: from 10% to 90%. APPNP with α = 0.2 and SGC with k = 2 have similar behavior under structural noise since these models give more weights to filtered features. On the other hand, APPNP with α = 0.8 is much more robust to structural noise as it depends more on the vertex features. This result suggests that adaptive models like ours and GCNII can be a good baseline for future graph adversarial attack studies (SGF’s advantage here is being much simpler).
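A sketch of this perturbation using NetworkX's double edge swaps is shown below; the mapping from the perturbed fraction to the number of swaps is our assumption, since the exact procedure is not spelled out here.

```python
import networkx as nx

def perturb_edges(G, fraction, seed=0):
    """Rewire edges while preserving the degree sequence (illustrative sketch)."""
    H = G.copy()
    nswap = max(1, int(fraction * H.number_of_edges()))
    nx.double_edge_swap(H, nswap=nswap, max_tries=100 * nswap, seed=seed)
    return H

G = nx.karate_club_graph()
G_noisy = perturb_edges(G, fraction=0.5)
assert sorted(d for _, d in G.degree()) == sorted(d for _, d in G_noisy.degree())
```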
6.5 DYNAMICS OF α’S AND β’S
In addition to Section 6.3, this section studies the dynamics of α and β during training for two representative datasets: Cora (low-frequency) and Wisconsin (mid-frequency). We record the values of α and β in SGF (Ã) every 20 training epochs and plot the result. Figure 4 shows the values of α and β in the 16 layers of SGF in top-to-bottom then left-to-right order (reshaped to 4 by 4 blocks). For the Cora dataset, we see that the over-smoothing effect is quickly mitigated as the α's automatically go to zero, with the exception of the last three layers. Similarly, the weights for skip-connections – the β's – quickly
4https://en.wikipedia.org/wiki/Degree-preserving_randomization
go to zero, with the exception of the last few layers. For the Wisconsin dataset, we can see that there is almost no filtering because all α's go to zero quickly and there is only one active skip-connection in the last layer. This single active skip-connection phenomenon is further confirmed by the experiment on MLP (Table 3), where MLP performed comparably to graph-based models. These results further explain the adaptivity of our model.
Additional Experiments. We provide several other experimental results in Appendix A. Section A.1 discusses the advantages of vertical stacking (SGF) versus a naïve horizontal stacking (learning θ in equation 3 directly). Section A.2 discusses the difficulty of estimating the frequency range (Rayleigh quotient) of vertex labels when the training set is small. Section A.3 provides additional experiments where α's and β's are initialized randomly. We show that our model is still adaptive even with uniform [−1, 1] initialization.
7 CONCLUSION
We show that simply learning the polynomial coefficients rather than the linear layers in the formulation of GCN can lead to a highly adaptive vertex classification model. Our experiments show that by using only one setting, SGF is comparable with all current state-of-the-art methods. Furthermore, SGF can also adapt to structural noise extremely well, promising a robust model in practice. Since our objective is to relax the frequency assumption, one could expect our model to perform weakly when the amount of training data is limited. Because the estimation of label frequency becomes difficult with a small number of data points (Appendix A.2), designing a learning model that is both adaptive and data-efficient is an exciting challenge. We believe an unbiased estimation (Proposition 4) with a more involved filter learning scheme is needed to address this problem in the future.
A EXTRA EXPERIMENTAL RESULTS
A.1 VERTICAL AND HORIZONTAL STACKING
Horizontal stacking is equivalent to learning the θ's in Equation 3 directly instead of stacking them vertically. In parallel to our work, Chien et al. (2020) explored the horizontal stacking idea with the pagerank matrix instead of the Laplacian matrix discussed here. We find that both vertical and horizontal stacking can learn a degree-K polynomial, but vertical stacking is naturally robust to the high-order terms. The horizontally stacked filter even loses its ability to adapt when learning order-64 polynomials. Table 4 shows a comparison between vertical stacking (SGF) and horizontal stacking. We also report the average number of iterations until early stopping and the average training time per epoch for the 64-filter case. All hyper-parameters are the same as in Tables 2 and 3. Figure 5 gives an example of 4-layer stacking to clarify the difference between horizontal and vertical.
A.2 RAYLEIGH QUOTIENT ESTIMATION FROM TRAINING DATA
To obtain an accurate classification solution, the frequency of the model’s output must be close to the frequency of the true labels as follows. Proposition 3. Let ŷ, y ∈ RN be unit length vectors whose signs of entries indicate predicted labels and true labels for vertices in graph G. Let L ∈ Rn×n be the symmetric normalized graph Laplacian of graph G. Suppose the graph frequency gap is at least δ: |r(ŷ) − r(y)| = |ŷ>Lŷ − y>Ly| ≥ δ. Then we have:
||ŷ − y||22 ≥ δ/4 (8)
This proposition explains why a model designed for a specific frequency range (e.g., GCN, SGC, GAT, APPNP, etc. for the low-frequency range) gives poor performance on the other frequency ranges. This proposition also leads us to a method for seeking a model (i.e., a filter) whose output matches the frequency of the true labels. Because the true label frequency is unknown in practice, we must estimate this quantity from the training data. Below, we discuss the difficulty of this estimation.
A naïve strategy for estimating the frequency is to compute the Rayleigh quotient on the training set. However, training features X and training labels yn often have a Rayleigh quotient close to 1 (as shown in Table 1 for r(X)), and Figure 7 (Appendix) shows the results when we compute the Rayleigh quotient of labels based on training data. This means that a naïve strategy yields undesirable results and we need a more involved process for estimating the frequency.
If we can assume that (1) Training vertices are sampled i.i.d., and (2) we know the number of vertices in the whole graph (N = |V |), we can obtain an unbiased estimation of the frequency of the true labels as follows.
Proposition 4. Let p be the proportion of vertices will be used as training data, q be the proportion of label y, N be the total number of vertices in the graph, Ln be the symmetric normalized Laplacian of the subgraph induced by the training vertices, and yn be the training labels. Assuming the training set is obtained by sampling the vertices i.i.d. with probability p, we can estimate the Rayleigh quotient of true labels by
E(r(yn)) = 4N−1p−2 ( y>n Lnyn − (1− p)y>n diag(Ln)yn ) (9)
Figure 6 shows unbiased estimation results using Proposition 4. Unfortunately, at a 10% training ratio, the observed variances are high across datasets; thus, we conclude that estimating the label frequency is generally difficult, especially for small training data.
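An illustrative implementation of the estimator in equation 9 is sketched below; the toy graph, the ±1 labels, and the sampling loop are ours and only serve to show how the quantities plug in.

```python
import numpy as np
import networkx as nx

def estimate_frequency(L_n, y_n, p, N):
    """Unbiased estimator of r(y) from equation 9 (illustrative)."""
    quad = y_n @ L_n @ y_n
    diag = y_n @ (np.diag(L_n) * y_n)
    return 4.0 / (N * p ** 2) * (quad - (1.0 - p) * diag)

G = nx.karate_club_graph()
y = np.array([1.0 if G.nodes[v]["club"] == "Mr. Hi" else -1.0 for v in G])
rng = np.random.default_rng(0)
p = 0.3
train = [v for v in G if rng.random() < p]       # vertices sampled i.i.d. with probability p
L_n = nx.normalized_laplacian_matrix(G.subgraph(train), nodelist=train).toarray()
print(estimate_frequency(L_n, y[train], p, G.number_of_nodes()))
```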
Thus far, we have shown that estimating the labels' frequency given limited training data is difficult even with an unbiased estimator. The high data efficiency of GCN-like models could be attributed to the fact that they already assume the labels are low frequency. Without such an assumption, we need more data in order to correctly estimate the frequency patterns.
A.3 RANDOM INITIALIZATION
While the main content of our paper showed the results for α and β initialized at 0.5, our results generally hold even if we initialize them randomly. Table 5 demonstrates this claim by showing our model’s performance with α and β initialized randomly. SGF (0.5) is the setting showed in the main part of our paper. SGF (U[-1,1]) initializes α and β using a uniform distribution in [-1,1].
Both Table 5 and Figure 8 show that our model behaves similarly to the fixed initialization at 0.5. It is worthwhile to mention that Figures 8a and 8b show SGF initialized randomly with the same seed but converging to two different solutions. The accuracies for these two particular cases are 89.7% for Cora and 92.0% for Wisconsin. This result and the filter visualization in Section 6.3 refute the argument that our model is also biased toward "low-frequency".
Cora Citeseer Pubmed Amazon-Photo Amazon-Computer
SGF (0.5) 88.97 ± 1.21 77.58 ± 1.11 90.12 ± 0.40 95.58 ± 0.55 92.15 ± 0.41
SGF (U[-1,1]) 88.47 ± 1.40 77.50 ± 1.88 88.23 ± 1.12 92.23 ± 0.53 87.15 ± 3.63

Wisconsin Cornell Texas Chameleon Bipartite
SGF (0.5) 87.06 ± 4.66 82.45 ± 6.19 80.56 ± 5.63 58.77 ± 1.90 100.0 ± 0.00
SGF (U[-1,1]) 88.66 ± 3.40 79.13 ± 1.60 79.67 ± 3.62 57.83 ± 2.47 100.0 ± 0.00
B EXPERIMENTAL DETAILS
B.1 SOURCE CODE
The source code is provided in src.zip. The instruction to install Python environment and running examples can be found in README.md. All results in this paper are obtained using a single machine with an RTX Titan GPU (24GB). We also confirm the results on CPU and another machine with a GeForce 1080Ti GPU (11GB). The provided source code works on both CPU and GPU.
B.2 EVALUATION PROCEDURE
For each dataset and each run, the following training procedure is implemented: Split, use train and validation vertices to estimate Rayleigh quotient; train the model with train set and choose the hyper-parameters using validation set, the hyper-parameters are dropout rate, learning rate, and number of layers; save the model every time best validation accuracy is reached; load the best model on validation set to evaluate on test set. Search set for each hyper-parameters:
• Dropout rate: {0.4, 0.5, 0.6, 0.7, 0.8}
• Weight decay: {1e−2, 1e−3, 5e−4, 1e−4, 5e−5}
• Learning rate: {0.001, 0.01, 0.02, 0.1}
• Number of layers: {4, 8, 16, 32, 64}
We use the hyper-parameters in bold text to report the result in the main part of our paper.
B.3 DATA SOURCE
Our datasets are obtained from the pytorch-geometric repository and the node-classification-dataset repository on GitHub. These datasets are “re-packed” with pickle and stored in src/data. The original URLs are:
• https://github.com/rusty1s/pytorch geometric • https://github.com/ryutamatsuno/node-classification-dataset
Citation Networks. Cora (ML), Citeseer, and Pubmed (Sen et al., 2008) are the set of three most commonly used networks for benchmarking vertex classification models. Vertices in these graphs represent papers, and each of them has a bag-of-word vector indicating the content of the paper. Edges are citations between papers. Originally these edges are directed, but they are converted to undirected edges in the trade-off between information loss and efficiency of methods.
WebKB. The WebKB dataset is a collection of university websites collected by CMU5. As we mentioned in previous sections, this dataset is special because it contains many different types of vertices that have mixed frequencies. We use the Wisconsin, Cornell, and Texas subsets of this dataset.
Wikipedia. The Chameleon dataset belongs to a collection of Wikipedia pages where edges are references and vertex labels indicate the internet traffic. Originally this dataset was created for the vertex regression task, but here, we follow Pei et al. (2020) to split the traffic amount into 5 categories.
The synthetic dataset is generated using the NetworkX library and labeled by its bipartite parts. The features are generated randomly with Gaussian N(0, 1).
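A sketch of this construction is given below; the feature dimension and the resampling loop used to enforce connectivity are our assumptions.

```python
import numpy as np
import networkx as nx

def make_bipartite(n_side=1000, density=0.025, d=16, seed=0):
    rng = np.random.default_rng(seed)
    for s in range(100):                                   # resample until connected
        G = nx.bipartite.random_graph(n_side, n_side, density, seed=seed + s)
        if nx.is_connected(G):
            break
    y = np.array([G.nodes[v]["bipartite"] for v in G])     # labels = bipartite parts
    X = rng.standard_normal((2 * n_side, d))               # Gaussian N(0, 1) features
    return G, X, y
```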
B.4 OTHER METHODS
Other methods are obtained from their respective repository on GitHub. The following are parameter settings for the “Our experiment” section of Table 2 and 3. Since each dataset has a different hyperparameter values, we follow the author’s recommendation for the hyper-parameter not mentioned here. We confirm the results with the recommended hyper-parameters and report them in the “Literature” sections.
5http://www.cs.cmu.edu/afs/cs.cmu.edu/project/theo-11/www/wwkb
• GCNII: θ = 0.5, α = 0.5.
• SGC: k = 2, lr = 0.01, wd = 5 × 10−4, dropout = 0.7.
• APPNP: K = 2, α = 0.2 and 0.8, lr = 0.02, wd = 5 × 10−4, dropout = 0.7.
• SGF-Cheby (our implementation): λmax = {1.5, 2.0}, K = 16 and other hyper-parameters
are the same as SGF. | 1. What is the main contribution of the paper regarding vertex classification?
2. What are the strengths of the proposed architecture, particularly in terms of numerical experiments?
3. Do you have any concerns about the limitation of the contribution, especially compared to prior works?
4. How do you assess the significance of the paper's focus on learning polynomial coefficients versus other approaches?
5. What are your suggestions for improving the paper, such as obtaining additional theoretical results or discussing the robustness of the architecture? | Review | Review
SUMMARY: This paper addresses the problem of vertex classification using a new Graph Convolutional Neural Network (GCNN) architecture. The linear operator within each of the layers of the GCNN is formed by a polynomial graph filter (i.e., a matrix polynomial of either the adjacency or the Laplacian matrix). The key novelty is the consideration of a stack architecture for which the polynomial filter is formed by the successive application (i.e., matrix multiplication) of filters of order one. Numerical experiments with real datasets showcase the merits, including superior classification performance, of the proposed architecture.
STRONG POINTS: The paper is timely and fits nicely the scope of the conference.
The numerical experiments are convincing, offering insights, and demonstrating some of the advantages of the proposed architecture.
The writing is clear, making the paper easy to follow.
WEAK POINTS: Except for the numerical experiments, I find that the contribution is quite limited. The postulation of GCNN architectures based on polynomial graph filters where the focus is on learning the polynomial coefficients has been studied thoroughly in the literature. In general, the paper does a good job listing relevant works in that area, although some are missing (e.g., Gama - Ribeiro). Some of the existing works look at ARMA structures and recursive order-one filter implementations. I acknowledge that the architecture considered in those papers may not be exactly the same as the one proposed by the authors in this paper. I also appreciate that the application at hand (vertex classification) was not the goal of many of those papers. However, I still feel that the contribution falls short, especially for a top conference such as ICLR. In any case, I am open to change my mind if the authors are able to strengthen their theoretical claims or address my concerns in their rebuttal.
I believe that the title should be changed. GCNN are not mentioned. The current title places the focus on Stacked Graph Filters. My first concern is that, within the linear paradigm (i.e., as polynomials of the adjacency/Laplacian matrix), this type of architectures have already been investigated. More importantly, the paper focuses on NN architectures, so I think it is reasonable to have that on the title.
OVERALL RECOMMENDATION: Marginal reject. The paper is topical, timely, and nicely written. It addresses a problem of interest and does so with contemporary machine learning tools. The results in real-world datasets are convincing. However, the contribution and novelty are limited, falling short of the average contribution at ICLR.
ADDITIONAL RECOMMENDATIONS: Being able to obtain additional theoretical results would make the contribution more solid.
Further elaborating on the robustness of the architecture is another change that would strengthen the manuscript.
ICLR | Title
Adaptive Stacked Graph Filter
Abstract
We study Graph Convolutional Networks (GCN) from the graph signal processing viewpoint by addressing a difference between learning graph filters with fully-connected weights versus trainable polynomial coefficients. We find that by stacking graph filters with learnable polynomial parameters, we can build a highly adaptive and robust vertex classification model. Our treatment here relaxes the low-frequency (or equivalently, high homophily) assumptions in existing vertex classification models, resulting in a more ubiquitous solution in terms of spectral properties. Empirically, by using only one hyper-parameter setting, our model achieves strong results on most benchmark datasets across the frequency spectrum.
1 INTRODUCTION
The semi-supervised vertex classification problem (Weston et al., 2012; Yang et al., 2016) in attributed graphs has become one of the most fundamental machine learning problems in recent years. This problem is often associated with its most popular recent solution, namely Graph Convolutional Networks (Kipf & Welling, 2017). Since the GCN proposal, there has been a vast amount of research to improve its scalability (Hamilton et al., 2017; Chen et al., 2018; Wu et al., 2019) as well as performance (Liao et al., 2019; Li et al., 2019; Pei et al., 2020).
Existing vertex classification models often (implicitly) assume that the graph has large vertex homophily (Pei et al., 2020), or equivalently, low-frequency property (Li et al., 2019; Wu et al., 2019); see Section 2.1 for graph frequency. However, this assumption is not true in general. For instance, let us take the Wisconsin dataset (Table 1), which captures a network of students, faculty, staff, courses, and projects. These categories naturally exhibit different frequency patterns1. Connections between people are often low-frequency, while connections between topics and projects are often midrange. This problem becomes apparent as GCN-like models show low accuracies on this dataset; for example, see (Pei et al., 2020; Chen et al., 2020b; Liu et al., 2020).
This paper aims at establishing a GCN model for the vertex classification problem (Definition 1) that does not rely on any frequency assumption. Such a model can be applied to ubiquitous datasets without any hyper-parameter tuning for the graph structure.
Contributions. By observing the relation between label frequency and the performance of existing GCN-like models, we propose to learn the graph filter coefficients directly rather than learning the MLP part of a GCN-like layer. We use filter stacking to implement a trainable graph filter, which is capable of learning any filter function. Our stacked filter construction with novel learnable filter parameters is easy to implement, sufficiently expressive, and less sensitive to the filters' degree. By using only one hyper-parameter setting, we show that our model is more adaptive than existing work on a wide range of benchmark datasets.
The rest of our paper is organized as follows. Section 2 introduces notations and analytical tools. Section 3 provides insights into the vertex classification problem and motivations to our model’s design. Section 4 presents an implementation of our model. Section 5 summarizes related literature with a focus on graph filters and state-of-the-art models. Section 6 compares our model and other existing methods empirically. We also provide additional experimental results in Appendix A.
1“Frequency” is an equivalent concept to “homophily” and will be explained in Section 2.
2 PRELIMINARIES
We consider a simple undirected graph G = (V,E), where V = {1, . . . , n} is a set of n vertices and E ⊆ V × V is a set of edges. A graph G is called an attributed graph, denoted by G(X), when it is associated with a vertex feature mapping X : V 7→ Rd, where d is the dimension of the features. We define the following vertex classification problem, also known in the literature as the semi-supervised vertex classification problem (Yang et al., 2016). Definition 1 (Vertex Classification Problem). We are given an attributed graph G(X), a set of training vertices Vtr ⊂ V , training labels Ytr : Vtr → C, and label set C. The task is to find a model h : V → C using the training data (Vtr, Ytr) that approximates the true labeling function Y : V → C.
Let A be the adjacency matrix of the graph G, i.e., Ai,j = 1 if (i, j) ∈ E and 0 otherwise. Let di = ∑ j Aij be the degree of vertex i ∈ V , and let D = diag(d1, . . . , dn) be the n × n diagonal matrix of degrees. Let L = D −A be the combinatorial graph Laplacian. Let L = D−1/2LD−1/2 be the symmetric normalized graph Laplacian. We mainly focus on the symmetric normalized graph Laplacian due to its interesting spectral properties: (1) its eigenvalues range from 0 to 2; and (2) the spectral properties can be compared between different graphs (Chung & Graham, 1997). In recent literature, the normalized adjacency matrix with added self-loops, Ã = I −L+ c, is often used as the propagation matrix, where c is some diagonal matrix.
2.1 GRAPH FREQUENCY
Graph signal processing (Shuman et al., 2012) extends “frequency” concepts from classical signal processing to graphs using the graph Laplacian. Let L = UΛU^⊤ be the eigendecomposition of the Laplacian, where U ∈ R^{n×n} is the orthogonal matrix consisting of the orthonormal eigenvectors of L and Λ is the diagonal matrix of eigenvalues. Then, we can regard each eigenvector u_k as an “oscillation pattern” and its eigenvalue λ_k as the “frequency” of the oscillation. This intuition is supported by the Rayleigh quotient as follows.
$$r(L, x) \triangleq \frac{x^\top L x}{x^\top x} = \frac{\sum_{u\sim v} L_{u,v}\,(x(u) - x(v))^2}{\sum_{u\in V} x(u)^2} \qquad (1)$$
where $\sum_{u\sim v}$ sums over all unordered pairs for which u and v are adjacent, x(u) denotes the entry of vector x corresponding to vertex u, and L_{u,v} is the (u, v)-entry of L. From the definition we see that r(x) is non-negative and L is positive semi-definite. r(x) is also known as a variational characterization of eigenvalues of L (Horn & Johnson, 2012, Chapter 4), hence 0 ≤ r(x) ≤ 2 for any non-zero real vector x. We use the notation r(x) to denote the Rayleigh quotient when the normalized graph Laplacian is clear from context. The Rayleigh quotient r(x) measures how the data x is oscillating. Hence, in this study, we use the terms “frequency” and “Rayleigh quotient” interchangeably. By definition, the eigenvector u_i has frequency λ_i.
The labeling y of the vertices is low-frequency if the adjacent vertices are more likely to have the same label. This is a common assumption made by the spectral clustering algorithms (Shi & Malik, 2000; Ng et al., 2002; Shaham et al., 2018). Commonly used terms, homophily and heterophily, used in network science, correspond to low-frequency and high-frequency, respectively.
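To make the Rayleigh quotient concrete, the following sketch computes r(x) for a homophilous label vector and for a random signal on a small graph; the karate-club graph, the ±1 labels, and the helper name rayleigh_quotient are our own illustrative choices, not part of the benchmarks used later.

```python
# Minimal sketch: computing the Rayleigh quotient (equation 1) of a graph signal.
import networkx as nx
import numpy as np

def rayleigh_quotient(G, x):
    """r(L, x) = x^T L x / x^T x with L the symmetric normalized Laplacian."""
    L = nx.normalized_laplacian_matrix(G, nodelist=sorted(G.nodes())).toarray()
    return float(x @ L @ x) / float(x @ x)

G = nx.karate_club_graph()
labels = np.array([1.0 if G.nodes[v]["club"] == "Mr. Hi" else -1.0
                   for v in sorted(G.nodes())])        # homophilous community labels
print(rayleigh_quotient(G, labels))                    # small value: low frequency
print(rayleigh_quotient(G, np.random.randn(len(G))))   # close to 1 for random noise
```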
2.2 GRAPH FILTERING
In classical signal processing, a given signal is processed by filters in order to remove unwanted interference. Here, we first design a frequency response f(λ) of the filter, and then apply the filter to the signal in the sense that each frequency component x̂(λ) of the data is modulated as f(λ)x̂(λ). Graph signal processing extends this concept as follows. As in classical signal processing, we design a filter f(λ). We then represent a given graph signal x ∈ R^{|V|} as a linear combination of the eigenvectors, x = Σ_i x_i u_i, and modulate each frequency component by f(λ) to obtain Σ_i f(λ_i) x_i u_i. An important fact is that this can be done without performing the eigendecomposition explicitly: let f(L) be the matrix function induced from f(λ); then the filtered signal is given by f(L)x. As an extension of classical signal processing, graph signal processing deals with signals defined on graphs. In Definition 1, each column of the feature matrix X ∈ R^{n×d} is a “graph signal”. Let L = UΛU^⊤ be the eigendecomposition, where U ∈ R^{n×n} consists of orthonormal eigenvectors. The signal X is filtered by a function f of the eigenvalues as follows.
$$\bar{X} = U f(\Lambda) U^\top X = f(L) X \qquad (2)$$
In general, different implementations of f(L) lead to different graph convolution models. For instance, GCN and SGC (Wu et al., 2019) are implemented by f(L) = (I−L+(D+ I)−1/2L(D+ I)−1/2)k, where the constant term stems from the fact that self-loops are added to vertices and k is the filter order. Generally, the underlying principle is to learn or construct the appropriate filter function f such that it transforms X into a more expressive representation. The filter in GCN is called a low-pass filter because it amplifies low-frequency components (Li et al., 2018; NT & Maehara, 2019).
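As a small, self-contained illustration of equation 2 (ours, not taken from the released code), the snippet below filters random signals on a path graph with the GCN-style response f(λ) = 2 − λ, once through the explicit eigendecomposition and once through the induced matrix function, and checks that the two agree.

```python
import numpy as np
import networkx as nx

G = nx.path_graph(8)
L = nx.normalized_laplacian_matrix(G).toarray()
X = np.random.randn(8, 3)                     # three random graph signals
f = lambda lam: 2.0 - lam                     # low-pass, GCN-style response

lam, U = np.linalg.eigh(L)                    # route 1: U f(Lambda) U^T X
X_bar_spectral = U @ np.diag(f(lam)) @ U.T @ X

X_bar_matrix = (2.0 * np.eye(8) - L) @ X      # route 2: f(L) X, no eigendecomposition

print(np.allclose(X_bar_spectral, X_bar_matrix))   # True
```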
3 SPECTRAL PROPERTIES OF FILTERS
Towards building a ubiquitous solution, we take an intermediate step to study the vertex classification problem. Similar to the unsupervised clustering problem, an (implicit) low-frequency assumption is commonly made. However, the semi-supervised vertex classification problem is more involved because vertex labels can have complicated non-local patterns. Table 1 shows three groups of datasets, each with different label frequency ranges. Notably, WebKB datasets (Wisconsin, Cornell, Texas) have mixed label frequencies; some labels have low frequencies while others have midrange frequencies. Therefore, in order to relax the frequency assumptions, we need to learn the filtering function f(λ) in a similar way as proposed by Defferrard et al. (2016).
The filtering function f(λ) is often approximated using a polynomial of the graph Laplacian as
$$f(L) \approx \mathrm{poly}(L) = \sum_{i=0}^{K} \theta_i L^i. \qquad (3)$$
Because polynomials can uniformly approximate any real continuous function on a compact interval (see, e.g., (Brosowski & Deutsch, 1981)), such approximation scheme is well-justified.
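As a quick illustration of why learning the θ_i suffices (a sketch with our own choice of target and degree), the coefficients of equation 3 can be fit to an arbitrary response on [0, 2] by least squares; here the target is an ideal high-pass filter, which a fixed low-pass GCN filter cannot represent.

```python
import numpy as np

K = 8
lam = np.linspace(0.0, 2.0, 200)
target = (lam > 1.0).astype(float)            # ideal high-pass response
V = np.vander(lam, K + 1, increasing=True)    # columns: lambda^0, ..., lambda^K
theta, *_ = np.linalg.lstsq(V, target, rcond=None)
print(np.max(np.abs(V @ theta - target)))     # worst-case approximation error on the grid
```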
Kipf & Welling (2017) derived their GCN formulation as follows. In their equation 5, they approximated a graph filter gθ by Chebyshev polynomials Tk as
$$g_\theta * x \approx \sum_{k=0}^{K} \theta_k T_k(D^{-1/2} A D^{-1/2})\, x. \qquad (4)$$
Then, they took the first two terms and shared the parameters as θ_0 = −θ_1 to obtain their equation 7:

$$g_\theta * x \approx \theta \left( I_N + D^{-1/2} A D^{-1/2} \right) x \approx \theta \left( 2 I_N - L \right) x \qquad (5)$$
Finally, they extended a scalar θ to a matrix Θ to accommodate multiple feature dimensions as
$$Z = \tilde{D}^{-1/2} \tilde{A} \tilde{D}^{-1/2} X \Theta \qquad (6)$$
Kipf & Welling (2017) claimed that the weight matrix Θ can learn different filters, and subsequent works (e.g., (Veličković et al., 2018; Spinelli et al., 2020; Chen et al., 2020b)) also learned filters by Θ. However, neither in theory nor in practice is this the case (Oono & Suzuki, 2020). As the construction suggests, a GCN layer only represents a filter of the form f(λ) ≈ 2 − λ. To properly learn different graph filters, we should learn the multiplying parameters θ_0, θ_1, . . . , θ_K in equation 3. In the next section, we propose a learning model which directly learns these multiplying parameters.
4 MODEL DESCRIPTION
The previous discussion provided several insights: (1) Vertex classification model’s frequency is decided by its filter, (2) a mechanism to match the frequencies of data is necessary, and (3) directly learning the polynomial filter’s coefficients is more desirable if we do not want to make any frequency assumption. Based on these observations, we implemented an adaptive Stacked Graph Filter (SGF) model. Figure 1 visually describes SGF.
Design decisions. The novelty of our model is the stacked filter, and we directly learn the filtering function through the filter coefficients α and β, which makes SGF work well universally without frequency hyper-parameters. The deep filter module consists of filters stacked on top of each other with skip-connections to implement the ideas in Proposition 2. Each filter layer has two learnable scalars, α_ℓ and β_ℓ, which control the shape of the linear filter (Figure 1). Two learnable linear layers W_in and W_out with a non-linear activation serve as a non-linear classifier (NT & Maehara, 2019).
The input part of our architecture resembles APPNP (Klicpera et al., 2019) in the sense that the input signals (vertex features) are passed through a learning weight, then fed into filtering. The output part of our architecture resembles SGC (Wu et al., 2019) where we learn the vertex labels with filtered signals. This combination naturally takes advantage of both bottom-up (APPNP) and top-down (SGC) approaches. Compared to APPNP and SGC, besides the difference in filter learning, our model performs filtering (propagation) on the latent representation and classifies the filtered representation, whereas APPNP propagates the predicted features and SGC classifies the filtered features.
From the spectral filtering viewpoint, our approach is most similar to ChebyNet (Defferrard et al., 2016) since both models aim to learn the filtering polynomial via its coefficients. The Chebyshev polynomial basis is often used in signal processing because it provides optimal interpolation points (Cheney, 1966; Hammond et al., 2011). However, since we are learning the coefficients of an unknown polynomial filter, all polynomial bases are equivalent. To demonstrate this point, we implement the Stacked Filter module (Figure 1) using ChebNet’s recursive formula in Section 6. We find that the Chebyshev polynomial basis approach has similar performance to the stacked approach, with one slight caveat on choosing λ_max. We empirically show this problem by setting the scaling factor λ_max = 1.5. Note that, as pointed out by Kipf & Welling (2017), this problem can be mitigated simply by assuming λ_max = 2 so that all eigenvalues stay in [−1, 1].
Given an instance of Problem 1, let σ be an activation function (e.g., ReLU), Ã = I − (D + I)^{-1/2} L (D + I)^{-1/2} be the augmented adjacency matrix, and α_ℓ and β_ℓ be the filter parameters at layer ℓ; a K-layer SGF is given by:
SGF with input Ã:   H_0 = σ(X W_in);   H_ℓ = α_ℓ Ã H_{ℓ−1} + β_ℓ H_0, ℓ = 1, . . . , K;   ŷ = H_K W_out.
SGF with input L:   H_0 = σ(X W_in);   H_ℓ = α_ℓ L H_{ℓ−1} + β_ℓ H_0, ℓ = 1, . . . , K;   ŷ = H_K W_out.
SGF can be trained with conventional objectives (e.g., negative log-likelihood) to obtain a solution to Problem 1. We present our models using the augmented adjacency matrix to show its similarity to existing literature. However, as noted in Figure 1, we can replace à with L.
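A minimal PyTorch sketch of the K-layer SGF above is given below. The class name, the dense propagation matrix P, and the toy usage are our own simplifications for exposition (the released code uses sparse operations and the training pipeline of Appendix B), so this should be read as one possible rendering of the equations rather than the authors' exact implementation.

```python
import torch
import torch.nn as nn

class SGF(nn.Module):
    """Sketch of SGF: H_0 = sigma(X W_in), H_l = alpha_l P H_{l-1} + beta_l H_0,
    y_hat = H_K W_out, where P is either A_tilde or the normalized Laplacian L."""
    def __init__(self, in_dim, hidden_dim, n_classes, n_layers=16):
        super().__init__()
        self.w_in = nn.Linear(in_dim, hidden_dim)
        self.w_out = nn.Linear(hidden_dim, n_classes)
        self.alpha = nn.Parameter(0.5 * torch.ones(n_layers))  # filter parameters
        self.beta = nn.Parameter(0.5 * torch.ones(n_layers))   # skip-connection weights

    def forward(self, X, P):
        h0 = torch.relu(self.w_in(X))
        h = h0
        for a, b in zip(self.alpha, self.beta):
            h = a * (P @ h) + b * h0        # one stacked filter layer
        return self.w_out(h)                # train with cross-entropy / NLL

# Toy usage with a dense stand-in propagation matrix:
n, d, c = 100, 16, 3
logits = SGF(d, 64, c)(torch.randn(n, d), torch.eye(n))
```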
The stacked filter is easy to implement. Moreover, it can learn any polynomial of order-K as follows. The closed-form of the stacked filter (Figure 1) is given by
$$\beta_K I + \sum_{i=1}^{K} \Big( \prod_{j=i}^{K} \alpha_j \Big)\, \beta_{i-1}\, L^{K-i+1} \qquad (7)$$
where β_0 = 1. Because each term of equation 7 contains a unique parameter, we obtain the following. Proposition 2. Any polynomial poly(L) of order K can be represented in the form of equation 7.
Note that the same result holds if we replace L in equation 7 by Ã. In practice, we typically set the initial values of α_i = 0.5 and update them via back-propagation. The learned α_i are then likely to satisfy |α_i| < 1, which yields a further property of the stacked filter: it prefers a low-degree filter, because the coefficients of the higher-order terms are products of more α_i factors and hence vanish exponentially faster. This advantage is relevant when we compare with a trivial implementation of the polynomial filter that learns θ_i directly (this approach corresponds to horizontal stacking and ChebyNet (Defferrard et al., 2016)). In Appendix A.1, we compare these two implementations and confirm that the stacked filter is more robust in terms of filter degree than the trivial implementation.
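The following short numerical check (ours) unrolls the recursion H_ℓ = α_ℓ L H_{ℓ−1} + β_ℓ H_0 with H_0 = I and compares it against the closed form of equation 7; it also prints the induced coefficients, whose higher-order entries carry more α factors and therefore shrink when |α_i| < 1.

```python
import numpy as np

K = 4
rng = np.random.default_rng(0)
alpha = rng.uniform(0.2, 0.8, K)
beta = rng.uniform(0.2, 0.8, K)
L = rng.standard_normal((5, 5)); L = (L + L.T) / 2     # any symmetric matrix

H, H0 = np.eye(5), np.eye(5)                           # unroll the stacked recursion
for a, b in zip(alpha, beta):
    H = a * L @ H + b * H0

beta_full = np.concatenate(([1.0], beta))              # beta_0 = 1
F = beta[-1] * np.eye(5)                               # closed form of equation 7
for i in range(1, K + 1):
    F += np.prod(alpha[i - 1:]) * beta_full[i - 1] * np.linalg.matrix_power(L, K - i + 1)

print(np.allclose(H, F))                               # True
print([np.prod(alpha[i - 1:]) * beta_full[i - 1] for i in range(1, K + 1)])
# coefficients of L^K, ..., L^1: the leading ones contain more alpha factors
```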
5 RELATED WORK
GCN-like models cover a subset of an increasingly large literature on graph-structured data learning with graph neural networks (Gori et al., 2005; Scarselli et al., 2008). In general, vertex classification and graph classification are the two main benchmark problems. The principles for representation learning behind modern graph learning models can also be split into two views: graph propagation/diffusion and graph signal filtering. In this section, we briefly summarize recent advances in the vertex classification problem with a focus on propagation and filtering methods. For a more comprehensive view, readers can refer to review articles by Wu et al. (2020), Grohe (2020), and also recent workshops on graph representation learning2.
Feature Propagation. Feature propagation/message-passing and graph signal filtering are two equivalent views on graph representation learning (Defferrard et al., 2016; Kipf & Welling, 2017). From the viewpoint of feature propagation (Scarselli et al., 2008; Gilmer et al., 2017), researchers focus on novel ways to propagate and aggregate vertex features to their neighbors. Klicpera et al. (2019) proposed the PPNP and APPNP models, which propagate the hidden representation of vertices. More importantly, they pioneered the decoupling of the graph part (propagation) and the classifier part (prediction). Abu-El-Haija et al. (2019) also proposed to use skip-connections to distinguish between 1-hop and 2-hop neighbors. Zeng et al. (2020) later proposed GraphSAINT to aggregate features from random subgraphs to further improve their model’s expressivity. Pei et al. (2020) proposed a more involved geometric aggregation scheme named Geom-GCN to address weaknesses of GCN-like models. Most notably, they discussed the relation between network homophily and GCN’s performance, which is similar to the label frequency r(Y) in Table 1. Spinelli et al. (2020) introduced an adaptive model named AP-GCN, in which each vertex can learn the number of “hops” to propagate its feature via a trainable halting probability. Similar to our discussion in Section 3, they still use a fully-connected layer to implement the halting criterion, which controls feature propagation. AP-GCN’s architecture resembles horizontal stacking of graph filters where the coefficients θ are learned directly. However, their construction only allows for binary coefficients3. We later show that full horizontal stacking models (more expressive than AP-GCN) are less stable in terms of polynomial order than our approach (Appendix A.1). More recently, Liu et al. (2020) continued to address the difficulty of low-homophily datasets and proposed a non-local aggregation based on 1D convolution and the attention mechanism, which has a “reconnecting” effect to increase homophily.
Graph Filtering. GCN-like models can also be viewed as graph signal filters where vertex feature vectors are signals and graph structure defines graph Fourier bases (Shuman et al., 2012; Defferrard et al., 2016; Li et al., 2018; Wu et al., 2019). This graph signal processing view addresses label efficiency (Li et al., 2019) and provides an analogue for understanding graph signal processing using
2 See, e.g., https://grlplus.github.io/
3 In the manuscript, they showed a construction using coefficients of the graph Laplacian, but the actual implementation used GCNConv (which is I − L + c) from pytorch-geometric.
traditional signal processing techniques. For example, the Lanczos algorithm is applied in learning graph filters by Liao et al. (2019). Bianchi et al. (2019) applies the ARMA filter to graph neural networks. Similar to (Klicpera et al., 2019), Wu et al. (2019) and NT & Maehara (2019) also follow the decoupling principle but in a reversed way (filter-then-classify). (Chen et al., 2020b) built a deep GCN named GCNII which holds the current best results for original splits of Cora, Citeseer, and Pubmed. They further showed that their model can estimate any filter function with an assumption that the fully-connected layers can learn filter coefficients (Chen et al., 2020b, Proof of Theorem 2).
6 EXPERIMENTAL RESULTS
We conduct experiments on benchmark and synthetic data to empirically evaluate our proposed models. First, we compare our models with several existing models in terms of average classification accuracy. Our experimental results show that our single model can perform well across all frequency ranges. Second, we plot the learned filter functions of our model to show that our model can learn the frequency range from the data — such visualization is difficult in existing works as the models’ filters are fixed before the training process.
6.1 DATASETS
We use three groups of datasets corresponding to three types of label frequency (low, midrange, high). The first group is low-frequency labeled data, which consists of citation networks: Cora, Citeseer, Pubmed (Sen et al., 2008); and co-purchase networks: Amazon-Photo, Amazon-Computer (Shchur et al., 2018). The second group is network datasets with midrange label frequency (close to 1): Wisconsin, Cornell, Texas (Pei et al., 2020); and Chameleon (Rozemberczki et al., 2019). The last group consists of a synthetic dataset with high label frequency (close to 2). For the Bipartite dataset, we generate a connected bipartite graph on 2,000 vertices (1,000 on each part) with an edge density of 0.025. We then use the bipartite parts as binary vertex labels. Table 1 gives an overview of these datasets; see Appendix B.3 for more detail.
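For reproducibility, a bipartite benchmark of this kind can be generated along the following lines (a sketch with our own seed and parameter names; the exact generator used for the paper's Bipartite dataset is in the released code).

```python
import networkx as nx
import numpy as np

n_per_side, p = 1000, 0.025
G = nx.bipartite.random_graph(n_per_side, n_per_side, p, seed=0)
assert nx.is_connected(G)                        # re-sample with a new seed if not
y = np.array([G.nodes[v]["bipartite"] for v in G.nodes()])  # bipartite parts as labels
X = np.random.randn(2 * n_per_side, 16)          # Gaussian N(0, 1) vertex features
```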
6.2 VERTEX CLASSIFICATION
We compare our method with some of the best models in the current literature. A two-layer MLP (our model without graph filters), GCN (Kipf & Welling, 2017), SGC (Wu et al., 2019), and APPNP (Klicpera et al., 2019) are used as baselines. Geom-GCN-(I,P,S) (Pei et al., 2020), JKNet+DE (Xu et al., 2018; Rong et al., 2019), and GCNII (Chen et al., 2020a) are currently among the best models. We implement the Chebyshev polynomial filter as in (Defferrard et al., 2016) and set λ_max = 1.5. The Literature section of Tables 2 and 3 shows the best results found in the literature, where these models are set at the recommended hyper-parameters and recommended variants for each dataset. In our experiment, we fix the graph-related hyper-parameters of each model and report the classification results. Our model contains 16 layers of stacked filters (Ã) and has 64 hidden dimensions. The learning rate is set at 0.01, weight decay is 5 × 10−4, and the dropout rate for linear layers is 0.7. From an intuition that the filter should discover the required frequency pattern before the linear layers, we set the learning rate of the linear layers to be one-fourth of the main learning rate. This experimental setup shows that SGF can adapt to the label frequency without setting specific hyper-parameters. In Table 2, SGF performs comparably with the current state-of-the-art. On the other hand, in Table 3, SGF is not only better than others in our experiments but also surpasses the best results in the literature. Note that we also use the exact same SGF model across all experiments.
Results in Table 3 also suggest that the adaptivity of the state-of-the-art model GCNII is sensitive to its parameters α and θ. In our experiment, we fix the θ parameter to 0.5 for all datasets, while in their manuscript the recommended values are around 1.5 depending on the dataset. With the recommended hyper-parameters, GCNII can achieve an average accuracy of 81.57% on the Wisconsin data. However, its performance drops by around 3 ∼ 10% with different θ values. This comparison highlights our model’s ability to adapt to a wider range of datasets without any graph-related hyper-parameters.
The Chebyshev polynomial basis performs comparably to the stacking implementation, as discussed in the previous sections. The value λ_max = 1.5 is chosen because the maximum eigenvalues of real-world networks are often close to this value. In practice, however, one should set λ_max = 2, as discussed by Kipf & Welling (2017). Our experiments here intend to highlight the potential numerical instability caused by the arbitrarily large leading coefficient of the Chebyshev polynomial basis. Since any polynomial basis is equivalent for vertex classification, numerically stable constructions such as our SGF implementation are preferable in practice.

[Figure 2: Learned filtering functions on Cora, Wisconsin, and Bipartite, shown at initialization and after training; the top row uses input Ã and the bottom row uses input L.]
6.3 FILTER VISUALIZATION
Another advantage of our model is the ability to visualize the filter function using an inversion of Proposition 2. The first row of Figure 2 shows the filtering functions at initialization and after training when input is the normalized augmented adjacency matrix. The second row shows the results when the input is the normalized Laplacian matrix. These two cases can be interpreted as starting with a low-pass filter (Ã) or starting with a high-pass filter (L). Figure 2 clearly shows that our method can learn the suitable filtering shapes from data regardless of the initialization. We expect the visualization here can be used as an effective exploratory tool and baseline method for future graph data.
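Concretely, the plotted responses can be produced by expanding the learned α and β into the polynomial of equation 7 and evaluating it over λ ∈ [0, 2]; the helper below is our own sketch for the L-input variant (the Ã-input variant is analogous after substituting the spectrum of Ã).

```python
import numpy as np

def filter_response(alpha, beta, lam):
    """Evaluate the stacked filter of equation 7 as a scalar function of lambda."""
    K = len(alpha)
    beta_full = np.concatenate(([1.0], beta))    # beta_0 = 1
    out = beta[-1] * np.ones_like(lam)
    for i in range(1, K + 1):
        out += np.prod(alpha[i - 1:]) * beta_full[i - 1] * lam ** (K - i + 1)
    return out

lam = np.linspace(0.0, 2.0, 100)
resp = filter_response(np.full(16, 0.5), np.full(16, 0.5), lam)   # shape at initialization
```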
6.4 ADAPTIVITY TO STRUCTURAL NOISE
Recently, Fox & Rajamanickam (2019) raised a problem regarding the structural robustness of graph neural networks for graph classification. Zügner et al. (2018) posed a similar problem related to adversarial attacks on graphs via perturbations of vertex features or graph structure in the vertex classification setting (Dai et al., 2018; Bojchevski & Günnemann, 2019; Zügner & Günnemann, 2019). Here, we evaluate the robustness of the models against structural noise, where we perturb a fraction of edges while preserving the degree sequence4. This structural noise collapses the relation between the features and the graph structure; hence, it makes the dataset have midrange frequency. This experimental setting shows that adaptive models like ours and GCNII are more robust to structural noise. In the worst-case scenario (90% of edges are swapped), the adaptive models are at least as good as an MLP on vertex features. Figure 3 shows vertex classification results at each amount of edge perturbation: from 10% to 90%. APPNP with α = 0.2 and SGC with k = 2 have similar behavior under structural noise since these models give more weight to filtered features. On the other hand, APPNP with α = 0.8 is much more robust to structural noise as it depends more on the vertex features. This result suggests that adaptive models like ours and GCNII can be a good baseline for future graph adversarial attack studies (SGF’s advantage here is being much simpler).
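The perturbation can be reproduced with standard degree-preserving randomization (footnote 4); below is a sketch (ours) using NetworkX's double_edge_swap, where the graph, fraction, and seed are illustrative.

```python
import networkx as nx

def perturb_edges(G, fraction, seed=0):
    """Rewire roughly `fraction` of the edges while preserving every vertex degree."""
    H = G.copy()
    n_swaps = int(fraction * H.number_of_edges() / 2)   # each swap rewires two edges
    nx.double_edge_swap(H, nswap=n_swaps, max_tries=100 * n_swaps, seed=seed)
    return H

noisy = perturb_edges(nx.karate_club_graph(), fraction=0.5)
```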
6.5 DYNAMICS OF α’S AND β’S
In addition to Section 6.3, this section studies the dynamics of α and β during training for two representative datasets: Cora (low-frequency) and Wisconsin (midrange-frequency). We record the values of α and β in SGF (Ã) every 20 training epochs and plot the result. Figure 4 shows the values of α and β in the 16 layers of SGF in top-to-bottom then left-to-right order (reshaped to 4-by-4 blocks). For the Cora dataset, we see that the over-smoothing effect is quickly mitigated as the α’s automatically go to zero, with the exception of the last three layers. Similarly, the weights for skip-connections – the β’s – quickly go to zero, with the exception of the last few layers. For the Wisconsin dataset, we can see that there is almost no filtering because all α’s go to zero quickly and there is only one active skip-connection in the last layer. This single-active-skip-connection phenomenon is further confirmed by the experiment on MLP (Table 3), where MLP performed comparably to graph-based models. These results further explain our model’s ability to adapt.

4 https://en.wikipedia.org/wiki/Degree-preserving_randomization
Additional Experiments. We provide several other experimental results in Appendix A. Section A.1 discusses the advantages of vertical stacking (SGF) versus a naïve horizontal stacking (learning θ in equation 3 directly). Section A.2 discusses the difficulty of estimating the frequency range (Rayleigh quotient) of vertex labels when the training set is small. Section A.3 provides additional experiments where α’s and β’s are initialized randomly. We show that our model is still adaptive even with uniform [−1, 1] initialization.
7 CONCLUSION
We show that simply learning the polynomial coefficients rather than the linear layers in the formulation of GCN can lead to a highly adaptive vertex classification model. Our experiment shows that by using only one setting, SGF is comparable with all current state-of-the-art methods. Furthermore, SGF can also adapt to structural noise extremely well, promising a robust model in practice. Since our objective is to relax the frequency assumption, one could expect our model to perform weakly when the amount of training data is limited. Because the estimation of label frequency becomes difficult with a small number of data points (Appendix A.2), designing a learning model that is both adaptive and data-efficient is an exciting challenge. We believe an unbiased estimation (Proposition 4) with a more involved filter learning scheme is needed to address this problem in the future.
A EXTRA EXPERIMENTAL RESULTS
A.1 VERTICAL AND HORIZONTAL STACKING
Horizontal stacking is equivalent to learning the θ’s in Equation 3 directly instead of stacking them vertically. In parallel to our work, Chien et al. (2020) explored the horizontal stacking idea with the pagerank matrix instead of the Laplacian matrix discussed here. We find that both vertical and horizontal stacking can learn a degree-K polynomial, but vertical stacking is naturally robust to the high-order terms. The horizontally stacked filter even loses its ability to adapt when learning order-64 polynomials. Table 4 shows a comparison between vertical stacking (SGF) and horizontal stacking. We also report the average number of iterations until early stopping and the average training time per epoch for the 64-filter case. All hyper-parameters are the same as in Tables 2 and 3. Figure 5 gives an example of 4-layer stacking to clarify the difference between horizontal and vertical; a minimal code sketch of the horizontal variant follows.
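For concreteness, a minimal sketch (ours) of the horizontal variant: the coefficients θ_0, …, θ_K of equation 3 are learned directly and the powers L^i H_0 are accumulated iteratively, with no damping of the high-order terms.

```python
import torch
import torch.nn as nn

class HorizontalFilter(nn.Module):
    """poly(L) H_0 = sum_{i=0}^{K} theta_i L^i H_0 with the theta_i learned directly."""
    def __init__(self, K):
        super().__init__()
        self.theta = nn.Parameter(torch.ones(K + 1) / (K + 1))

    def forward(self, H0, L):
        out = self.theta[0] * H0
        h = H0
        for i in range(1, len(self.theta)):
            h = L @ h                    # L^i H_0, built iteratively
            out = out + self.theta[i] * h
        return out                       # unlike equation 7, high-order terms are not damped
```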
A.2 RAYLEIGH QUOTIENT ESTIMATION FROM TRAINING DATA
To obtain an accurate classification solution, the frequency of the model’s output must be close to the frequency of the true labels as follows. Proposition 3. Let ŷ, y ∈ RN be unit length vectors whose signs of entries indicate predicted labels and true labels for vertices in graph G. Let L ∈ Rn×n be the symmetric normalized graph Laplacian of graph G. Suppose the graph frequency gap is at least δ: |r(ŷ) − r(y)| = |ŷ>Lŷ − y>Ly| ≥ δ. Then we have:
$$\|\hat{y} - y\|_2^2 \geq \delta / 4 \qquad (8)$$
This proposition explains why a model designed for a specific frequency range (e.g., GCN, SGC, GAT, APPNP, etc. for the low-frequency range) gives poor performance on the other frequency ranges. This proposition also leads us to a method for seeking a model (i.e., a filter) whose output matches the frequency of the true labels. Because the true label frequency is unknown in practice, we must estimate this quantity from the training data. Below, we discuss the difficulty of this estimation.
A naïve strategy of estimating the frequency is to compute the Rayleigh quotient on the training set. However, training features X and training labels y_n often have a Rayleigh quotient close to 1 (as shown in Table 1 for r(X)), and Figure 7 (Appendix) shows the results when we compute the Rayleigh quotient of labels based on training data. This means that a naïve strategy yields undesirable results and we need a more involved process for estimating the frequency.
If we can assume that (1) Training vertices are sampled i.i.d., and (2) we know the number of vertices in the whole graph (N = |V |), we can obtain an unbiased estimation of the frequency of the true labels as follows.
Proposition 4. Let p be the proportion of vertices that will be used as training data, q be the proportion of label y, N be the total number of vertices in the graph, L_n be the symmetric normalized Laplacian of the subgraph induced by the training vertices, and y_n be the training labels. Assuming the training set is obtained by sampling the vertices i.i.d. with probability p, we can estimate the Rayleigh quotient of the true labels by
$$\mathbb{E}(r(y_n)) = 4 N^{-1} p^{-2} \left( y_n^\top L_n y_n - (1-p)\, y_n^\top \mathrm{diag}(L_n)\, y_n \right) \qquad (9)$$
Figure 6 shows an unbiased estimation results using Proposition 4. Unfortunately, at 10% training ratio, the observed variances are high across datasets; thus, we conclude that estimating the label frequency is generally difficult, especially for small training data.
Thus far, we have shown that estimating the labels’ frequency given limited training data is difficult even with an unbiased estimator. The high data efficiency of GCN-like models could be attributed to the fact that they already assume the labels are low-frequency. Without such an assumption, we need more data in order to correctly estimate the frequency patterns.
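For completeness, a direct implementation of the right-hand side of equation 9 is sketched below (ours); it assumes y_train is ordered consistently with train_nodes and that the training vertices were sampled i.i.d. with probability p.

```python
import networkx as nx
import numpy as np

def estimate_label_frequency(G, train_nodes, y_train, p):
    """Estimate r(y) on the full graph from the training subgraph (equation 9)."""
    N = G.number_of_nodes()
    train_nodes = list(train_nodes)
    Ln = nx.normalized_laplacian_matrix(G.subgraph(train_nodes),
                                        nodelist=train_nodes).toarray()
    yn = np.asarray(y_train, dtype=float)
    quad = yn @ Ln @ yn                              # y_n^T L_n y_n
    diag = np.sum(np.diag(Ln) * yn ** 2)             # y_n^T diag(L_n) y_n
    return 4.0 / (N * p ** 2) * (quad - (1.0 - p) * diag)
```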
A.3 RANDOM INITIALIZATION
While the main content of our paper showed the results for α and β initialized at 0.5, our results generally hold even if we initialize them randomly. Table 5 demonstrates this claim by showing our model’s performance with α and β initialized randomly. SGF (0.5) is the setting showed in the main part of our paper. SGF (U[-1,1]) initializes α and β using a uniform distribution in [-1,1].
Both Table 5 and Figure 8 show that our model behaves similarly to the fixed initialization at 0.5. It is worthwhile to mention that Figures 8a and 8b show SGF initialized randomly with the same seed but converging to two different solutions. The accuracies for these two particular cases are 89.7% for Cora and 92.0% for Wisconsin. This result and the filter visualization in Section 6.3 refute the argument that our model is also biased toward “low-frequency”.
Table 5: SGF accuracy with α, β initialized at 0.5 versus uniformly at random in [−1, 1] (the first block lists the low-frequency datasets in the order of Table 2).

                Cora           Citeseer       Pubmed         Amazon-Photo   Amazon-Computer
SGF (0.5)       88.97 ± 1.21   77.58 ± 1.11   90.12 ± 0.40   95.58 ± 0.55   92.15 ± 0.41
SGF (U[-1,1])   88.47 ± 1.40   77.50 ± 1.88   88.23 ± 1.12   92.23 ± 0.53   87.15 ± 3.63

                Wisconsin      Cornell        Texas          Chameleon      Bipartite
SGF (0.5)       87.06 ± 4.66   82.45 ± 6.19   80.56 ± 5.63   58.77 ± 1.90   100.0 ± 0.00
SGF (U[-1,1])   88.66 ± 3.40   79.13 ± 1.60   79.67 ± 3.62   57.83 ± 2.47   100.0 ± 0.00
B EXPERIMENTAL DETAILS
B.1 SOURCE CODE
The source code is provided in src.zip. The instruction to install Python environment and running examples can be found in README.md. All results in this paper are obtained using a single machine with an RTX Titan GPU (24GB). We also confirm the results on CPU and another machine with a GeForce 1080Ti GPU (11GB). The provided source code works on both CPU and GPU.
B.2 EVALUATION PROCEDURE
For each dataset and each run, the following training procedure is implemented: (1) split the data and use the train and validation vertices to estimate the Rayleigh quotient; (2) train the model on the train set and choose the hyper-parameters (dropout rate, learning rate, and number of layers) using the validation set; (3) save the model every time the best validation accuracy is reached; (4) load the model that is best on the validation set and evaluate it on the test set. Search set for each hyper-parameter:
• Dropout rate: {0.4, 0.5, 0.6, 0.7, 0.8}
• Weight decay: {1e-2, 1e-3, 5e-4, 1e-4, 5e-5}
• Learning rate: {0.001, 0.01, 0.02, 0.1}
• Number of layers: {4, 8, 16, 32, 64}

The results in the main part of our paper use dropout 0.7, weight decay 5e-4, learning rate 0.01, and 16 layers (the values originally marked in bold).
B.3 DATA SOURCE
Our datasets are obtained from the pytorch-geometric repository and the node-classification-dataset repository on GitHub. These datasets are “re-packed” with pickle and stored in src/data. The original URLs are:
• https://github.com/rusty1s/pytorch geometric • https://github.com/ryutamatsuno/node-classification-dataset
Citation Networks. Cora (ML), Citeseer, and Pubmed (Sen et al., 2008) are the three most commonly used networks for benchmarking vertex classification models. Vertices in these graphs represent papers, and each of them has a bag-of-words vector indicating the content of the paper. Edges are citations between papers. Originally these edges are directed, but they are converted to undirected edges as a trade-off between information loss and the efficiency of methods.
WebKB. The WebKB dataset is a collection of university websites collected by CMU5. As we mentioned in previous sections, this dataset is special because it contains many different types of vertices that have mixed frequencies. We use the Wisconsin, Cornell, and Texas subsets of this dataset.
Wikipedia. The Chameleon dataset belongs to a collection of Wikipedia pages where edges are references and vertex labels indicate the internet traffic. Originally this dataset was created for the vertex regression task, but here, we follow Pei et al. (2020) to split the traffic amount into 5 categories.
The synthetic dataset is generated using NetworkX library and labeled by its bipartite parts. The features are generated randomly with Gaussian N (0, 1).
B.4 OTHER METHODS
Other methods are obtained from their respective repositories on GitHub. The following are the parameter settings for the “Our experiment” section of Tables 2 and 3. Since each dataset has different recommended hyper-parameter values, we follow the authors’ recommendations for the hyper-parameters not mentioned here. We confirm the results with the recommended hyper-parameters and report them in the “Literature” sections.
5http://www.cs.cmu.edu/afs/cs.cmu.edu/project/theo-11/www/wwkb
• GCNII: θ = 0.5, α = 0.5.
• SGC: k = 2, lr = 0.01, wd = 5 × 10−4, dropout = 0.7.
• APPNP: K = 2, α = 0.2 and 0.8, lr = 0.02, wd = 5 × 10−4, dropout = 0.7.
• SGF-Cheby (our implementation): λmax = {1.5, 2.0}, K = 16, and other hyper-parameters are the same as SGF.

1. What is the focus of the paper, and what are the proposed contributions?
2. What are the strengths and weaknesses of the proposed approach, particularly regarding its similarity to other works in literature?
3. How does the reviewer assess the experimental results and their presentation in the paper?
4. Are there any questions or concerns regarding the paper's formulation, such as the horizontal stacking variant or the hyperparameter selection procedure?
5. Are there any minor issues or typos in the paper that the reviewer has noticed?

Review
Adaptive stacked graph filter
The paper proposes a relatively simple formulation for a graph convolutional filter that has the advantage of providing useful insights on the characteristics of the considered datasets. Many points of the paper are however not convincing in the present form, mainly regarding the novelty of the proposed formulation.
The paper proposes a graph convolution operator that is inspired by the well-known approximation of a graph filter using polynomials of the graph Laplacian.
Pros:
- The paper proposes a simple filter formulation that allows to study the dependency on the neighborhood radius on different datasets.
- The visualisation of the filters is interesting.
- The reported experimental results are positive, even though in many cases the improvement does not seem significant.
Cons:
- The proposed model is very similar to GCNII: graph convolution by Kipf and Welling with a single scalar parameter instead of a parameter matrix + skip connections. The main difference with GCNII is the lack of the identity mapping. In fact, the eq. of H^l in page 4 is very similar to eq. 5 in https://arxiv.org/pdf/2007.02133.pdf. Authors should deeply discuss the differences between their proposal and other works in literature, clarifying their novel contribution.
Comments about specific sections follow.
Experimental section:
- In page 6, authors state that they fix the \theta hyper-parameter of GCNII to 0.5, even though the recommended values are around 1.5. Can you justify this choice? Also, since you run the experiments on GCNII, it would be interesting to see its performance on the bipartite dataset with \theta = 1.5
- In Table 3, the results from literature do not report the variance. In general, it seems like the results of the proposed method and baselines are pretty close, and in many cases inside the variance range.
Appendix A: the horizontal stacking variant is not explained in detail. From the figure it looks like several stacked layers with an aggregation that sums the weighted representation computed at each layer. I don't see why this should be "horizontal". Probably writing down the equations of this model would help.
B.2. While authors state that for each dataset and for each run they select the hyper-parameters using the validation set, later in the same section they state that the results in the main paper are referred to the hyper-parameters in bold. I don't understand how the hyper-parameter selection procedure is adopted.
Minor:
- Table 3, Chameleon dataset: missing bold on SGC.
- Texas: MLP is in bold while it shouldn't be.
- Page 6: "Note that we also the extact" -> we use the
----- REBUTTAL
I acknowledge having checked the authors' rebuttal and the revised version of the manuscript.
ICLR | Title
Adaptive Stacked Graph Filter
Abstract
We study Graph Convolutional Networks (GCN) from the graph signal processing viewpoint by addressing a difference between learning graph filters with fullyconnected weights versus trainable polynomial coefficients. We find that by stacking graph filters with learnable polynomial parameters, we can build a highly adaptive and robust vertex classification model. Our treatment here relaxes the low-frequency (or equivalently, high homophily) assumptions in existing vertex classification models, resulting a more ubiquitous solution in terms of spectral properties. Empirically, by using only one hyper-parameter setting, our model achieves strong results on most benchmark datasets across the frequency spectrum.
N/A
We study Graph Convolutional Networks (GCN) from the graph signal processing viewpoint by addressing a difference between learning graph filters with fullyconnected weights versus trainable polynomial coefficients. We find that by stacking graph filters with learnable polynomial parameters, we can build a highly adaptive and robust vertex classification model. Our treatment here relaxes the low-frequency (or equivalently, high homophily) assumptions in existing vertex classification models, resulting a more ubiquitous solution in terms of spectral properties. Empirically, by using only one hyper-parameter setting, our model achieves strong results on most benchmark datasets across the frequency spectrum.
1 INTRODUCTION
The semi-supervised vertex classification problem (Weston et al., 2012; Yang et al., 2016) in attributed graphs has become one of the most fundamental machine learning problems in recent years. This problem is often associated with its most popular recent solution, namely Graph Convolutional Networks (Kipf & Welling, 2017). Since the GCN proposal, there has been a vast amount of research to improve its scalability (Hamilton et al., 2017; Chen et al., 2018; Wu et al., 2019) as well as performance (Liao et al., 2019; Li et al., 2019; Pei et al., 2020).
Existing vertex classification models often (implicitly) assume that the graph has large vertex homophily (Pei et al., 2020), or equivalently, low-frequency property (Li et al., 2019; Wu et al., 2019); see Section 2.1 for graph frequency. However, this assumption is not true in general. For instance, let us take the Wisconsin dataset (Table 1), which captures a network of students, faculty, staff, courses, and projects. These categories naturally exhibit different frequency patterns1. Connections between people are often low-frequency, while connections between topics and projects are often midrange. This problem becomes apparent as GCN-like models show low accuracies on this dataset; for example, see (Pei et al., 2020; Chen et al., 2020b; Liu et al., 2020).
This paper aims at establishing a GCN model for the vertex classification problem (Definition 1) that does not rely on any frequency assumption. Such a model can be applied to ubiquitous datasets without any hyper-parameter tuning for the graph structure.
Contributions. By observing the relation between label frequency and performance of existing GCN-like models, we propose to learn the graph filters coefficients directly rather than learning the MLP part of a GCN-like layer. We use filter stacking to implement a trainable graph filter, which is capable of learning any filter function. Our stacked filter construction with novel learnable filter parameters is easy to implement, sufficiently expressive, and less sensitive to the filters’ degree. By using only one hyper-parameter setting, we show that our model is more adaptive than existing work on a wide range of benchmark datasets.
The rest of our paper is organized as follows. Section 2 introduces notations and analytical tools. Section 3 provides insights into the vertex classification problem and motivations to our model’s design. Section 4 presents an implementation of our model. Section 5 summarizes related literature with a focus on graph filters and state-of-the-art models. Section 6 compares our model and other existing methods empirically. We also provide additional experimental results in Appendix A.
1“Frequency” is an equivalent concept to “homophily” and will be explained in Section 2.
2 PRELIMINARIES
We consider a simple undirected graph G = (V,E), where V = {1, . . . , n} is a set of n vertices and E ⊆ V × V is a set of edges. A graph G is called an attributed graph, denoted by G(X), when it is associated with a vertex feature mapping X : V 7→ Rd, where d is the dimension of the features. We define the following vertex classification problem, also known in the literature as the semi-supervised vertex classification problem (Yang et al., 2016). Definition 1 (Vertex Classification Problem). We are given an attributed graph G(X), a set of training vertices Vtr ⊂ V , training labels Ytr : Vtr → C, and label set C. The task is to find a model h : V → C using the training data (Vtr, Ytr) that approximates the true labeling function Y : V → C.
Let A be the adjacency matrix of the graph G, i.e., Ai,j = 1 if (i, j) ∈ E and 0 otherwise. Let di = ∑ j Aij be the degree of vertex i ∈ V , and let D = diag(d1, . . . , dn) be the n × n diagonal matrix of degrees. Let L = D −A be the combinatorial graph Laplacian. Let L = D−1/2LD−1/2 be the symmetric normalized graph Laplacian. We mainly focus on the symmetric normalized graph Laplacian due to its interesting spectral properties: (1) its eigenvalues range from 0 to 2; and (2) the spectral properties can be compared between different graphs (Chung & Graham, 1997). In recent literature, the normalized adjacency matrix with added self-loops, Ã = I −L+ c, is often used as the propagation matrix, where c is some diagonal matrix.
2.1 GRAPH FREQUENCY
Graph signal processing (Shuman et al., 2012) extends “frequency” concepts in the classical signal processing to graphs using the graph Laplacian. Let L = UΛU> be the eigendecomposition of the Laplacian, where U ∈ Rn×n is the orthogonal matrix consists of the orthonormal eigenvectors of L and Λ is the diagonal matrix of eigenvalues. Then, we can regard each eigenvector uk as a “oscillation pattern” and its eigenvalue λk as the “frequency” of the oscillation. This intuition is supported by the Rayleigh quotient as follows.
r(L, x) , x >Lx x>x =
∑ u∼v Lu,v(x(u)− x(v))2∑
u∈V x(u) 2
. (1)
where ∑ u∼v sums over all unordered pairs for which u and v are adjacent, x(u) denotes the entry of vector x corresponding to vertex u, and Lu,v is the (u, v)-entry of L. From the definition we see that r(x) is non-negative and L is positive semi-definite. r(x) is also known as a variational characterization of eigenvalues of L (Horn & Johnson, 2012, Chapter 4), hence 0 ≤ r(x) ≤ 2 for any non-zero real vector x. We use the notation r(x) to denote the Rayleigh quotient when the normalized graph Laplacian is clear from context. The Rayleigh quotient r(x) measures how the data x is oscillating. Hence, in this study, we use the term “frequency” and the “Rayleigh quotient” interchangeably. By the definition, the eigenvector ui has the frequency of λi.
The labeling y of the vertices is low-frequency if the adjacent vertices are more likely to have the same label. This is a common assumption made by the spectral clustering algorithms (Shi & Malik, 2000; Ng et al., 2002; Shaham et al., 2018). Commonly used terms, homophily and heterophily, used in network science, correspond to low-frequency and high-frequency, respectively.
2.2 GRAPH FILTERING
In classical signal processing, a given signal is processed by filters in order to remove unwanted interference. Here, we first design a frequency response f(λ) of the filter, and then apply the filter to the signal in the sense that each frequency component x̂(λ) of the data is modulated as f(λ)x̂(λ). Graph signal processing extends this concept as follows. Same as in classical signal processing, we design a filter f(λ). Then, we represent a given graph signal x ∈ R|V | as a linear combination of the eigenvectors as x = ∑ i xiui. Then, we modulate each frequency component
by f(λ) as x = ∑ i f(λi)xiui. An important fact is that this can be done without performing the eigendecomposition explicitly. Let f(L) be the matrix function induced from f(λ). Then, the filter is represented by f(L)x. As an extension of signal processing, graph signal processing deals with signals defined on graphs. In definition 1, each column of the feature matrix X ∈ Rn×d is a “graph signal”. Let L = UΛU> be
the eigendecomposition where U ∈ Rn×n consists of orthonormal eigenvectors. Signal X is filtered by function f of the eigenvalues as follow.
X̄ = Uf(Λ)U>X = f(L)X (2)
In general, different implementations of f(L) lead to different graph convolution models. For instance, GCN and SGC (Wu et al., 2019) are implemented by f(L) = (I−L+(D+ I)−1/2L(D+ I)−1/2)k, where the constant term stems from the fact that self-loops are added to vertices and k is the filter order. Generally, the underlying principle is to learn or construct the appropriate filter function f such that it transforms X into a more expressive representation. The filter in GCN is called a low-pass filter because it amplifies low-frequency components (Li et al., 2018; NT & Maehara, 2019).
3 SPECTRAL PROPERTIES OF FILTERS
Towards building a ubiquitous solution, we take an intermediate step to study the vertex classification problem. Similar to the unsupervised clustering problem, an (implicit) low-frequency assumption is commonly made. However, the semi-supervised vertex classification problem is more involved because vertex labels can have complicated non-local patterns. Table 1 shows three groups of datasets, each with different label frequency ranges. Notably, WebKB datasets (Wisconsin, Cornell, Texas) have mixed label frequencies; some labels have low frequencies while others have midrange frequencies. Therefore, in order to relax the frequency assumptions, we need to learn the filtering function f(λ) in a similar way as proposed by Defferrard et al. (2016).
The filtering function f(λ) is often approximated using a polynomial of the graph Laplacian as
f(L) ≈ poly(L) = K∑ i=0 θiLi. (3)
Because polynomials can uniformly approximate any real continuous function on a compact interval (see, e.g., (Brosowski & Deutsch, 1981)), such approximation scheme is well-justified.
Kipf & Welling (2017) derived their GCN formulation as follows. In their equation 5, they approximated a graph filter gθ by Chebyshev polynomials Tk as
gθ ∗ x ≈ K∑ k=0 θkTk(D −1/2AD−1/2)x. (4)
Then, they took the first two terms and shared the parameters as θ0 = −θ1 to obtain their equation 7: gθ ∗ x ≈ θ ( IN +D −1/2AD−1/2 ) x ≈ θ (2IN − L) (5)
Finally, they extended a scalar θ to a matrix Θ to accommodate multiple feature dimensions as
Z = D̃−1/2ÃD̃−1/2XΘ (6)
Kipf & Welling (2017) claimed that the weight matrix Θ can learn different filters, and subsequent works (e.g., (Veličković et al., 2018; Spinelli et al., 2020; Chen et al., 2020b)) also learned filters by Θ. However, neither in theory nor practice it is the case (Oono & Suzuki, 2020). As the construction suggest, a GCN layer only represents a filter of the form f(λ) ≈ 2− λ. To properly learn different graph filters, we should learn the multiplying parameters θ0, θ1, . . . , θK in equation 3. In the next section, we propose a learning model which directly learns these multiplying parameters.
4 MODEL DESCRIPTION
The previous discussion provided several insights: (1) Vertex classification model’s frequency is decided by its filter, (2) a mechanism to match the frequencies of data is necessary, and (3) directly learning the polynomial filter’s coefficients is more desirable if we do not want to make any frequency assumption. Based on these observations, we implemented an adaptive Stacked Graph Filter (SGF) model. Figure 1 visually describes SGF.
Design decisions. The novelty of our model is the stacked filter, and we directly learn the filtering function by filter coefficients α and β, which makes SGF work well universally without frequency hyper-parameters. The deep filter module consists of filters stacked on top of each other with skipconnections to implement the ideas in Proposition 2. Each filter layer has two learnable scalars: α` and β` which control the shape of the linear filter (Figure 1). Two learnable linear layers Win and Wout with a non-linear activation serve as a non-linear classifier (NT & Maehara, 2019).
The input part of our architecture resembles APPNP (Klicpera et al., 2019) in the sense that the input signals (vertex features) are passed through a learning weight, then fed into filtering. The output part of our architecture resembles SGC (Wu et al., 2019) where we learn the vertex labels with filtered signals. This combination naturally takes advantages of both bottom-up (APPNP) and top-down (SGC) approaches. Compared to APPNP and SGC, besides the different in filter learning, our model performs filtering (propagation) on the latent representation and classifies the filtered representation, whereas APPNP propagates the predicted features and SGC classifies the filtered features.
From the spectral filtering viewpoint, our approach is most similar to ChebyNet (Defferrard et al., 2016) since both models aim to learn the filtering polynomial via its coefficients. Chebyshev polynomial basis is often used in signal processing because it provides optimal interpolation points (Cheney, 1966; Hammond et al., 2011). However, since we are learning the coefficients of an unknown polynomial filter, all polynomial bases are equivalent. To demonstrate this point, we implement the Stacked Filter module (Figure 1) using ChebNet’s recursive formula in Section 6. We find that Chebyshev polynomial basis approach has similar performance to the stacked approach with one slight caveat on choosing λmax. We empirically show this problem by setting the scaling factor λmax = 1.5. Note that, as pointed out by Kipf & Welling (2017), such problem can be migrated simply by assuming λmax = 2 so all eigenvalues stay in [−1, 1].
Given an instance of Problem 1, let σ be an activation function (e.g., ReLU), Ã = I − (D + I)−1/2L(D + I)−1/2 be the augmented adjacency matrix, α` and β` be the filter parameters at layer `, a K-layer SGF is given by:
SGF: Input à SGF: Input L
H0 = σ(XWin) H0 = σ(XWin)
H` = α`ÃH`−1 + β`H0, ` = 1 . . .K H` = α`LH`−1 + β`H0, ` = 1 . . .K ŷ = HKWout ŷ = HKWout
SGF can be trained with conventional objectives (e.g., negative log-likelihood) to obtain a solution to Problem 1. We present our models using the augmented adjacency matrix to show its similarity to existing literature. However, as noted in Figure 1, we can replace à with L.
The stacked filter is easy to implement. Moreover, it can learn any polynomial of order-K as follows. The closed-form of the stacked filter (Figure 1) is given by
βKI + K∑ i=1 ( K∏ j=i αj)βi−1LK−i+1 (7)
where β0 = 1. Because each term of equation 7 contains a unique parameter, we obtain the following. Proposition 2. Any polynomial poly(L) of order K can be represented by the form equation 7.
Note that the same result holds if we replace L in equation 7 by Ã. In practice, we typically set the initial values of αi = 0.5 and update them via the back-propagation. The learned αi is then likely to satisfy |αi| < 1, which yields a further property of the stacked filter: it prefers a lowdegree filter, because the coefficients of the higher-order terms are higher-order in αi which vanishes exponentially faster. This advantage is relevant when we compare with a trivial implementation of the polynomial filter that learns θi directly (this approach corresponds to horizontal stacking and ChebyNet (Defferrard et al., 2016)). In Appendix A.1, we compare these two implementations and confirm that the stacked filter is more robust in terms of filter degree than the trivial implementation.
5 RELATED WORK
GCN-like models cover a subset of an increasingly large literature on graph-structured data learning with graph neural networks (Gori et al., 2005; Scarselli et al., 2008). In general, vertex classification and graph classification are the two main benchmark problems. The principles for representation learning behind modern graph learning models can also be split into two views: graph propagation/diffusion and graph signal filtering. In this section, we briefly summarize recent advances in the vertex classification problem with a focus on propagation and filtering methods. For a more comprehensive view, readers can refer to review articles by Wu et al. (2020), Grohe (2020), and also recent workshops on graph representation learning2.
Feature Propagation. Feature propagation/message-passing and graph signal filtering are two equivalent views on graph representation learning (Defferrard et al., 2016; Kipf & Welling, 2017). From the viewpoint of feature propagation (Scarselli et al., 2008; Gilmer et al., 2017), researchers focus on novel ways to propagate and aggregate vertex features to their neighbors. Klicpera et al. (2019) proposed PPNP and APPNP models, which propagate the hidden representation of vertices. More importantly, they pioneered in the decoupling of the graph part (propagation) and the classifier part (prediction). Abu-El-Haija et al. (2019) also proposed to use skip-connections to distinguish between 1-hop and 2-hop neighbors. Zeng et al. (2020) later proposed GraphSAINT to aggregate features from random subgraphs to further improve their model’s expressivity. Pei et al. (2020) proposed a more involved geometric aggregation scheme named Geom-GCN to address weaknesses of GCN-like models. Most notably, they discussed the relation between network homophily and GCN’s performance, which is similar to label frequency r(Y ) in Table 1. Spinelli et al. (2020) introduced an adaptive model named AP-GCN, in which each vertex can learn the number of “hops” to propagate its feature via a trainable halting probability. Similar to our discussion in Section 3, they still use a fully-connected layer to implement the halting criteria, which controls feature propagation. AP-GCN’s architecture resembles horizontal stacking of graph filters where they learn coefficients θ directly. However their construction only allows for binary coefficients3. We later show that full horizontal stacking models (more expressive than AP-GCN) is less stable in terms of polynomial order than our approach (Appendix A.1). More recently, Liu et al. (2020) continued to address the difficulty of low homophily datasets and proposed a non-local aggregation based on 1D convolution and the attention mechanism, which has a “reconnecting” effect to increase homophily.
Graph Filtering. GCN-like models can also be viewed as graph signal filters where vertex feature vectors are signals and graph structure defines graph Fourier bases (Shuman et al., 2012; Defferrard et al., 2016; Li et al., 2018; Wu et al., 2019). This graph signal processing view addresses label efficiency (Li et al., 2019) and provides an analogue for understanding graph signal processing using
2See, e.g., https://grlplus.github.io/ 3In the manuscript, they showed a construction using coefficients of graph Laplacian, but the actual imple-
mentation used GCNConv (which is I − L+ c) from pytorch-geometric.
traditional signal processing techniques. For example, the Lanczos algorithm is applied in learning graph filters by Liao et al. (2019). Bianchi et al. (2019) applies the ARMA filter to graph neural networks. Similar to (Klicpera et al., 2019), Wu et al. (2019) and NT & Maehara (2019) also follow the decoupling principle but in a reversed way (filter-then-classify). (Chen et al., 2020b) built a deep GCN named GCNII which holds the current best results for original splits of Cora, Citeseer, and Pubmed. They further showed that their model can estimate any filter function with an assumption that the fully-connected layers can learn filter coefficients (Chen et al., 2020b, Proof of Theorem 2).
6 EXPERIMENTAL RESULTS
We conduct experiments on benchmark and synthetic data to empirically evaluate our proposed models. First, we compare our models with several existing models in terms of average classification accuracy. Our experimental results show that our single model can perform well across all frequency ranges. Second, we plot the learned filter functions of our model to show that our model can learn the frequency range from the data — such visualization is difficult in existing works as the models’ filters are fixed before the training process.
6.1 DATASETS
We use three groups of datasets corresponding to three types of label frequency (low, midrange, high). The first group is low-frequency labeled data, which consists of citation networks: Cora, Citeseer, Pubmed (Sen et al., 2008); and co-purchase networks Amazon-Photo, Amazon-Computer (Shchur et al., 2018). The second group consists of network datasets with midrange label frequency (close to 1): Wisconsin, Cornell, Texas (Pei et al., 2020); and Chameleon (Rozemberczki et al., 2019). The last group consists of a synthetic dataset with high label frequency (close to 2). For the Bipartite dataset, we generate a connected bipartite graph on 2,000 vertices (1,000 on each part) with an edge density of 0.025. We then use the bipartite parts as binary vertex labels. Table 1 gives an overview of these datasets; see Appendix B.3 for more details.
6.2 VERTEX CLASSIFICATION
We compare our method with some of the best models in the current literature. A two-layer MLP (our model without graph filters), GCN (Kipf & Welling, 2017), SGC (Wu et al., 2019), and APPNP (Klicpera et al., 2019) are used as baselines. Geom-GCN-(I,P,S) (Pei et al., 2020), JKNet+DE (Xu et al., 2018; Rong et al., 2019), and GCNII (Chen et al., 2020a) are currently among the best models. We implement the Chebyshev polynomial filter as in (Defferrard et al., 2016) and set λmax = 1.5. The Literature section of Tables 2 and 3 shows the best results found in the literature where these models are set at the recommended hyper-parameters and recommended variants for each dataset. In our experiment, we fix the graph-related hyper-parameters of each model and report the classification results. Our model contains 16 layers of stacked filters (Ã) and has 64 hidden dimensions. Learning rate is set at 0.01, weight decay is 5 × 10−4, and dropout rate for linear layers is 0.7. From an intuition that the filter should discover the required frequency pattern before the linear layers, we set the learning rate of linear layers to be one-fourth of the main learning rate. This experimental setup shows that SGF can adapt to the label frequency without setting specific hyper-parameters. In Table 2, SGF performs comparably with the current state-of-the-art. On the other hand, in Table 3, SGF is not only better than others in our experiments but also surpasses the best results in the literature. Note that we also use the exact same SGF model across all experiments.
Results in Table 3 also suggest that the adaptability of the state-of-the-art model GCNII is sensitive to its parameters α and θ. In our experiment, we fix the θ parameter to 0.5 for all datasets, while in their manuscript the recommended values are around 1.5 depending on the dataset. With the recommended hyper-parameters, GCNII can achieve the average accuracy of 81.57% on the Wisconsin data. However, its performance drops by around 3–10% with different θ values. This comparison highlights our model's ability to adapt to a wider range of datasets without any graph-related hyper-parameters.
The Chebyshev polynomial basis performs comparably to the stacking implementation as we discussed in the previous sections. The value λmax = 1.5 is chosen because the maximum eigenvalue of real-world networks is typically around this value. However, in practice, one should set λmax = 2 as
[Figure 2 (plot residue removed): learned filter functions at initialization ("Init.") and after training on Cora, Wisconsin, and Bipartite; the first row uses the normalized augmented adjacency matrix Ã as input and the second row uses the normalized Laplacian L. Panel accuracies shown: 87.1, 89.0, 100, 84.4, 79.2, and 100.]
discussed by Kipf & Welling (2017). Our experiments here intend to highlight the potential numerical instability problem due to the arbitrarily large leading coefficient of the Chebyshev polynomial basis. Since for vertex classification any polynomial basis is equivalent, numerically stable ones such as our implementation of SGF are certainly preferable in practice.
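To make the instability concrete, here is a small numerical sketch (ours, added for illustration; it is not from the paper's released code). It evaluates the order-16 Chebyshev basis function after the ChebyNet rescaling x = 2λ/λmax − 1: when λmax underestimates the true spectral radius, some rescaled eigenvalues fall outside [−1, 1] and the basis value explodes.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

K = 16
lam = np.linspace(0.0, 2.0, 5)             # eigenvalues of L lie in [0, 2]
for lam_max in (1.5, 2.0):
    x = 2.0 * lam / lam_max - 1.0          # rescaling used by the Chebyshev filter
    Tk = C.chebval(x, [0] * K + [1])       # T_16 evaluated at the rescaled points
    print(lam_max, np.abs(Tk).max())       # about 1 for lam_max = 2, roughly 1e7 for lam_max = 1.5
```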
6.3 FILTER VISUALIZATION
Another advantage of our model is the ability to visualize the filter function using an inversion of Proposition 2. The first row of Figure 2 shows the filtering functions at initialization and after training when input is the normalized augmented adjacency matrix. The second row shows the results when the input is the normalized Laplacian matrix. These two cases can be interpreted as starting with a low-pass filter (Ã) or starting with a high-pass filter (L). Figure 2 clearly shows that our method can learn the suitable filtering shapes from data regardless of the initialization. We expect the visualization here can be used as an effective exploratory tool and baseline method for future graph data.
6.4 ADAPTIVITY TO STRUCTURAL NOISE
Recently, Fox & Rajamanickam (2019) raised a problem regarding structural robustness of a graph neural network for graph classification. Zügner et al. (2018) posed a similar problem related to adversarial attack on graphs by perturbations of vertex feature or graph structure for the vertex classification setting (Dai et al., 2018; Bojchevski & Günnemann, 2019; Zügner & Günnemann, 2019). Here, we evaluate the robustness of the models against the structural noise, where we perturb a fraction of edges while preserving the degree sequence4. This structural noise collapses the relation between the features and the graph structure; hence, it makes the dataset to have the midrange frequency. This experimental setting shows that adaptive models like ours and GCNII are more robust to structural noise. In the worst-case scenario (90% edges are swapped), the adaptive models are at least as good as an MLP on vertex features. Figure 3 shows vertex classification results at each amount of edge perturbation: from 10% to 90%. APPNP with α = 0.2 and SGC with k = 2 have similar behavior under structural noise since these models give more weights to filtered features. On the other hand, APPNP with α = 0.8 is much more robust to structural noise as it depends more on the vertex features. This result suggests that adaptive models like ours and GCNII can be a good baseline for future graph adversarial attack studies (SGF’s advantage here is being much simpler).
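For reference, degree-preserving edge perturbation of this kind can be sketched with NetworkX as below (our illustration; the paper's own script may differ, and the graph and perturbation fraction here are placeholders).

```python
import networkx as nx

def perturb_edges(G, fraction, seed=0):
    """Swap roughly `fraction` of the edges while keeping every vertex degree fixed."""
    H = G.copy()
    n_swaps = int(fraction * H.number_of_edges() / 2)  # each swap rewires two edges
    nx.double_edge_swap(H, nswap=n_swaps, max_tries=100 * n_swaps + 100, seed=seed)
    return H

G = nx.erdos_renyi_graph(200, 0.05, seed=0)
H = perturb_edges(G, 0.5)
# The degree sequence is unchanged even though half of the edges were rewired.
assert sorted(d for _, d in G.degree()) == sorted(d for _, d in H.degree())
```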
6.5 DYNAMICS OF α’S AND β’S
In addition to Section 6.3, this section studies the dynamics of α and β during training for two representative datasets: Cora (low-frequency) and Wisconsin (mid-frequency). We record the values of α and β in SGF (Ã) every 20 training epochs and plot the result. Figure 4 shows the values of α and β in 16 layers of SGF in top-to-bottom, then left-to-right order (reshaped to 4 by 4 blocks). For the Cora dataset, we see that the over-smoothing effect is quickly mitigated as the α's automatically go to zero with the exception of the last three layers. Similarly, the weights for skip-connections – β's – quickly
4https://en.wikipedia.org/wiki/Degree-preserving_randomization
go to zero with the exception of the last few layers. For the Wisconsin dataset, we can see that there is almost no filtering because all α's go to zero quickly and there is only one active skip-connection in the last layer. This single active skip-connection phenomenon is further confirmed by the experiment on MLP (Table 3) where the MLP performs comparably to graph-based models. These results further explain the adaptivity of our model.
Additional Experiments. We provide several other experimental results in Appendix A. Section A.1 discusses the advantages of vertical stacking (SGF) versus a naïve horizontal stacking (learning θ in equation 3 directly). Section A.2 discusses the difficulty of estimating the frequency range (Rayleigh quotient) of vertex labels when the training set is small. Section A.3 provides additional experiments where α's and β's are initialized randomly. We show that our model is still adaptive even with uniform [−1, 1] initialization.
7 CONCLUSION
We show that simply learning the polynomial coefficients rather than the linear layers in the formulation of GCN can lead to a highly adaptive vertex classification model. Our experiment shows that by using only one setting, SGF is comparable with all current state-of-the-art methods. Furthermore, SGF can also adapt to structural noise extremely well, promising a robust model in practice. Since our objective is to relax the frequency assumption, one could expect our model to perform weakly when the amount of training data is limited. Because the estimation of label frequency becomes difficult with a small amount of data (Appendix A.2), designing a learning model that is both adaptive and data-efficient is an exciting challenge. We believe an unbiased estimation (Proposition 4) with a more involved filter learning scheme is needed to address this problem in the future.
A EXTRA EXPERIMENTAL RESULTS
A.1 VERTICAL AND HORIZONTAL STACKING
Horizontal stacking is equivalent to learning θ's in Equation 3 directly instead of stacking them vertically. In parallel to our work, Chien et al. (2020) explored the horizontal stacking idea with the pagerank matrix instead of the Laplacian matrix discussed here. We find that both vertical and horizontal stacking can learn a degree-K polynomial, but vertical stacking is naturally more robust to the high-order terms. The horizontally stacked filter even loses its ability to adapt when learning order-64 polynomials. Table 4 shows a comparison between vertical stacking (SGF) and horizontal stacking. We also report the average number of iterations until early stopping and average training time per epoch for the 64-filter case. All hyper-parameters are the same as in Tables 2 and 3. Figure 5 gives an example of 4-layer stacking to clarify the difference between horizontal and vertical.
A.2 RAYLEIGH QUOTIENT ESTIMATION FROM TRAINING DATA
To obtain an accurate classification solution, the frequency of the model’s output must be close to the frequency of the true labels as follows. Proposition 3. Let ŷ, y ∈ RN be unit length vectors whose signs of entries indicate predicted labels and true labels for vertices in graph G. Let L ∈ Rn×n be the symmetric normalized graph Laplacian of graph G. Suppose the graph frequency gap is at least δ: |r(ŷ) − r(y)| = |ŷ>Lŷ − y>Ly| ≥ δ. Then we have:
||ŷ − y||22 ≥ δ/4 (8)
This proposition explains that a model designed for a specific frequency range (e.g., GCN, SGC, GAT, APPNP, etc for low-frequency range) gives a poor performance on the other frequency ranges. This proposition also leads us a method to seek a model (i.e., a filter) whose output matches the frequency of the true labels. Because the true label frequency is unknown in practice, we must estimate this quantity from the training data. Below, we discuss the difficulty of this estimation.
A naı̈ve strategy of estimating the frequency is to compute Rayleigh quotient on the training set. However, training features X and training labels yn often have Rayleigh quotient close to 1 (as shown in Table 1 for r(X)), and Figure 7 (Appendix) shows the results when we compute the Rayleigh quotient of labels based on training data. This means that a naı̈ve strategy yields undesirable results and we need some involved process of estimating the frequency.
If we can assume that (1) Training vertices are sampled i.i.d., and (2) we know the number of vertices in the whole graph (N = |V |), we can obtain an unbiased estimation of the frequency of the true labels as follows.
Proposition 4. Let p be the proportion of vertices that will be used as training data, q be the proportion of label y, N be the total number of vertices in the graph, Ln be the symmetric normalized Laplacian of the subgraph induced by the training vertices, and yn be the training labels. Assuming the training set is obtained by sampling the vertices i.i.d. with probability p, we can estimate the Rayleigh quotient of the true labels by
E[r(yn)] = 4 N^{−1} p^{−2} ( yn>Ln yn − (1 − p) yn> diag(Ln) yn ) (9)
Figure 6 shows the unbiased estimation results using Proposition 4. Unfortunately, at 10% training ratio, the observed variances are high across datasets; thus, we conclude that estimating the label frequency is generally difficult, especially with small training sets.
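For concreteness, the naı̈ve estimate and the estimator of equation 9 can be transcribed as follows (our own sketch; variable names are ours, and the second function is a direct transcription of the formula in Proposition 4 under the stated i.i.d.-sampling assumption).

```python
import numpy as np

def naive_rayleigh(L_train, y_train):
    """Rayleigh quotient computed only on the training subgraph."""
    return float(y_train @ L_train @ y_train) / float(y_train @ y_train)

def unbiased_rayleigh(L_train, y_train, p, N):
    """Estimator of equation 9: training vertices sampled i.i.d. with probability p,
    N is the number of vertices in the full graph."""
    quad = y_train @ L_train @ y_train
    diag_term = y_train @ (np.diag(L_train) * y_train)
    return 4.0 / (N * p**2) * (quad - (1.0 - p) * diag_term)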
Thus far, we have shown that estimating the label frequency given limited training data is difficult even with an unbiased estimator. The high data efficiency of GCN-like models could be attributed to the fact that they already assume the labels are low frequency. Without such an assumption, we need more data in order to correctly estimate the frequency patterns.
A.3 RANDOM INITIALIZATION
While the main content of our paper showed the results for α and β initialized at 0.5, our results generally hold even if we initialize them randomly. Table 5 demonstrates this claim by showing our model’s performance with α and β initialized randomly. SGF (0.5) is the setting showed in the main part of our paper. SGF (U[-1,1]) initializes α and β using a uniform distribution in [-1,1].
Both Table 5 and Figure 8 show that our model behaves similarly to the fixed initialization at 0.5. It is worthwhile to mention that Figures 8a and 8b show SGF initialized randomly with the same seed but converging to two different solutions. The accuracies for these two particular cases are 89.7% for Cora and 92.0% for Wisconsin. This result and the filter visualization in Section 6.3 refute the argument that our model is also biased toward "low-frequency".
Table 5 (first block: the low-frequency datasets; the header row is missing in the source. Second block as labeled):
SGF (0.5): 88.97 ± 1.21, 77.58 ± 1.11, 90.12 ± 0.40, 95.58 ± 0.55, 92.15 ± 0.41
SGF (U[-1,1]): 88.47 ± 1.40, 77.50 ± 1.88, 88.23 ± 1.12, 92.23 ± 0.53, 87.15 ± 3.63
Wisconsin, Cornell, Texas, Chameleon, Bipartite
SGF (0.5): 87.06 ± 4.66, 82.45 ± 6.19, 80.56 ± 5.63, 58.77 ± 1.90, 100.0 ± 0.00
SGF (U[-1,1]): 88.66 ± 3.40, 79.13 ± 1.60, 79.67 ± 3.62, 57.83 ± 2.47, 100.0 ± 0.00
B EXPERIMENTAL DETAILS
B.1 SOURCE CODE
The source code is provided in src.zip. The instruction to install Python environment and running examples can be found in README.md. All results in this paper are obtained using a single machine with an RTX Titan GPU (24GB). We also confirm the results on CPU and another machine with a GeForce 1080Ti GPU (11GB). The provided source code works on both CPU and GPU.
B.2 EVALUATION PROCEDURE
For each dataset and each run, the following training procedure is implemented: Split, use train and validation vertices to estimate Rayleigh quotient; train the model with train set and choose the hyper-parameters using validation set, the hyper-parameters are dropout rate, learning rate, and number of layers; save the model every time best validation accuracy is reached; load the best model on validation set to evaluate on test set. Search set for each hyper-parameters:
• Dropout rate: {0.4, 0.5, 0.6, 0.7, 0.8}
• Weight decay: {1e−2, 1e−3, 5e−4, 1e−4, 5e−5}
• Learning rate: {0.001, 0.01, 0.02, 0.1}
• Number of layers: {4, 8, 16, 32, 64}
We use the hyper-parameters in bold text to report the result in the main part of our paper.
B.3 DATA SOURCE
Our datasets are obtained from the pytorch-geometric repository and the node-classification-dataset repository on GitHub. These datasets are “re-packed” with pickle and stored in src/data. The original URLs are:
• https://github.com/rusty1s/pytorch_geometric
• https://github.com/ryutamatsuno/node-classification-dataset
Citation Networks. Cora (ML), Citeseer, and Pubmed (Sen et al., 2008) are the set of three most commonly used networks for benchmarking vertex classification models. Vertices in these graphs represent papers, and each of them has a bag-of-word vector indicating the content of the paper. Edges are citations between papers. Originally these edges are directed, but they are converted to undirected edges in the trade-off between information loss and efficiency of methods.
WebKB. The WebKB dataset is a collection of university websites collected by CMU5. As we mentioned in previous sections, this dataset is special because it contains many different types of vertices that have mixed frequencies. We use the Wisconsin, Cornell, Texas subsets of this dataset.
Wikipedia. The Chameleon dataset belongs to a collection of Wikipedia pages where edges are references and vertex labels indicate the internet traffic. Originally this dataset was created for the vertex regression task, but here, we follow Pei et al. (2020) to split the traffic amount into 5 categories.
The synthetic dataset is generated using NetworkX library and labeled by its bipartite parts. The features are generated randomly with Gaussian N (0, 1).
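A sketch of how such a dataset can be generated (ours, for illustration; the exact generator call, seed, and feature dimension in the released code may differ):

```python
import numpy as np
from networkx.algorithms import bipartite
import networkx as nx

rng = np.random.default_rng(0)
# Connected bipartite graph on 1,000 + 1,000 vertices with edge density 0.025.
G = bipartite.random_graph(1000, 1000, 0.025, seed=0)
assert nx.is_connected(G)  # holds with overwhelming probability at this density
y = np.array([G.nodes[v]["bipartite"] for v in G.nodes])  # labels = bipartite side
X = rng.normal(size=(G.number_of_nodes(), 16))            # Gaussian N(0, 1) features
```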
B.4 OTHER METHODS
Other methods are obtained from their respective repository on GitHub. The following are parameter settings for the “Our experiment” section of Table 2 and 3. Since each dataset has a different hyperparameter values, we follow the author’s recommendation for the hyper-parameter not mentioned here. We confirm the results with the recommended hyper-parameters and report them in the “Literature” sections.
5http://www.cs.cmu.edu/afs/cs.cmu.edu/project/theo-11/www/wwkb
• GCNII: θ = 0.5, α = 0.5.
• SGC: k = 2, lr = 0.01, wd = 5 × 10−4, dropout = 0.7.
• APPNP: K = 2, α = 0.2 and 0.8, lr = 0.02, wd = 5 × 10−4, dropout = 0.7.
• SGF-Cheby (our implementation): λmax = {1.5, 2.0}, K = 16 and other hyper-parameters
are the same as SGF. | 1. What is the main contribution of the paper regarding graph neural networks?
2. What are the concerns regarding the proposed polynomial approximation and its comparison with previous works?
3. What are the issues with the experimental setup and hyperparameter selection?
4. How does the reviewer assess the superiority of the proposed method over other approaches?
5. Are there any questions or suggestions for improving the paper's content or results? | Review | Review
This paper proposes to stack the graph filters with learnable polynomial parameters to construct the new graph neural network model. Generally, this paper is well organized and easy to read. Here are my concerns.
1. Essentially, this paper argues that the approximation of Chebyshev polynomials in GCN can only capture the low-frequency features in the spectral domain, and proposes a more general approximation scheme by stacking the graph filter in the spatial domain. However, the low-frequency property of GCN is highly related to the localized first-order approximation of graph convolutions. Without this first-order approximation, the GCN model can capture the high-frequency information in graphs, e.g., ChebyNet [2] with large enough order K. It would be better to add more discussion/comparisons with this kind of GCN.
Moreover, my core concern is why the proposed polynomial approximation (in Equation 7) is superior to the previous Chebyshev approximation, from both theoretical and practical standpoints. In graph signal processing, using a polynomial series to approximate the graph filter has been well studied in the literature. As pointed out by [1], Chebyshev polynomials are good approximators for graph filters. It would be better to add more justification (e.g., numerical analysis) for the proposed approximation scheme.
2. Another concern is the experiment. Dataset splitting: It seems that this paper adopts the new splitting plan (stratified 0.6/0.2/0.2 splits) for all datasets. Meanwhile, the paper also reports the best results reported in the literature. However, I think it is improper to put them in the same table since we cannot make a fair comparison under different data splits. Moreover, I would like to see the results of SGF on the public splits of these datasets.
Hyperparameters: In Appendix B.4, the authors claim that they follow the hyperparameter recommendations in the baselines' original papers. However, it seems that some of the given hyperparameters are not the best hyper-parameters. For example, for Cora, \alpha of GCNII is set to 0.2, while in Appendix B.4, \alpha=0.5, which is inconsistent with the original paper [3]. On the other hand, in Appendix B.2, the authors adopt a random strategy to search the hyperparameters of SGF. Since the authors re-run all the experiments of the baselines on the new splits, it would be better to conduct the same hyper-parameter search process for each baseline to ensure a fair comparison.
The filter parameters visualization: From the model construction perspective, the only difference between SGF and GCNII/APPNP is the trainable filter parameters. Therefore, I am curious about the values of \alpha and \beta after training. Could you visualize the values of the two parameters in each layer of SGF?
Overall, I think this paper is marginally below the acceptance threshold.
[1] David K. Hammond, Pierre Vandergheynst, and Re ́mi Gribonval. Wavelets on graphs via spectral graph theory. Applied and Computational Harmonic Analysis, 30(2):129–150, 2011. [2] Defferrard, Michaël, Xavier Bresson, and Pierre Vandergheynst. "Convolutional neural networks on graphs with fast localized spectral filtering." Advances in neural information processing systems. 2016. [3] Chen, M., Wei, Z., Huang, Z., Ding, B., & Li, Y. (2020). Simple and deep graph convolutional networks. arXiv preprint arXiv:2007.02133. |
ICLR | Title
Adaptive Stacked Graph Filter
Abstract
We study Graph Convolutional Networks (GCN) from the graph signal processing viewpoint by addressing a difference between learning graph filters with fully-connected weights versus trainable polynomial coefficients. We find that by stacking graph filters with learnable polynomial parameters, we can build a highly adaptive and robust vertex classification model. Our treatment here relaxes the low-frequency (or equivalently, high homophily) assumptions in existing vertex classification models, resulting in a more ubiquitous solution in terms of spectral properties. Empirically, by using only one hyper-parameter setting, our model achieves strong results on most benchmark datasets across the frequency spectrum.
N/A
We study Graph Convolutional Networks (GCN) from the graph signal processing viewpoint by addressing a difference between learning graph filters with fully-connected weights versus trainable polynomial coefficients. We find that by stacking graph filters with learnable polynomial parameters, we can build a highly adaptive and robust vertex classification model. Our treatment here relaxes the low-frequency (or equivalently, high homophily) assumptions in existing vertex classification models, resulting in a more ubiquitous solution in terms of spectral properties. Empirically, by using only one hyper-parameter setting, our model achieves strong results on most benchmark datasets across the frequency spectrum.
1 INTRODUCTION
The semi-supervised vertex classification problem (Weston et al., 2012; Yang et al., 2016) in attributed graphs has become one of the most fundamental machine learning problems in recent years. This problem is often associated with its most popular recent solution, namely Graph Convolutional Networks (Kipf & Welling, 2017). Since the GCN proposal, there has been a vast amount of research to improve its scalability (Hamilton et al., 2017; Chen et al., 2018; Wu et al., 2019) as well as performance (Liao et al., 2019; Li et al., 2019; Pei et al., 2020).
Existing vertex classification models often (implicitly) assume that the graph has large vertex homophily (Pei et al., 2020), or equivalently, low-frequency property (Li et al., 2019; Wu et al., 2019); see Section 2.1 for graph frequency. However, this assumption is not true in general. For instance, let us take the Wisconsin dataset (Table 1), which captures a network of students, faculty, staff, courses, and projects. These categories naturally exhibit different frequency patterns1. Connections between people are often low-frequency, while connections between topics and projects are often midrange. This problem becomes apparent as GCN-like models show low accuracies on this dataset; for example, see (Pei et al., 2020; Chen et al., 2020b; Liu et al., 2020).
This paper aims at establishing a GCN model for the vertex classification problem (Definition 1) that does not rely on any frequency assumption. Such a model can be applied to ubiquitous datasets without any hyper-parameter tuning for the graph structure.
Contributions. By observing the relation between label frequency and performance of existing GCN-like models, we propose to learn the graph filters coefficients directly rather than learning the MLP part of a GCN-like layer. We use filter stacking to implement a trainable graph filter, which is capable of learning any filter function. Our stacked filter construction with novel learnable filter parameters is easy to implement, sufficiently expressive, and less sensitive to the filters’ degree. By using only one hyper-parameter setting, we show that our model is more adaptive than existing work on a wide range of benchmark datasets.
The rest of our paper is organized as follows. Section 2 introduces notations and analytical tools. Section 3 provides insights into the vertex classification problem and motivations to our model’s design. Section 4 presents an implementation of our model. Section 5 summarizes related literature with a focus on graph filters and state-of-the-art models. Section 6 compares our model and other existing methods empirically. We also provide additional experimental results in Appendix A.
1“Frequency” is an equivalent concept to “homophily” and will be explained in Section 2.
2 PRELIMINARIES
We consider a simple undirected graph G = (V,E), where V = {1, . . . , n} is a set of n vertices and E ⊆ V × V is a set of edges. A graph G is called an attributed graph, denoted by G(X), when it is associated with a vertex feature mapping X : V 7→ Rd, where d is the dimension of the features. We define the following vertex classification problem, also known in the literature as the semi-supervised vertex classification problem (Yang et al., 2016). Definition 1 (Vertex Classification Problem). We are given an attributed graph G(X), a set of training vertices Vtr ⊂ V , training labels Ytr : Vtr → C, and label set C. The task is to find a model h : V → C using the training data (Vtr, Ytr) that approximates the true labeling function Y : V → C.
Let A be the adjacency matrix of the graph G, i.e., Ai,j = 1 if (i, j) ∈ E and 0 otherwise. Let di = ∑ j Aij be the degree of vertex i ∈ V , and let D = diag(d1, . . . , dn) be the n × n diagonal matrix of degrees. Let L = D −A be the combinatorial graph Laplacian. Let L = D−1/2LD−1/2 be the symmetric normalized graph Laplacian. We mainly focus on the symmetric normalized graph Laplacian due to its interesting spectral properties: (1) its eigenvalues range from 0 to 2; and (2) the spectral properties can be compared between different graphs (Chung & Graham, 1997). In recent literature, the normalized adjacency matrix with added self-loops, Ã = I −L+ c, is often used as the propagation matrix, where c is some diagonal matrix.
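As an illustration of these definitions (our own sketch, not part of the original manuscript), the matrices above can be assembled from a dense adjacency matrix as follows; the helper name is ours.

```python
import numpy as np

def graph_matrices(A):
    """Build the degree matrix, combinatorial and symmetric normalized Laplacians,
    and the augmented adjacency matrix used by GCN-like models."""
    A = np.asarray(A, dtype=float)
    d = A.sum(axis=1)                      # vertex degrees d_i
    D = np.diag(d)
    L = D - A                              # combinatorial Laplacian
    d_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    L_sym = d_inv_sqrt @ L @ d_inv_sqrt    # symmetric normalized Laplacian
    # Augmented adjacency with self-loops, normalized by D + I (as in GCN).
    A_raw = A + np.eye(len(A))
    d_tilde_inv_sqrt = np.diag(1.0 / np.sqrt(A_raw.sum(axis=1)))
    A_tilde = d_tilde_inv_sqrt @ A_raw @ d_tilde_inv_sqrt
    return D, L, L_sym, A_tilde

# Toy example: a path graph on 3 vertices.
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])
D, L, L_sym, A_tilde = graph_matrices(A)
print(np.round(np.linalg.eigvalsh(L_sym), 3))  # eigenvalues lie in [0, 2]
```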
2.1 GRAPH FREQUENCY
Graph signal processing (Shuman et al., 2012) extends “frequency” concepts in the classical signal processing to graphs using the graph Laplacian. Let L = UΛU> be the eigendecomposition of the Laplacian, where U ∈ Rn×n is the orthogonal matrix consists of the orthonormal eigenvectors of L and Λ is the diagonal matrix of eigenvalues. Then, we can regard each eigenvector uk as a “oscillation pattern” and its eigenvalue λk as the “frequency” of the oscillation. This intuition is supported by the Rayleigh quotient as follows.
r(L, x) := x>Lx / x>x = (∑u∼v Lu,v (x(u) − x(v))²) / (∑u∈V x(u)²). (1)
where ∑ u∼v sums over all unordered pairs for which u and v are adjacent, x(u) denotes the entry of vector x corresponding to vertex u, and Lu,v is the (u, v)-entry of L. From the definition we see that r(x) is non-negative and L is positive semi-definite. r(x) is also known as a variational characterization of eigenvalues of L (Horn & Johnson, 2012, Chapter 4), hence 0 ≤ r(x) ≤ 2 for any non-zero real vector x. We use the notation r(x) to denote the Rayleigh quotient when the normalized graph Laplacian is clear from context. The Rayleigh quotient r(x) measures how the data x is oscillating. Hence, in this study, we use the term “frequency” and the “Rayleigh quotient” interchangeably. By the definition, the eigenvector ui has the frequency of λi.
The labeling y of the vertices is low-frequency if the adjacent vertices are more likely to have the same label. This is a common assumption made by the spectral clustering algorithms (Shi & Malik, 2000; Ng et al., 2002; Shaham et al., 2018). Commonly used terms, homophily and heterophily, used in network science, correspond to low-frequency and high-frequency, respectively.
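A small sketch (ours, added for illustration) of the Rayleigh quotient in equation 1: on a path graph, a constant labeling is low-frequency while an alternating labeling is high-frequency.

```python
import numpy as np

def rayleigh_quotient(L_sym, x):
    """Frequency of a graph signal x with respect to the normalized Laplacian (equation 1)."""
    x = np.asarray(x, dtype=float)
    return float(x @ L_sym @ x) / float(x @ x)

A = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], dtype=float)
d = A.sum(axis=1)
L_sym = np.eye(4) - np.diag(d**-0.5) @ A @ np.diag(d**-0.5)
print(rayleigh_quotient(L_sym, np.ones(4)))                    # close to 0 (low frequency)
print(rayleigh_quotient(L_sym, np.array([1., -1., 1., -1.])))  # close to 2 (high frequency)
```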
2.2 GRAPH FILTERING
In classical signal processing, a given signal is processed by filters in order to remove unwanted interference. Here, we first design a frequency response f(λ) of the filter, and then apply the filter to the signal in the sense that each frequency component x̂(λ) of the data is modulated as f(λ)x̂(λ). Graph signal processing extends this concept as follows. Same as in classical signal processing, we design a filter f(λ). Then, we represent a given graph signal x ∈ R|V | as a linear combination of the eigenvectors as x = ∑ i xiui. Then, we modulate each frequency component
by f(λ) as x = ∑ i f(λi)xiui. An important fact is that this can be done without performing the eigendecomposition explicitly. Let f(L) be the matrix function induced from f(λ). Then, the filter is represented by f(L)x. As an extension of signal processing, graph signal processing deals with signals defined on graphs. In definition 1, each column of the feature matrix X ∈ Rn×d is a “graph signal”. Let L = UΛU> be
the eigendecomposition where U ∈ Rn×n consists of orthonormal eigenvectors. Signal X is filtered by function f of the eigenvalues as follows.
X̄ = Uf(Λ)U>X = f(L)X (2)
In general, different implementations of f(L) lead to different graph convolution models. For instance, GCN and SGC (Wu et al., 2019) are implemented by f(L) = (I−L+(D+ I)−1/2L(D+ I)−1/2)k, where the constant term stems from the fact that self-loops are added to vertices and k is the filter order. Generally, the underlying principle is to learn or construct the appropriate filter function f such that it transforms X into a more expressive representation. The filter in GCN is called a low-pass filter because it amplifies low-frequency components (Li et al., 2018; NT & Maehara, 2019).
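The following sketch (ours) applies equation 2 directly via the eigendecomposition and checks that, for a polynomial response such as f(λ) = (2 − λ)/2, the same result is obtained from the matrix function f(L) = I − L/2 without any eigendecomposition.

```python
import numpy as np

def spectral_filter(L_sym, X, f):
    """Apply the graph filter f to every column of X via equation 2."""
    lam, U = np.linalg.eigh(L_sym)        # L = U diag(lam) U^T
    return U @ np.diag(f(lam)) @ U.T @ X

rng = np.random.default_rng(0)
A = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]], dtype=float)
d = A.sum(axis=1)
L_sym = np.eye(3) - np.diag(d**-0.5) @ A @ np.diag(d**-0.5)
X = rng.normal(size=(3, 2))
X_filtered = spectral_filter(L_sym, X, lambda lam: (2.0 - lam) / 2.0)
assert np.allclose(X_filtered, (np.eye(3) - L_sym / 2.0) @ X)
```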
3 SPECTRAL PROPERTIES OF FILTERS
Towards building a ubiquitous solution, we take an intermediate step to study the vertex classification problem. Similar to the unsupervised clustering problem, an (implicit) low-frequency assumption is commonly made. However, the semi-supervised vertex classification problem is more involved because vertex labels can have complicated non-local patterns. Table 1 shows three groups of datasets, each with different label frequency ranges. Notably, WebKB datasets (Wisconsin, Cornell, Texas) have mixed label frequencies; some labels have low frequencies while others have midrange frequencies. Therefore, in order to relax the frequency assumptions, we need to learn the filtering function f(λ) in a similar way as proposed by Defferrard et al. (2016).
The filtering function f(λ) is often approximated using a polynomial of the graph Laplacian as
f(L) ≈ poly(L) = ∑_{i=0}^{K} θi L^i. (3)
Because polynomials can uniformly approximate any real continuous function on a compact interval (see, e.g., (Brosowski & Deutsch, 1981)), such approximation scheme is well-justified.
Kipf & Welling (2017) derived their GCN formulation as follows. In their equation 5, they approximated a graph filter gθ by Chebyshev polynomials Tk as
gθ ∗ x ≈ ∑_{k=0}^{K} θk Tk(D^{−1/2}AD^{−1/2}) x. (4)
Then, they took the first two terms and shared the parameters as θ0 = −θ1 to obtain their equation 7:
gθ ∗ x ≈ θ (IN + D^{−1/2}AD^{−1/2}) x ≈ θ (2IN − L) x (5)
Finally, they extended a scalar θ to a matrix Θ to accommodate multiple feature dimensions as
Z = D̃−1/2ÃD̃−1/2XΘ (6)
Kipf & Welling (2017) claimed that the weight matrix Θ can learn different filters, and subsequent works (e.g., (Veličković et al., 2018; Spinelli et al., 2020; Chen et al., 2020b)) also learned filters by Θ. However, neither in theory nor in practice is this the case (Oono & Suzuki, 2020). As the construction suggests, a GCN layer only represents a filter of the form f(λ) ≈ 2 − λ. To properly learn different graph filters, we should learn the multiplying parameters θ0, θ1, . . . , θK in equation 3. In the next section, we propose a learning model which directly learns these multiplying parameters.
4 MODEL DESCRIPTION
The previous discussion provided several insights: (1) Vertex classification model’s frequency is decided by its filter, (2) a mechanism to match the frequencies of data is necessary, and (3) directly learning the polynomial filter’s coefficients is more desirable if we do not want to make any frequency assumption. Based on these observations, we implemented an adaptive Stacked Graph Filter (SGF) model. Figure 1 visually describes SGF.
Design decisions. The novelty of our model is the stacked filter, and we directly learn the filtering function by filter coefficients α and β, which makes SGF work well universally without frequency hyper-parameters. The deep filter module consists of filters stacked on top of each other with skipconnections to implement the ideas in Proposition 2. Each filter layer has two learnable scalars: α` and β` which control the shape of the linear filter (Figure 1). Two learnable linear layers Win and Wout with a non-linear activation serve as a non-linear classifier (NT & Maehara, 2019).
The input part of our architecture resembles APPNP (Klicpera et al., 2019) in the sense that the input signals (vertex features) are passed through a learning weight, then fed into filtering. The output part of our architecture resembles SGC (Wu et al., 2019) where we learn the vertex labels with filtered signals. This combination naturally takes advantages of both bottom-up (APPNP) and top-down (SGC) approaches. Compared to APPNP and SGC, besides the different in filter learning, our model performs filtering (propagation) on the latent representation and classifies the filtered representation, whereas APPNP propagates the predicted features and SGC classifies the filtered features.
From the spectral filtering viewpoint, our approach is most similar to ChebyNet (Defferrard et al., 2016) since both models aim to learn the filtering polynomial via its coefficients. The Chebyshev polynomial basis is often used in signal processing because it provides optimal interpolation points (Cheney, 1966; Hammond et al., 2011). However, since we are learning the coefficients of an unknown polynomial filter, all polynomial bases are equivalent. To demonstrate this point, we implement the Stacked Filter module (Figure 1) using ChebNet's recursive formula in Section 6. We find that the Chebyshev polynomial basis approach has similar performance to the stacked approach with one slight caveat on choosing λmax. We empirically show this problem by setting the scaling factor λmax = 1.5. Note that, as pointed out by Kipf & Welling (2017), such a problem can be mitigated simply by assuming λmax = 2 so that all eigenvalues stay in [−1, 1].
Given an instance of Problem 1, let σ be an activation function (e.g., ReLU), Ã = I − (D + I)−1/2L(D + I)−1/2 be the augmented adjacency matrix, α` and β` be the filter parameters at layer `, a K-layer SGF is given by:
SGF with input Ã: H0 = σ(X Win); H` = α` Ã H`−1 + β` H0 for ` = 1, . . . , K; ŷ = HK Wout.
SGF with input L: H0 = σ(X Win); H` = α` L H`−1 + β` H0 for ` = 1, . . . , K; ŷ = HK Wout.
SGF can be trained with conventional objectives (e.g., negative log-likelihood) to obtain a solution to Problem 1. We present our models using the augmented adjacency matrix to show its similarity to existing literature. However, as noted in Figure 1, we can replace à with L.
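A minimal PyTorch sketch of this architecture (ours; module and variable names are our own, and details such as dropout and the reduced learning rate for the linear layers are omitted):

```python
import torch
import torch.nn as nn

class SGF(nn.Module):
    """Stacked graph filter as described above. `prop` is the propagation matrix
    (augmented adjacency Ã or Laplacian L) as a dense torch tensor."""
    def __init__(self, in_dim, hidden_dim, n_classes, K=16):
        super().__init__()
        self.w_in = nn.Linear(in_dim, hidden_dim)
        self.w_out = nn.Linear(hidden_dim, n_classes)
        # One (alpha, beta) pair per filter layer, initialized at 0.5.
        self.alpha = nn.Parameter(0.5 * torch.ones(K))
        self.beta = nn.Parameter(0.5 * torch.ones(K))
        self.K = K

    def forward(self, X, prop):
        h0 = torch.relu(self.w_in(X))
        h = h0
        for l in range(self.K):
            h = self.alpha[l] * (prop @ h) + self.beta[l] * h0
        return self.w_out(h)

# Usage sketch: logits = SGF(X.shape[1], 64, n_classes)(X, A_tilde), trained with
# torch.nn.functional.cross_entropy on the training vertices.
```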
The stacked filter is easy to implement. Moreover, it can learn any polynomial of order-K as follows. The closed-form of the stacked filter (Figure 1) is given by
βK I + ∑_{i=1}^{K} (∏_{j=i}^{K} αj) βi−1 L^{K−i+1} (7)
where β0 = 1. Because each term of equation 7 contains a unique parameter, we obtain the following. Proposition 2. Any polynomial poly(L) of order K can be represented by the form equation 7.
Note that the same result holds if we replace L in equation 7 by Ã. In practice, we typically set the initial values of αi = 0.5 and update them via the back-propagation. The learned αi is then likely to satisfy |αi| < 1, which yields a further property of the stacked filter: it prefers a lowdegree filter, because the coefficients of the higher-order terms are higher-order in αi which vanishes exponentially faster. This advantage is relevant when we compare with a trivial implementation of the polynomial filter that learns θi directly (this approach corresponds to horizontal stacking and ChebyNet (Defferrard et al., 2016)). In Appendix A.1, we compare these two implementations and confirm that the stacked filter is more robust in terms of filter degree than the trivial implementation.
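The closed form in equation 7 can be checked numerically against the unrolled recursion; the sketch below (ours, for illustration) does so for random α, β and a random symmetric matrix in place of L.

```python
import numpy as np

def stacked_filter_matrix(L, alpha, beta):
    """Unroll H_l = alpha_l * L @ H_{l-1} + beta_l * H_0 as a matrix acting on H_0."""
    M = np.eye(len(L))
    for a, b in zip(alpha, beta):
        M = a * L @ M + b * np.eye(len(L))
    return M

def closed_form_matrix(L, alpha, beta):
    """Equation 7 with beta_0 = 1."""
    K = len(alpha)
    beta_full = np.concatenate(([1.0], beta[:-1]))     # beta_0, ..., beta_{K-1}
    M = beta[-1] * np.eye(len(L))
    for i in range(1, K + 1):
        coeff = np.prod(alpha[i - 1:]) * beta_full[i - 1]
        M = M + coeff * np.linalg.matrix_power(L, K - i + 1)
    return M

rng = np.random.default_rng(1)
L = rng.normal(size=(5, 5)); L = (L + L.T) / 10        # any symmetric matrix works here
alpha, beta = rng.uniform(-1, 1, 8), rng.uniform(-1, 1, 8)
assert np.allclose(stacked_filter_matrix(L, alpha, beta), closed_form_matrix(L, alpha, beta))
```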
5 RELATED WORK
GCN-like models cover a subset of an increasingly large literature on graph-structured data learning with graph neural networks (Gori et al., 2005; Scarselli et al., 2008). In general, vertex classification and graph classification are the two main benchmark problems. The principles for representation learning behind modern graph learning models can also be split into two views: graph propagation/diffusion and graph signal filtering. In this section, we briefly summarize recent advances in the vertex classification problem with a focus on propagation and filtering methods. For a more comprehensive view, readers can refer to review articles by Wu et al. (2020), Grohe (2020), and also recent workshops on graph representation learning2.
Feature Propagation. Feature propagation/message-passing and graph signal filtering are two equivalent views on graph representation learning (Defferrard et al., 2016; Kipf & Welling, 2017). From the viewpoint of feature propagation (Scarselli et al., 2008; Gilmer et al., 2017), researchers focus on novel ways to propagate and aggregate vertex features to their neighbors. Klicpera et al. (2019) proposed PPNP and APPNP models, which propagate the hidden representation of vertices. More importantly, they pioneered in the decoupling of the graph part (propagation) and the classifier part (prediction). Abu-El-Haija et al. (2019) also proposed to use skip-connections to distinguish between 1-hop and 2-hop neighbors. Zeng et al. (2020) later proposed GraphSAINT to aggregate features from random subgraphs to further improve their model’s expressivity. Pei et al. (2020) proposed a more involved geometric aggregation scheme named Geom-GCN to address weaknesses of GCN-like models. Most notably, they discussed the relation between network homophily and GCN’s performance, which is similar to label frequency r(Y ) in Table 1. Spinelli et al. (2020) introduced an adaptive model named AP-GCN, in which each vertex can learn the number of “hops” to propagate its feature via a trainable halting probability. Similar to our discussion in Section 3, they still use a fully-connected layer to implement the halting criteria, which controls feature propagation. AP-GCN’s architecture resembles horizontal stacking of graph filters where they learn coefficients θ directly. However their construction only allows for binary coefficients3. We later show that full horizontal stacking models (more expressive than AP-GCN) is less stable in terms of polynomial order than our approach (Appendix A.1). More recently, Liu et al. (2020) continued to address the difficulty of low homophily datasets and proposed a non-local aggregation based on 1D convolution and the attention mechanism, which has a “reconnecting” effect to increase homophily.
Graph Filtering. GCN-like models can also be viewed as graph signal filters where vertex feature vectors are signals and graph structure defines graph Fourier bases (Shuman et al., 2012; Defferrard et al., 2016; Li et al., 2018; Wu et al., 2019). This graph signal processing view addresses label efficiency (Li et al., 2019) and provides an analogue for understanding graph signal processing using
2See, e.g., https://grlplus.github.io/
3In the manuscript, they showed a construction using coefficients of the graph Laplacian, but the actual implementation used GCNConv (which is I − L + c) from pytorch-geometric.
traditional signal processing techniques. For example, the Lanczos algorithm is applied in learning graph filters by Liao et al. (2019). Bianchi et al. (2019) applies the ARMA filter to graph neural networks. Similar to (Klicpera et al., 2019), Wu et al. (2019) and NT & Maehara (2019) also follow the decoupling principle but in a reversed way (filter-then-classify). (Chen et al., 2020b) built a deep GCN named GCNII which holds the current best results for original splits of Cora, Citeseer, and Pubmed. They further showed that their model can estimate any filter function with an assumption that the fully-connected layers can learn filter coefficients (Chen et al., 2020b, Proof of Theorem 2).
6 EXPERIMENTAL RESULTS
We conduct experiments on benchmark and synthetic data to empirically evaluate our proposed models. First, we compare our models with several existing models in terms of average classification accuracy. Our experimental results show that our single model can perform well across all frequency ranges. Second, we plot the learned filter functions of our model to show that our model can learn the frequency range from the data — such visualization is difficult in existing works as the models’ filters are fixed before the training process.
6.1 DATASETS
We use three groups of datasets corresponding to three types of label frequency (low, midrange, high). The first group is low-frequency labeled data, which consists of citation networks: Cora, Citeseer, Pubmed (Sen et al., 2008); and co-purchase networks Amazon-Photo, Amazon-Computer (Shchur et al., 2018). The second group consists of network datasets with midrange label frequency (close to 1): Wisconsin, Cornell, Texas (Pei et al., 2020); and Chameleon (Rozemberczki et al., 2019). The last group consists of a synthetic dataset with high label frequency (close to 2). For the Bipartite dataset, we generate a connected bipartite graph on 2,000 vertices (1,000 on each part) with an edge density of 0.025. We then use the bipartite parts as binary vertex labels. Table 1 gives an overview of these datasets; see Appendix B.3 for more details.
6.2 VERTEX CLASSIFICATION
We compare our method with some of the best models in the current literature. A two-layer MLP (our model without graph filters), GCN (Kipf & Welling, 2017), SGC (Wu et al., 2019), and APPNP (Klicpera et al., 2019) are used as baselines. Geom-GCN-(I,P,S) (Pei et al., 2020), JKNet+DE (Xu et al., 2018; Rong et al., 2019), and GCNII (Chen et al., 2020a) are currently among the best models. We implement the Chebyshev polynomial filter as in (Defferrard et al., 2016) and set λmax = 1.5. The Literature section of Tables 2 and 3 shows the best results found in the literature where these models are set at the recommended hyper-parameters and recommended variants for each dataset. In our experiment, we fix the graph-related hyper-parameters of each model and report the classification results. Our model contains 16 layers of stacked filters (Ã) and has 64 hidden dimensions. Learning rate is set at 0.01, weight decay is 5 × 10−4, and dropout rate for linear layers is 0.7. From an intuition that the filter should discover the required frequency pattern before the linear layers, we set the learning rate of linear layers to be one-fourth of the main learning rate. This experimental setup shows that SGF can adapt to the label frequency without setting specific hyper-parameters. In Table 2, SGF performs comparably with the current state-of-the-art. On the other hand, in Table 3, SGF is not only better than others in our experiments but also surpasses the best results in the literature. Note that we also use the exact same SGF model across all experiments.
Results in Table 3 also suggest that the ability to adapt of the state of the art model GCNII is sensitive to its parameters α and θ. In our experiment, we fix the θ parameter to 0.5 for all datasets, while in their manuscript the recommended values are around 1.5 depending on the dataset. With the recommended hyper-parameters, GCNII can achieve the average accuracy of 81.57% on Wisconsin data. However, its performance dropped around 3 ∼ 10% with different θ values. This comparison highlights our model’s ability to adapt to a wider range of datasets without any graph-related hyper-parameters.
The Chebyshev polynomial basis performs comparably to the stacking implementation as we discussed in the previous sections. The value λmax = 1.5 is chosen because the maximum eigenvalue of real-world networks is typically around this value. However, in practice, one should set λmax = 2 as
[Figure 2 (plot residue removed): learned filter functions at initialization ("Init.") and after training on Cora, Wisconsin, and Bipartite; the first row uses the normalized augmented adjacency matrix Ã as input and the second row uses the normalized Laplacian L. Panel accuracies shown: 87.1, 89.0, 100, 84.4, 79.2, and 100.]
discussed by Kipf & Welling (2017). Our experiments here intend to highlight the potential numerical instability problem due to the arbitrarily large leading coefficient of the Chebyshev polynomial basis. Since for vertex classification any polynomial basis is equivalent, numerically stable ones such as our implementation of SGF are certainly preferable in practice.
6.3 FILTER VISUALIZATION
Another advantage of our model is the ability to visualize the filter function using an inversion of Proposition 2. The first row of Figure 2 shows the filtering functions at initialization and after training when input is the normalized augmented adjacency matrix. The second row shows the results when the input is the normalized Laplacian matrix. These two cases can be interpreted as starting with a low-pass filter (Ã) or starting with a high-pass filter (L). Figure 2 clearly shows that our method can learn the suitable filtering shapes from data regardless of the initialization. We expect the visualization here can be used as an effective exploratory tool and baseline method for future graph data.
6.4 ADAPTIVITY TO STRUCTURAL NOISE
Recently, Fox & Rajamanickam (2019) raised a problem regarding structural robustness of a graph neural network for graph classification. Zügner et al. (2018) posed a similar problem related to adversarial attack on graphs by perturbations of vertex feature or graph structure for the vertex classification setting (Dai et al., 2018; Bojchevski & Günnemann, 2019; Zügner & Günnemann, 2019). Here, we evaluate the robustness of the models against the structural noise, where we perturb a fraction of edges while preserving the degree sequence4. This structural noise collapses the relation between the features and the graph structure; hence, it makes the dataset to have the midrange frequency. This experimental setting shows that adaptive models like ours and GCNII are more robust to structural noise. In the worst-case scenario (90% edges are swapped), the adaptive models are at least as good as an MLP on vertex features. Figure 3 shows vertex classification results at each amount of edge perturbation: from 10% to 90%. APPNP with α = 0.2 and SGC with k = 2 have similar behavior under structural noise since these models give more weights to filtered features. On the other hand, APPNP with α = 0.8 is much more robust to structural noise as it depends more on the vertex features. This result suggests that adaptive models like ours and GCNII can be a good baseline for future graph adversarial attack studies (SGF’s advantage here is being much simpler).
6.5 DYNAMICS OF α’S AND β’S
In addition to Section 6.3, this section studies the dynamics of α and β during training for two representative datasets: Cora (low-frequency) and Wisconsin (mid-frequency). We record the values of α and β in SGF (Ã) every 20 training epochs and plot the result. Figure 4 shows the values of α and β in 16 layers of SGF in top-to-bottom, then left-to-right order (reshaped to 4 by 4 blocks). For the Cora dataset, we see that the over-smoothing effect is quickly mitigated as the α's automatically go to zero with the exception of the last three layers. Similarly, the weights for skip-connections – β's – quickly
4https://en.wikipedia.org/wiki/Degree-preserving_randomization
go to zero with the exception of the last few layers. For the Wisconsin dataset, we can see that there is almost no filtering because all α's go to zero quickly and there is only one active skip-connection in the last layer. This single active skip-connection phenomenon is further confirmed by the experiment on MLP (Table 3) where the MLP performs comparably to graph-based models. These results further explain the adaptivity of our model.
Additional Experiments. We provide several other experimental results in Appendix A. Section A.1 discusses the advantages of vertical stacking (SGF) versus a naı̈ve horizontal stacking (learning θ in equation 3 directly). Section A.2 discusses the difficulty of estimating the frequency range (Rayleigh quotient) of vertex labels when the training set is small. Section A.3 provide additional experiments where α’s and β’s are initialized randomly. We show that our model is still adaptive even with uniform [−1, 1] initialization.
7 CONCLUSION
We show that simply learning the polynomial coefficients rather than the linear layers in the formulation of GCN can lead to a highly adaptive vertex classification model. Our experiment shows that by using only one setting, SGF is comparable with all current state-of-the-art methods. Furthermore, SGF can also adapt to structural noise extremely well, promising a robust model in practice. Since our objective is to relax the frequency assumption, one could expect our model to perform weakly when the amount of training data is limited. Because the estimation of label frequency becomes difficult with a small amount of data (Appendix A.2), designing a learning model that is both adaptive and data-efficient is an exciting challenge. We believe an unbiased estimation (Proposition 4) with a more involved filter learning scheme is needed to address this problem in the future.
A EXTRA EXPERIMENTAL RESULTS
A.1 VERTICAL AND HORIZONTAL STACKING
Horizontal stacking is equivalent to learning θ's in Equation 3 directly instead of stacking them vertically. In parallel to our work, Chien et al. (2020) explored the horizontal stacking idea with the pagerank matrix instead of the Laplacian matrix discussed here. We find that both vertical and horizontal stacking can learn a degree-K polynomial, but vertical stacking is naturally more robust to the high-order terms. The horizontally stacked filter even loses its ability to adapt when learning order-64 polynomials. Table 4 shows a comparison between vertical stacking (SGF) and horizontal stacking. We also report the average number of iterations until early stopping and average training time per epoch for the 64-filter case. All hyper-parameters are the same as in Tables 2 and 3. Figure 5 gives an example of 4-layer stacking to clarify the difference between horizontal and vertical.
A.2 RAYLEIGH QUOTIENT ESTIMATION FROM TRAINING DATA
To obtain an accurate classification solution, the frequency of the model’s output must be close to the frequency of the true labels as follows. Proposition 3. Let ŷ, y ∈ RN be unit length vectors whose signs of entries indicate predicted labels and true labels for vertices in graph G. Let L ∈ Rn×n be the symmetric normalized graph Laplacian of graph G. Suppose the graph frequency gap is at least δ: |r(ŷ) − r(y)| = |ŷ>Lŷ − y>Ly| ≥ δ. Then we have:
||ŷ − y||22 ≥ δ/4 (8)
This proposition explains that a model designed for a specific frequency range (e.g., GCN, SGC, GAT, APPNP, etc for low-frequency range) gives a poor performance on the other frequency ranges. This proposition also leads us a method to seek a model (i.e., a filter) whose output matches the frequency of the true labels. Because the true label frequency is unknown in practice, we must estimate this quantity from the training data. Below, we discuss the difficulty of this estimation.
A naı̈ve strategy of estimating the frequency is to compute Rayleigh quotient on the training set. However, training features X and training labels yn often have Rayleigh quotient close to 1 (as shown in Table 1 for r(X)), and Figure 7 (Appendix) shows the results when we compute the Rayleigh quotient of labels based on training data. This means that a naı̈ve strategy yields undesirable results and we need some involved process of estimating the frequency.
If we can assume that (1) Training vertices are sampled i.i.d., and (2) we know the number of vertices in the whole graph (N = |V |), we can obtain an unbiased estimation of the frequency of the true labels as follows.
Proposition 4. Let p be the proportion of vertices that will be used as training data, q be the proportion of label y, N be the total number of vertices in the graph, Ln be the symmetric normalized Laplacian of the subgraph induced by the training vertices, and yn be the training labels. Assuming the training set is obtained by sampling the vertices i.i.d. with probability p, we can estimate the Rayleigh quotient of the true labels by
E[r(yn)] = 4 N^{−1} p^{−2} ( yn>Ln yn − (1 − p) yn> diag(Ln) yn ) (9)
Figure 6 shows the unbiased estimation results using Proposition 4. Unfortunately, at 10% training ratio, the observed variances are high across datasets; thus, we conclude that estimating the label frequency is generally difficult, especially with small training sets.
Thus far, we have shown that estimating the label frequency given limited training data is difficult even with an unbiased estimator. The high data efficiency of GCN-like models could be attributed to the fact that they already assume the labels are low frequency. Without such an assumption, we need more data in order to correctly estimate the frequency patterns.
A.3 RANDOM INITIALIZATION
While the main content of our paper showed the results for α and β initialized at 0.5, our results generally hold even if we initialize them randomly. Table 5 demonstrates this claim by showing our model’s performance with α and β initialized randomly. SGF (0.5) is the setting showed in the main part of our paper. SGF (U[-1,1]) initializes α and β using a uniform distribution in [-1,1].
Both Table 5 and Figure 8 show that our model behaves similarly to the fixed initialization at 0.5. It is worthwhile to mention that Figures 8a and 8b show SGF initialized randomly with the same seed but converging to two different solutions. The accuracies for these two particular cases are 89.7% for Cora and 92.0% for Wisconsin. This result and the filter visualization in Section 6.3 refute the argument that our model is also biased toward "low-frequency".
Table 5 (first block: the low-frequency datasets; the header row is missing in the source. Second block as labeled):
SGF (0.5): 88.97 ± 1.21, 77.58 ± 1.11, 90.12 ± 0.40, 95.58 ± 0.55, 92.15 ± 0.41
SGF (U[-1,1]): 88.47 ± 1.40, 77.50 ± 1.88, 88.23 ± 1.12, 92.23 ± 0.53, 87.15 ± 3.63
Wisconsin, Cornell, Texas, Chameleon, Bipartite
SGF (0.5): 87.06 ± 4.66, 82.45 ± 6.19, 80.56 ± 5.63, 58.77 ± 1.90, 100.0 ± 0.00
SGF (U[-1,1]): 88.66 ± 3.40, 79.13 ± 1.60, 79.67 ± 3.62, 57.83 ± 2.47, 100.0 ± 0.00
B EXPERIMENTAL DETAILS
B.1 SOURCE CODE
The source code is provided in src.zip. The instruction to install Python environment and running examples can be found in README.md. All results in this paper are obtained using a single machine with an RTX Titan GPU (24GB). We also confirm the results on CPU and another machine with a GeForce 1080Ti GPU (11GB). The provided source code works on both CPU and GPU.
B.2 EVALUATION PROCEDURE
For each dataset and each run, the following training procedure is implemented: Split, use train and validation vertices to estimate Rayleigh quotient; train the model with train set and choose the hyper-parameters using validation set, the hyper-parameters are dropout rate, learning rate, and number of layers; save the model every time best validation accuracy is reached; load the best model on validation set to evaluate on test set. Search set for each hyper-parameters:
• Dropout rate: {0.4, 0.5, 0.6, 0.7, 0.8}
• Weight decay: {1e−2, 1e−3, 5e−4, 1e−4, 5e−5}
• Learning rate: {0.001, 0.01, 0.02, 0.1}
• Number of layers: {4, 8, 16, 32, 64}
We use the hyper-parameters in bold text to report the result in the main part of our paper.
B.3 DATA SOURCE
Our datasets are obtained from the pytorch-geometric repository and the node-classification-dataset repository on GitHub. These datasets are “re-packed” with pickle and stored in src/data. The original URLs are:
• https://github.com/rusty1s/pytorch_geometric
• https://github.com/ryutamatsuno/node-classification-dataset
Citation Networks. Cora (ML), Citeseer, and Pubmed (Sen et al., 2008) are the three most commonly used networks for benchmarking vertex classification models. Vertices in these graphs represent papers, and each has a bag-of-words vector indicating the content of the paper. Edges are citations between papers. Originally these edges are directed, but they are converted to undirected edges as a trade-off between information loss and the efficiency of methods.
WebKB. The WebKB dataset is a collection of university websites collected by CMU5. As mentioned in previous sections, this dataset is special because it contains many different types of vertices that have mixed frequencies. We use the Wisconsin, Cornell, and Texas subsets of this dataset.
Wikipedia. The Chameleon dataset belongs to a collection of Wikipedia pages where edges are references and vertex labels indicate the internet traffic. Originally this dataset was created for the vertex regression task, but here, we follow Pei et al. (2020) to split the traffic amount into 5 categories.
The synthetic dataset is generated using the NetworkX library and labeled by its bipartite parts. The features are generated randomly from a Gaussian N(0, 1).
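Below is a minimal sketch of how such a synthetic bipartite graph could be built with NetworkX; the graph sizes, edge probability, and feature dimension are illustrative assumptions rather than the exact values used for the Bipartite dataset.

```python
import networkx as nx
import numpy as np

def make_bipartite_dataset(n_left=100, n_right=100, p_edge=0.05, feat_dim=16, seed=0):
    rng = np.random.default_rng(seed)
    g = nx.bipartite.random_graph(n_left, n_right, p_edge, seed=seed)
    # Label each vertex by the bipartite side it belongs to.
    labels = np.array([g.nodes[v]["bipartite"] for v in g.nodes])
    # Features carry no label information: i.i.d. standard Gaussian N(0, 1).
    features = rng.standard_normal((g.number_of_nodes(), feat_dim))
    adj = nx.adjacency_matrix(g, nodelist=sorted(g.nodes))
    return adj, features, labels

adj, x, y = make_bipartite_dataset()
```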
B.4 OTHER METHODS
Other methods are obtained from their respective repositories on GitHub. The following are the parameter settings for the "Our experiment" sections of Tables 2 and 3. Since each dataset has different hyper-parameter values, we follow the authors' recommendations for the hyper-parameters not mentioned here. We confirm the results with the recommended hyper-parameters and report them in the "Literature" sections.
5 http://www.cs.cmu.edu/afs/cs.cmu.edu/project/theo-11/www/wwkb
• GCNII: θ = 0.5, α = 0.5.
• SGC: k = 2, lr = 0.01, wd = 5 × 10−4, dropout = 0.7.
• APPNP: K = 2, α = 0.2 and 0.8, lr = 0.02, wd = 5 × 10−4, dropout = 0.7.
• SGF-Cheby (our implementation): λmax = {1.5, 2.0}, K = 16, and other hyper-parameters are the same as SGF. | 1. What are the strengths and weaknesses of the proposed method in adaptively learning the polynomial graph filter?
2. How does the proposed method differ from previous methods in avoiding over-smoothing when stacking many layers?
3. Why does the random initialization not work for the proposed model, and what is the role of the specific initialization proposed in the paper?
4. How does the performance of the proposed method compare to ChebNet (GCN-Cheby), which also uses a polynomial filter, on heterophilic graphs?
5. Why do the authors claim that the performance of most baseline methods is found in the literature, when the data split used in the original GCN and GAT papers is much sparser than the split proposed by the authors?
6. How can the authors explain the large standard deviation in the improvement of the proposed model, especially on Wisconsin, Cornell, and Texas?
7. Why is SGF worse than not only SGC but also GCNII by a large margin on Chameleon, if SGF can indeed learn the near-optimal polynomial filter?
8. Can the authors provide more details about their experiment on filter visualization and structural noise?
9. Can the authors clarify the ambiguity in their statement regarding the model's ability to adapt to both homophilic and heterophilic graphs without tuning hyperparameters?
10. Can the authors provide more references or citations to support their claims, especially regarding the related work in adaptive learning of polynomial graph filters? | Review | Review
Summary: The authors propose to learn the polynomial graph filter in their model. It can be viewed as adaptively learning the propagation part of APPNP, followed by a linear transformation (in features). They show the proposed model can perform well on both homophilic and heterophilic graphs.
Pros:
1. The idea of adaptively learning the polynomial filter seems correct and reasonable.
2. Results on filter visualization and structural noise are interesting.
Cons:
1. The proposed methodology is not novel; a very similar idea has been proposed previously (see detailed comments).
2. Problems with over-smoothing.
3. Results in the experiment section (Tables 2 and 3) are questionable.
Detail comments:
While the proposed idea of adaptively learning the polynomial graph filter is interesting, it has been proposed previously not only in the GNN literature [2] but also in PageRank-based methods [3]. Both propose the idea of adaptively learning the polynomial graph filter, or equivalently the generalized PageRank weights. Hence, I do not think the current paper is completely novel. Nevertheless, the proposed methodology seems to be the correct answer for GNNs to adapt to both homophilic and heterophilic graphs. One problem with the proposed method is why it can avoid over-smoothing when stacking many layers. The authors use a fixed initialization α = 0.5, which is the same as APPNP, so at least at the very beginning it won't suffer from over-smoothing. However, it is unclear how the coefficients behave during and after training. Also, it is not clear how to initialize β in the model. Furthermore, if the proposed model can indeed adaptively learn a good polynomial graph filter, why doesn't the random initialization work? Does that mean the implicit bias of the specific initialization proposed in the paper is necessary? If that is the case, then I do not see why the claim of "adaptive learning" is correct, since it is actually sensitive to the initialization.
Besides the methodology and novelty, I also find the experiment section questionable. Firstly, since the main theme of the paper is learning the polynomial filter, the authors should at least compare their method with ChebNet (GCN-Cheby) [5], which also uses a polynomial filter. Note that both [4] and [2] show that ChebNet can better adapt to heterophilic graphs compared to GCN and GAT.
On the other hand, according to Appendix B.4, the authors use K = 2 (propagation steps) for APPNP. This is NOT the suggested hyper-parameter reported in [1] (K = 10). Note that the authors of [1] even show that choosing a larger K ≥ 10 can slightly improve performance on Cora, Citeseer, and PubMed. In contrast, SGF uses K = 16, which is not a fair comparison to APPNP. There should be an experiment that compares APPNP with SGF under the same K.
Finally, the authors claim the performance of most baseline methods is taken from the literature. However, this is also problematic to me. Note that in the original GCN and GAT papers, the data split is much sparser than the 0.6/0.2/0.2 split proposed by the authors. Also, the Geom-GCN paper does test its model on Chameleon with the 0.6/0.2/0.2 split; why is it stated as not available? Even if we assume all the problems above can be well explained, the improvement of the proposed model seems not statistically significant. For example, on Wisconsin, Cornell, and Texas, although SGF has the highest accuracy on average, the standard deviation is very large; MLP is within one standard deviation. Please report confidence intervals to show that the gain of SGF is indeed statistically significant. On the other hand, SGF is worse than not only SGC but also GCNII by a large margin on Chameleon. If SGF can indeed learn the near-optimal polynomial filter, why is this the case? Lastly, the original Geom-GCN paper also includes the Actor dataset; it would be great if the authors could include this result at least in the Appendix.
Besides these weaknesses, I still find the paper well written. Also, the experiments on filter visualization and structural noise are quite interesting. I believe the paper can be greatly improved if all the concerns above are addressed.
Minor comments:
On page 2, the authors state that the normalized adjacency matrix with added self-loops is Ã = I − D^{-1/2} A D^{-1/2} + c, where c is some diagonal matrix. This is incorrect: when self-loops are added, the degree matrix D has to be changed accordingly. Please see the correct expression in [1], for example.
On page 2, the Rayleigh quotient r(L, x) is defined with two input arguments, but later the authors drop L. While this is clear from the context, the notation is not rigorous.
On page 1 (introduction), the authors mention that the model does not need hyper-parameter tuning. However, in the contribution section on the same page, the authors mention that they use one hyper-parameter setting. According to their experiment section, I think what they mean is the former. It would be great to clarify the ambiguity here.
Reference:
[1] “Predict then Propagate: Graph Neural Networks meet Personalized PageRank,” Klicpera et al., ICLR 2019.
[2] “Adaptive Universal Generalized PageRank Graph Neural Network,” Chien et al., arXiv:2006.07988.
[3] “Adaptive diffusions for scalable learning over graphs,” Berberidis et al., In Mining and Learning with Graphs Workshop @ ACM KDD 2018, pp. 1, 8 2018.
[4] “Generalizing Graph Neural Networks Beyond Homophily,” Zhu et al., NeurIPS 2020. (arXiv:2006.11468)
[5] “Convolutional neural networkson graphs with fast localized spectral filtering,” Defferrard et al., NeurIPS 2016. |
ICLR | Title
Optimizer Amalgamation
Abstract
Selecting an appropriate optimizer for a given problem is of major interest for researchers and practitioners. Many analytical optimizers have been proposed using a variety of theoretical and empirical approaches; however, none can offer a universal advantage over other competitive optimizers. We are thus motivated to study a new problem named Optimizer Amalgamation: how can we best combine a pool of “teacher” optimizers into a single “student” optimizer that can have stronger problem-specific performance? In this paper, we draw inspiration from the field of ”learning to optimize” to use a learnable amalgamation target. First, we define three differentiable amalgamation mechanisms to amalgamate a pool of analytical optimizers by gradient descent. Then, in order to reduce variance of the amalgamation process, we also explore methods to stabilize the amalgamation process by perturbing the amalgamation target. Finally, we present experiments showing the superiority of our amalgamated optimizer compared to its amalgamated components and learning to optimize baselines, and the efficacy of our variance reducing perturbations. Our code and pre-trained models are publicly available at http://github.com/VITA-Group/OptimizerAmalgamation.
1 INTRODUCTION
Gradient-based optimization is ubiquitous in machine learning; accordingly, a cottage industry of gradient-based optimizer design has emerged (Schmidt et al., 2020). These optimizers generally propose algorithms that aim to make the “best” parameter update for a computed gradient (Kingma & Ba, 2017; Liu et al., 2020), with some also modifying the location where the parameters are computed (Zhang et al., 2019b). However, while each gradient-based optimizer claims specific problems where it holds performance advantages, none can claim to be universally superior. Due to the “No Free Lunch” theorem for optimization (Wolpert & Macready, 1997), no optimizer can provide better performance on a class of problems without somehow integrating problem-specific knowledge from that class.
Furthermore, problems such as training neural networks are not homogeneous. In the spatial dimension, different layers or even parameters can have different behavior (Chen et al., 2020b). Also, as evidenced by the popularity of learning rate schedules, neural network optimization also behaves very differently in the temporal dimension as well (Golatkar et al., 2019). This implies that no optimizer can provide the best performance for all parameters on a single problem or best performance over the entire optimization process.
In order to build a stronger optimizer, we propose the new problem of optimizer amalgamation: how can we best combine a pool of multiple “teacher” optimizers, each of which might be good in certain cases, into a single stronger “student” optimizer that integrates their strengths and offsets their weaknesses? Specifically, we wish for our combined optimizer to be adaptive both per-parameter and per-iteration, and exploit problem-specific knowledge to improve performance on a class of problems.
To “amalgamate” an optimizer from a pool of optimizers, we draw inspiration from recent work in Learning to Optimize (Chen et al., 2021a) which provides a natural way to parameterize and train optimization update rules. In Learning to Optimize, optimizers are treated as policies to be learned from data. These “learned” optimizers are typically parameterized by a recurrent neural network
(Andrychowicz et al., 2016; Lv et al., 2017); then, the optimizer is meta-trained to minimize the loss of training problems, or “optimizees”, by gradient descent using truncated back-propagation through time. Yet to our best knowledge, no existing work has leveraged those learnable parameterizations to amalgamate and combine analytical optimizers.
*Work done while the author was at the University of Texas at Austin.
For our proposed formulation of optimizer amalgamation, we treat the learned optimizer as the amalgamation target. Then, we define amalgamation losses which can be used to combine feedback from multiple analytical optimizers into a single amalgamated optimizer, and present several amalgamation schemes. Finally, we explore smoothing methods that can be used during the amalgamation process to reduce the variance of the amalgamated optimizers. Our contributions are outlined below:
• We formulate the new problem of optimizer amalgamation, which we define as finding a way to best amalgamate a pool of multiple analytical optimizers to produce a single stronger optimizer. We present three schemes of optimizer amalgamation: additive amalgamation, min-max amalgamation, and imitation of a trained choice.
• We observe instability during the amalgamation process which leads to amalgamated optimizers having varied performance across multiple replicates. To mitigate this problem, we explore ways to reduce amalgamation variance by improving smoothness of the parameter space. We propose smoothing both by random noise or adversarial noise.
• We present experiments showing extensive and consistent results that validate the effectiveness of our proposal. Specifically, we find that more advanced amalgamation techniques and weight space training noise lead to better average-case performance and reduced variance. We also show that our amalgamation method performs significantly better than previous methods on all problems, with few exceptions.
2 RELATED WORKS
Knowledge Distillation and Amalgamation The prototype of knowledge distillation was first introduced by Bucilua et al. (2006), who used it for model compression in order to train neural networks (“students”) to imitate the output of more complex models (“teachers”). Knowledge distillation was later formalized by Hinton et al. (2015), who added a temperature parameter to soften the teacher predictions and found significant performance gains as a result.
The success of knowledge distillation spurred significant efforts to explain its effectiveness. Notably, Chen et al. (2020c); Yuan et al. (2020) discovered that trained distillation teachers could be replaced by hand-crafted distributions. (Yuan et al., 2020) provided further theoretical and empirical explanation for this behavior by explicitly connecting Knowledge distillation to label smoothing, and (Ma et al.; Chen et al., 2021b) further credited the benefits of knowledge distillation to the improved smoothness of loss surfaces, which has been demonstrated to help adversarial training Cohen et al. (2019); Lecuyer et al. (2019) and the training of sparse neural networks Ma et al..
The potential of knowledge distillation to improve the training of neural networks also spurred diverse works extending knowledge distillation. For example, (Romero et al., 2015; Wang et al., 2018; Shen et al., 2018; 2019b; Ye et al., 2020b) propose using intermediate feature representations as distillation targets instead of just network outputs, and (Tarvainen & Valpola, 2017; Yang et al., 2018; Zhang et al., 2019a) unify student and teacher network training to reduce computational costs. Knowledge distillation has also been extended to distilling multiple teachers, which is termed Knowledge Amalgamation (Shen et al., 2019a; Luo et al., 2019; Ye et al., 2019; 2020a).
Although using output logits from pre-trained networks has been extensively explored in knowledge distillation, we study a new direction of research: distilling optimization knowledge from sophisticated analytical optimizers to produce stronger “learned” optimizers, hence the name “optimizer amalgamation”. Not only is this a new topic never studied in the existing knowledge distillation literature, but it also requires distilling longitudinal output dynamics — not one final output — from multiple teachers.
Learning to optimize Learning to Optimize is a branch of meta learning which proposes to replace hand-crafted analytical optimizers with learned optimizers trained by solving optimization problems, or optimizees. The concept was first introduced by (Andrychowicz et al., 2016), who used a Long Short-Term Memory (LSTM) based model in order to parameterize gradient-based optimizers. This model took the loss gradient as its input and output a learned update rule which was then trained by
gradient descent using truncated backpropagation through time. (Andrychowicz et al., 2016) also established a coordinate-wise design pattern, where the same LSTM weights are applied to each parameter of the optimizee in order to facilitate generalization to models with different architectures.
Building on this architecture, Wichrowska et al. (2017) and Lv et al. (2017) proposed improvements such as hierarchical architectures connecting parameter RNNs together and augmenting the gradient with additional inputs. Many methods have also been proposed to improve the training of learned optimizers such as random scaling and convex augmentation (Lv et al., 2017), curriculum learning and imitation learning (Chen et al., 2020a), and Jacobian regularization (Li et al., 2020). Notably, Chen et al. (2020a) also proposed a method of imitation learning, which can be viewed as a way of distilling a single analytical optimizer into a learned parameterization.
Learning to Optimize has been extended to a variety of other problems such as graph convolutional networks (You et al., 2020), domain generalization (Chen et al., 2020b), noisy label training (Chen et al., 2020c), adversarial training (Jiang et al., 2018; Xiong & Hsieh, 2020), and minimax optimization (Shen et al., 2021). Moving away from gradient-based optimization, black-box optimization has also been explored (Chen et al., 2017; Cao et al., 2019). For a comprehensive survey with benchmarks, readers may refer to Chen et al. (2021a).
Perturbations and Robustness The optimization process is naturally subject to many possible sources of noise, such as stochastic gradient noise (Devolder et al., 2011; Gorbunov et al., 2020; Simsekli et al., 2019), which is often highly non-Gaussian and heavy-tailed in practice; the random initialization and (often non-optimal) hyperparameter configuration; the different local minima reached each time in non-convex optimization (Jain & Kar, 2017); and the limited numerical precision in implementations (De Sa et al., 2017). The seen and unseen optimizees also constitute domain shifts in our case. For the amalgamation process to be consistent and reliable, training needs to incorporate resistance to certain perturbations of the optimization process.
We draw inspiration from deep learning defenses against various random or malicious perturbations. For example, stability training (Zheng et al., 2016) stabilizes deep networks against small input distortions by regularizing the feature divergence caused by adding random Gaussian noise to the inputs. Adversarial robustness measures the ability of a neural network to defend against malicious perturbations of its inputs (Szegedy et al., 2013; Goodfellow et al., 2014). For that purpose, randomized smoothing (Lecuyer et al., 2019; Cohen et al., 2019) and adversarial training (Madry et al., 2017) have been found to increase model robustness with regard to random corruptions or worst-case perturbations, as well as against testing-time domain shifts (Ganin et al., 2016). Recent work (He et al., 2019; Wu et al., 2020) extends input perturbations to weight perturbations that explicitly regularize the flatness of the weight loss landscape, forming a double-perturbation mechanism for both inputs and weights.
Other Approaches The problem of how to better train machine learning models has many diverse approaches outside the Learning to Optimize paradigm that we draw from, and forms the broader AutoML problem Hutter et al. (2018) together with model selection algorithms. Our approach falls under meta-learning, which also includes learned initialization approaches such as MAML (Finn et al., 2017) and Reptile (Nichol et al., 2018). Other optimizer selection and tuning methods include hypergradient descent (Baydin et al., 2017) and bayesian hyperparameter optimization (Snoek et al., 2012). Similar to our knowledge amalgamation approach, algorithm portfolio methods (where many algorithms are available, and some subset is selected) have also been applied to several problem domains such as Linear Programming (Leyton-Brown et al., 2003) and SAT solvers (Xu et al., 2008).
3 OPTIMIZER AMALGAMATION
3.1 MOTIVATION
Optimizer selection and hyperparameter optimization is a difficult task even for experts. With a vast number of optimizers to choose from with varying performance dependent on the specific problem and data (Schmidt et al., 2020), most practitioners choose a reasonable default optimizer such as SGD or Adam and tune the learning rate to be “good enough” following some rule of thumb.
As a consequence of the No Free Lunch theorem (Wolpert & Macready, 1997), the best optimizer to use for each problem, weight tensor within each problem, or each parameter may be different.
In practice, different layers within a given neural network can benefit from differently tuned hyperparameters, for example by meta-tuning learning rates by layer (Chen et al., 2020b).
Accordingly, we wish to train an optimizer which is sufficiently versatile and adaptive at different stages of training and even to each parameter individually. Many methods have been proposed to parameterize optimizers in learnable forms including coordinate-wise LSTMs Andrychowicz et al. (2016); Lv et al. (2017), recurrent neural networks with hierarchical architectures Wichrowska et al. (2017); Metz et al. (2019), and symbolically in terms of predefined blocks Bello et al. (2017). Due to its high expressiveness and relative ease of training, we will use the workhorse of LSTM-based RNNProp architecture described by Lv et al. (2017) as our amalgamation target; more details about this architecture can be found in Appendix C.1.
3.2 THE BASIC DISTILLATION MECHANISM
Knowledge distillation can be viewed as regularizing the training loss with a distillation loss that measures the distance between teacher and student predictions (Hinton et al., 2015). In order to distill a pool of teacher optimizers T = T_1, T_2, ..., T_k into our target policy P by truncated backpropagation (Appendix A: Algorithm 1), we start by defining a training loss and an amalgamation loss. Meta Loss In the context of training optimizers, the training loss is described by the meta loss, which is a function of the optimizee problem loss at each step (Andrychowicz et al., 2016). Suppose we are training a policy P with parameters φ on a problem M : X → R whose output is a loss for each point in the data domain X. During each iteration of truncated backpropagation through time, P is used to compute parameter updates for M to obtain a trajectory of optimizee parameters θ_1, θ_2, ..., θ_N, where, for the i-th data batch x_i and parameters θ_i at step i, θ_{i+1} = θ_i − P(∇_{θ_i} M(x_i, θ_i)).
For some weighting functions f_1, f_2, ..., f_N, the meta loss is L_meta(x, θ; φ) = Σ_{i=1}^{N} f_i(M(x_i, θ_i)); specifically, we will use the scaled log meta loss f_i(m) = log(m) − log(M(x_i, θ_0)), which can be interpreted as the “mean log improvement.” Distillation Loss The distillation loss in knowledge distillation measures the distance between teacher predictions and student predictions. In training optimizers, this corresponds to the distance between the optimization trajectories generated by the teacher and student. Suppose we have optimizee parameter trajectories θ_i = (θ_i^(P), θ_i^(T)) generated by the student policy and the teacher, respectively. Then, our distillation loss L_T for teacher T is given by the l2 log-loss:
L_T(x, θ; φ) = (1/N) Σ_{i=1}^{N} log ‖θ_i^(P) − θ_i^(T)‖_2^2.   (1)
While knowledge distillation generally refers to imitating a model and imitation learning imitating a policy, the optimizer in our case can be regarded as both a model and a policy. As such, our loss function is similar to the imitation loss mechanism used by Chen et al. (2020a), which can be thought of as a special case of optimizer amalgamation where only a single teacher is used.
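To make these two losses concrete, below is a minimal NumPy sketch on a toy quadratic optimizee. Here the “policy” and the teacher are both stand-in SGD rules with different step sizes; in the actual method they would be RNNProp and an analytical optimizer, so this is only an illustration of how the quantities are computed.

```python
import numpy as np

def optimizee_loss(theta):            # M(x, theta); the data batch is omitted for brevity
    return 0.5 * np.sum(theta ** 2)

def roll_out(step_fn, theta0, n_steps):
    thetas = [theta0]
    for _ in range(n_steps):
        grad = thetas[-1]             # gradient of the quadratic optimizee
        thetas.append(thetas[-1] - step_fn(grad))
    return np.stack(thetas[1:])       # theta_1 ... theta_N

theta0 = np.ones(4)
student = roll_out(lambda g: 0.20 * g, theta0, n_steps=10)   # policy P (stand-in)
teacher = roll_out(lambda g: 0.10 * g, theta0, n_steps=10)   # teacher T (stand-in)

# Meta loss: mean log improvement over the initial loss.
meta = np.mean([np.log(optimizee_loss(t)) - np.log(optimizee_loss(theta0)) for t in student])
# Distillation loss, Eq. (1): mean log squared l2 distance between the two trajectories.
distill = np.mean([np.log(np.sum((s - t) ** 2)) for s, t in zip(student, teacher)])
print(meta, distill)
```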
3.3 AMALGAMATION OF MULTIPLE TEACHER OPTIMIZERS: THREE SCHEMES
Now, what if there are multiple teachers that we wish to amalgamate into a single policy? How to best combine different knowledge sources is a non-trivial question. We propose three mechanisms:
(1) Mean Amalgamation: adding distillation loss terms for each of the optimizers with constant equal weights.
(2) Min-max Amalgamation: using a min-max approach to combine loss terms for each of the optimizers, i.e., “the winner (worst) takes all”.
(3) Optimal Choice Amalgamation: First training an intermediate policy to choose the best optimizer to apply at each step, then distilling from that “choice optimizer”.
Mean Amalgamation In order to amalgamate our pool of teachers T = T_1, ..., T_|T|, we generate |T| + 1 trajectories θ_i = (θ_i^(P), θ_i^(T_1), ..., θ_i^(T_|T|)) and add distillation losses for each teacher:
L_mean(x; θ; φ) = L_meta(x; θ^(P); φ) + α (1/N) Σ_{i=1}^{N} (1/|T|) Σ_{j=1}^{|T|} log ‖θ_i^(P) − θ_i^(T_j)‖_2.   (2)
If we view knowledge distillation as a regularizer which provides soft targets during training, mean amalgamation is the logical extension of this by simply adding multiple regularizers to training.
An interesting observation is: when multiple teachers diverge, mean amalgamation loss tends to encourage the optimizer to choose one of the teachers to follow, potentially discarding the influence of all other teachers. This may occur if one teacher is moving faster than another in the optimizee space, or if the teachers diverge in the direction of two different minima. As this choice is a local minimum with respect to the mean log amalgamation loss, the optimizer may “stick” to that teacher, even if it is not the best choice.
Min-Max Amalgamation In order to address this stickiness, we propose a second method: min-max amalgamation, where distillation losses are instead combined by taking the maximum distillation loss among all terms at each time step:
L_min-max(x; θ; φ) = L_meta(x; θ^(P); φ) + α (1/N) Σ_{i=1}^{N} max_{T ∈ T} log ‖θ_i^(P) − θ_i^(T)‖_2.   (3)
This results in a v-shaped loss landscape which encourages the amalgamation target to be between the trajectories generated by the teacher pool and prevents the optimizer from “sticking” to one of the teachers.
One weakness shared by both mean and min-max amalgamation is memory usage. Both require complete training trajectories for each teacher in the pool to be stored in memory, resulting in memory usage proportional to the number of teachers, which limits the number of teachers that we could amalgamate from in one pool.
Min-max amalgamation also does not fully solve the problem of diverging teachers. While minmax amalgamation does ensure that no teacher is ignored, it pushes the amalgamation target to the midpoint between the optimizee weights of the two teachers, which does not necessarily correspond to a good optimizee loss. In fact, when teachers diverge into multiple local minima, any solution which considers all teachers must necessarily push the learned optimizer against the gradient, while any solution which allows the learned optimizer to pick one side must discard a number of teachers.
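As an illustration of how the per-step teacher terms enter Eqs. (2) and (3), the short sketch below combines already-computed log distances under the two schemes; the array of log distances is synthetic and the weighting α is an arbitrary example value.

```python
import numpy as np

rng = np.random.default_rng(0)
# log_dists[i, j] = log || theta_i^(P) - theta_i^(T_j) ||_2, here filled with toy values
log_dists = rng.normal(size=(100, 2))          # N steps x |T| teachers

mean_term = log_dists.mean(axis=1).mean()      # Eq. (2): average over teachers, then over steps
minmax_term = log_dists.max(axis=1).mean()     # Eq. (3): worst teacher per step, then average

alpha = 0.1
# total loss = meta_loss + alpha * mean_term   (or alpha * minmax_term)
print(mean_term, minmax_term)
```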
Optimal Choice Amalgamation To fully unlock the power of knowledge amalgamation, we propose to solve the teacher divergence problem by first training an intermediate amalgamation target. By using only one teacher for a final distillation step, we remove the possibility of multiple teachers diverging while also allowing us to use more teachers without a memory penalty.
For optimizer pool T, we define a choice optimizer C which produces choices c_1, c_2, ..., c_N of which optimizer in the pool to apply at each time step, producing updates θ_{i+1}^(C) = θ_i^(C) − T_{c_i}(g_i). The objective of the choice optimizer is to minimize the meta loss L_meta(C; x) with respect to these choices c_{1:N}. We parameterize the choice function C as a small two-layer LSTM, and train it by gradient descent. The LSTM takes the outputs of each optimizer in the pool, the layer type, and the time step as inputs; more details are provided in Appendix C.1. To make it easier to train C by truncated back-propagation through time, we relax the choices c_{1:N} to be soft choices c_i ∈ R^{|T|} with c_i ≥ 0 and ‖c_i‖_1 = 1, resulting in the policy θ_{i+1}^(C) = θ_i^(C) − Σ_{j=1}^{|T|} c_i^(j) T_j(g_i). Now, we use C as a teacher to produce our final loss:
L_choice = L_meta(φ; x) + α (1/N) Σ_{i=1}^{N} log ‖θ_i^(P) − θ_i^(C)‖_2.   (4)
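The relaxed choice update can be sketched as a softmax weighting over the pool's proposed updates. In the sketch below the logits are free parameters and the pool members are toy update rules; in the actual method the logits come from the small choice LSTM and the pool contains the analytical optimizers.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def choice_update(theta, grad, pool, logits):
    """pool: list of callables mapping a gradient to an update step;
    logits: unnormalized choice scores (stand-in for the choice LSTM output)."""
    c = softmax(logits)                                   # c >= 0, ||c||_1 = 1
    update = sum(c_j * T_j(grad) for c_j, T_j in zip(c, pool))
    return theta - update

pool = [lambda g: 0.1 * g, lambda g: 0.01 * np.sign(g)]   # toy stand-ins for two pool members
theta = np.ones(3)
theta = choice_update(theta, grad=theta.copy(), pool=pool, logits=np.array([0.3, -0.2]))
print(theta)
```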
4 STABILITY-AWARE OPTIMIZER AMALGAMATION
4.1 MOTIVATION
Modern optimization, even analytical, is subject to various forms of noise. For example, stochastic first-order methods are accompanied by gradient noise (Devolder et al., 2011; Gorbunov et al., 2020; Simsekli et al., 2019), which is often highly non-Gaussian and heavy-tailed in practice. Any non-convex optimization could reach a different local minimum when solved multiple times (Jain &
Kar, 2017). When training deep neural networks, thousands or even millions of optimization steps are typically run, and the final outcome can be impacted by the random initialization, (often non-optimal) hyperparameter configuration, and even hardware precision (De Sa et al., 2017). Hence, it is highly desirable for optimizers to be stable: across different problem instances, between multiple training runs for the same problem, and throughout each training run (Lv et al., 2017).
Meta-training optimizers tends to be unstable. During the amalgamation process, we encounter significant variance where identically trained replicates achieve varying performance on our evaluation problems; this mirrors problems with meta-stability encountered by Metz et al. (2019). While amalgamation variance can be mitigated in small-scale experiments by amalgamating many times and using the best one, that variance represents a significant obstacle to large-scale training (i.e. on many and larger problems) and deployment of amalgamated optimizers. Thus, besides the aforementioned optimization stability issues, we also need to consider meta-stability, denoting the relative performance of optimizers across meta-training replicates.
In order to provide additional stability to the amalgamation process, we turn to adding noise during training, which is known to improve smoothness (Chen & Hsieh, 2020; Lecuyer et al., 2019; Cohen et al., 2019) and in turn improve stability (Miyato et al., 2018). Note that one can inject either random noise or adversarial perturbations onto either the input or the weight of the learnable optimizer. While perturbing inputs is more common, recent work (Wu et al., 2020) identified that a flatter weight loss landscape (loss change with respect to weight) leads to smaller robust generalization gap in adversarial training, thanks to its more “global” worst-case view.
We also discover in our experiments that perturbing inputs would make the meta-training hard to converge, presumably because the inputs to optimizers (gradients, etc.) already contain large amounts of batch noise and do not tolerate further corruption. We hence focus on perturbing optimizer weights for smoothness and stability.
4.2 WEIGHT SPACE PERTURBATION FOR SMOOTHNESS
Weight space smoothing produces a noised estimate of the loss L̃ by adding noise to the optimizer parameters φ. By replacing the loss L(φ,x) with a noisy loss L̃ = L(φ̃,x), we encourage the optimizer to be robust to perturbations of its weights, increasing the meta-stability. We explore two mechanisms to increase weight space smoothness during training, by adding (1) a random perturbation to the weights as a gradient estimator, and (2) an adversarial perturbation in the form of a projected gradient descent attack (PGD).
Though new to our application, these two mechanisms have been adopted for other problems where smoothness is important such as neural architecture search (Chen & Hsieh, 2020) and adversarial robustness (Lecuyer et al., 2019; Cohen et al., 2019).
Random Gaussian Perturbation In the first type of noise, we add Gaussian noise with variance σ^2 to each parameter of the optimizer at each iteration, φ̃ = φ + N(0, σ^2 I). Since optimizer weights tend to vary largely in magnitude, especially between different weight tensors, we modify this Gaussian noise to be adaptive to the l2 norm of each weight tensor φ^(w). For tensor size |φ^(w)|, the added noise is given by
φ̃^(w) = φ^(w) + N(0, σ^2 (‖φ^(w)‖_2^2 / |φ^(w)|) I).   (5)
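A minimal sketch of the per-tensor adaptive Gaussian perturbation of Eq. (5) is given below; the parameter dictionary and tensor names are illustrative, not the actual optimizer variables.

```python
import numpy as np

def perturb_gaussian(weights, sigma, rng=np.random.default_rng(0)):
    """weights: dict of weight tensors phi^(w); returns a perturbed copy phi-tilde."""
    noised = {}
    for name, w in weights.items():
        # standard deviation proportional to ||phi^(w)||_2 / sqrt(|phi^(w)|)
        scale = sigma * np.linalg.norm(w) / np.sqrt(w.size)
        noised[name] = w + rng.normal(0.0, scale, size=w.shape)
    return noised

params = {"lstm/kernel": np.ones((4, 8)), "out/bias": np.zeros(3)}
params_tilde = perturb_gaussian(params, sigma=1e-3)
```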
Projected Gradient Descent For the second type of noise, we use adversarial noise obtained by projected gradient descent (Appendix A, Algorithm 2). For A adversarial steps, the noised parameters are given by φ̃ = φ + ψ_A, where ψ_0 = 0 and ψ_{i+1} = ψ_i + η clip_ε(∇_{ψ_i} L) for optimizer loss L.
As with random Gaussian perturbations, we also modify the adversarial perturbation to be adaptive, with magnitude proportional to the l2 norm of each weight tensor φ. Here, the adversarial attack step for weight tensor w is instead given by
ψ_{i+1}^(w) = ψ_i^(w) + ε ‖φ^(w)‖_2 ∇_{ψ_i^(w)} L / ‖∇_{ψ_i^(w)} L‖_2.   (6)
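A sketch of one such adversarial step is shown below. The gradients here are toy inputs; in practice they come from back-propagating the amalgamation loss through the unrolled optimization, and the small constant added to the norm is an assumption to avoid division by zero.

```python
import numpy as np

def adversarial_step(psi, phi, grads, eps):
    """One per-tensor normalized attack step of Eq. (6)."""
    new_psi = {}
    for name in phi:
        g = grads[name]
        gnorm = np.linalg.norm(g) + 1e-12
        new_psi[name] = psi[name] + eps * np.linalg.norm(phi[name]) * g / gnorm
    return new_psi

phi = {"w": np.ones((2, 2))}
psi = {"w": np.zeros((2, 2))}
grads = {"w": np.array([[0.5, -1.0], [0.0, 2.0]])}
psi = adversarial_step(psi, phi, grads, eps=1e-4)
print(psi["w"])
```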
5 EXPERIMENTS
Optimizee Details All optimizers were amalgamated using a 2-layer convolutional neural network (CNN) on the MNIST (LeCun & Cortes, 2010) dataset (shortened as “Train”) using a batch size of 128. During evaluation, we test the generalization of the amalgamated optimizer to other problems: (1) Different datasets: FMNIST (Xiao et al., 2017) and SVHN (Netzer et al., 2011). We also run experiments on CIFAR (Krizhevsky et al., 2009); since the Train network is too small to obtain reasonable performance on CIFAR, we substitute it for the Wider architecture and a 28-layer ResNet (He et al., 2015), labelled “CIFAR” and “ResNet” respectively. (2) Different architectures: a 2-layer MLP (MLP), a CNN with twice the number of units in each layer (Wider), and a deeper CNN (Deeper) with 5 convolutional layers. (3) Training settings: training with a smaller batch size of 32 (Small Batch). We also try a new setting of training with differential privacy (Abadi et al., 2016) (MNIST-DP). Appendix B provides full architecture and training specifications.
Optimizer Pool We use two different optimizer pools in our experiment: “small,” which consists of Adam and RMSProp, and “large,” which also contains SGD, Momentum, AddSign, and PowerSign. Each optimizer has a learning rate tuned by grid search over a grid of {5 × 10−4, 1 × 10−3, 2 × 10−3, . . . 1}. The selection criteria is the best validation loss after 5 epochs for the Train network on MNIST, which matches the meta-training settings of the amalgamated optimizer. Appendix C.2 describes the optimizers used and other hyperparameters.
Baselines First, we compare our amalgamated optimizer against our analytical optimizer teachers, which are combined into an “oracle optimizer”: the optimizer in our pool of teachers with the best validation loss. We also compare against the optimal choice optimizer used in amalgamation, which functions like a per-iteration trained approximation of the oracle optimizer. Then, we evaluate previous learned optimizer methods: the original “Learning to Learn by Gradient Descent by Gradient Descent” optimizer (Andrychowicz et al., 2016), which we refer to as “Original”; RNNProp (Lv et al., 2017); a hierarchical architecture presented by “Learned Optimizers that Scale and Generalize” (Wichrowska et al., 2017), which we refer to as “Scale”; and the best setup from Chen et al. (2020a), which we shorten as “Stronger Baselines.”
Training and Evaluation Details The RNNProp amalgamation target was trained using truncated backpropagation though time with a constant truncation length of 100 steps and total unroll of up to 1000 steps and meta-optimized by Adam with a learning rate of 1 × 10−3. For our training process, we also apply random scaling (Lv et al., 2017) and curriculum learning (Chen et al., 2020a); more details about amalgamation training are provided in Appendix C.3. Amalgamation takes up to 6.35 hours for optimal choice amalgamation using the large pool and up to 10.53 hours when using adversarial perturbations; a full report of training times is provided in Appendix C.4.
For each optimizer amalgamation configuration tested, we independently trained 8 replicate optimizers. Then, each replicate was evaluated 10 times on each evaluation problem, trained to a depth of 25 epochs each time. Finally, we measure the stability of amalgamated optimizers by defining three notions of stability for meta-trained optimizers: (1) Optimization stability: the stability of the optimizee during the optimization process. Viewing stability of the validation loss as a proxy for model stability with respect to the true data distribution, we measure the epoch-to-epoch variance of the validation loss after subtracting a smoothed validation loss curve (using a Gaussian filter). (2) Evaluation stability: the variance of optimizer performance across multiple evaluations. We find that the evaluation stability is roughly the same for all optimizers (Appendix E.1). (3) Meta-stability: the stability of the amalgamation process, i.e. the variance across amalgamation replicates after correcting for evaluation variance. Meta-stability and evaluation stability are jointly estimated using a linear mixed effects model. Stability is reported as a standard deviation. More details are in Appendix D.
5.1 OPTIMIZER AMALGAMATION
Amalgamation Methods Figure 1 compares the mean performance of the three amalgamation methods with the small pool and Choice amalgamation with the large pool. Mean and min-max amalgamation were not performed on the large pool due to memory constraints. The amalgamated optimizers using optimal choice amalgamation perform better than Mean and Min-Max amalgamation. The size of the optimizer pool does not appear to have a significant effect in Optimal Choice amalgamation, with small pool and large pool amalgamated optimizers obtaining similar results.
[Figure: scatter of optimization stability versus best log validation loss for the amalgamated optimizer and the analytical optimizers Adam, RMSProp, SGD, Momentum, AddSign, and PowerSign.]
pool (Figure 5). Comparing analytical optimizers, we observe a general inverse relationship between optimization performance and optimization stability: in order to achieve better optimization, an optimizer typically sacrifices some optimization stability in order to move faster through the optimizee weight space. By integrating problem-specific knowledge, the amalgamated optimizer is able to combine the best optimization performance and optimization stability characteristics (Figure 4).
5.2 STABILITY-AWARE OPTIMIZER AMALGAMATION
Input Perturbation While we also tested perturbing the inputs of the optimizer during amalgamation, we were unable to improve stability. These experiments are included in Appendix E.4.
Random Perturbation Min-max amalgamation was trained on the small optimizer pool with random perturbation relative magnitudes of ε = {5 × 10−4, 10−3, 2 × 10−3, 5 × 10−3, 10−2}. ε = 10−1 was also tested, but all replicates tested diverged and are not reported here.
Comparing perturbed amalgamation against the nonperturbed baseline (ε = 0), we observe that perturbations increase meta-stability up to about ε = 10−3 (Figure 6). For larger perturbation magnitudes, meta-stability begins to decrease as the perturbation magnitude overwhelms the weight “signal,” eventually causing the training process to completely collapse for larger perturbation values. While the stability with random perturbation ε = 10−2 is better than 10−3, this is likely due to random chance, since we use a small sample size of 8 replicates.
Adversarial Perturbation Since adversarial perturbation is more computationally expensive than random perturbations, min-max amalgamation was tested on a coarser grid of relative magnitudes ε = {10−4, 10−3, 10−2}, and to an adversarial attack depth of 1 step. These results are also reported in Figure 6, with ε = 10−2 omitted since all replicates diverged during training.
From our results, we observe that adversarial perturbations are about as effective as random perturbations. We also observe that the maximum perturbation magnitude that the amalgamation process can tolerate is much smaller for adversarial perturbations compared to random perturbations, likely because adversarial perturbations are much “stronger.” Due to the significantly larger training cost of adversarial perturbations, we recommend random perturbations for future work.
Application to Other Methods Random and Adversarial perturbations can be applied to any gradient-based optimizer meta-training method, including all of our baselines. An experiment applying Gaussian perturbations to the RNNProp baseline can be found in Appendix E.5.
6 CONCLUSION
We define the problem of optimizer amalgamation, which we hope can inspire better and faster optimizers for researchers and practitioners. In this paper, we provide a procedure for optimizer amalgamation, including differentiable optimizer amalgamation mechanisms and amalgamation stability techniques. We then evaluate our amalgamated optimizers on different datasets, architectures, and training settings to benchmark their strengths and weaknesses. In the future, we hope to improve the generalizability of amalgamated optimizers to even more distant problems.
A ALGORITHMS
In this section, we provide a detailed description of the key algorithms used in our paper.
Truncated Back-propagation: Algorithm 1 shows truncated back-propagation applied to optimizer amalgamation. For an unrolling length N , N data points (batches, in the case of mini-batch SGD) are sampled, which are split into N/t truncations of length t. Note that this requires N to be divisible by t; in our implementation, we require t and N/t to be specified as integers. For each truncation, the optimizee and teachers are trained for t iterations, and meta-gradients are computed over that truncation and applied.
Adversarial Weight Perturbation: Algorithm 2 shows adversarial perturbations applied to optimizer amalgamation. For each adversarial attack step, meta-gradients are taken with respect to the parameters, and are normalized for each tensor with respect to its tensor norm before being applied as an adversarial perturbation.
Algorithm 1: Distillation by Truncated Back-propagation
Inputs: amalgamation loss L_a; policy P with parameters φ; teacher policies T = T_1, ..., T_|T|; optimizee M, X, θ_0; unrolling and truncation lengths N, t
Outputs: updated policy parameters φ
Sample N data points x_1, ..., x_N from X
θ_0^(P) = θ_0^(T_1) = ... = θ_0^(T_|T|) = θ_0
for i = 1, 2, ..., N/t do
    for j = 1, 2, ..., t do
        n = (i − 1)t + j
        Update optimizee for P: θ_{n+1}^(P) ← θ_n^(P) − P[∇M(x_n, θ_n^(P))]
        Update optimizees for each teacher: for k = 1, ..., |T| do
            θ_{n+1}^(T_k) ← θ_n^(T_k) − T_k[∇M(x_n, θ_n^(T_k))]
        end
    end
    Compute distillation loss: L_i ← L_a(x_[(i−1)t:it], θ_[(i−1)t:it]; φ)
    Update φ using ∇L_i
end
Algorithm 2: Adversarial Weight Perturbation for Truncated Back-propagation
Inputs: truncated back-propagation parameters L_a, P, φ, T, M, X, θ_0, N, t; adversarial attack steps A
Outputs: updated policy parameters φ
Sample N data points and initialize optimizee parameters
for i = 1, 2, ..., N/t do
    ψ_0 ← 0
    for a = 1, 2, ..., A do
        Compute trajectories θ_[(i−1)t:it] for P and T
        L_i^(a) ← L_a(x_[(i−1)t:it], θ_[(i−1)t:it]; φ + ψ_{a−1})
        for each weight tensor w do
            γ ← ∇_{ψ_{a−1}^(w)} L_i^(a) / ‖∇_{ψ_{a−1}^(w)} L_i^(a)‖_2
            ψ_a^(w) ← ψ_{a−1}^(w) + ε‖φ^(w)‖_2 γ
        end
    end
    L_i ← L_a(x_[(i−1)t:it], θ_[(i−1)t:it]; φ + ψ_A)
    Update φ using ∇L_i
end
B OPTIMIZEE DETAILS
Table 1 shows a summary of the training problems used. While all training is performed on a 2-layer CNN on MNIST, we evaluated our optimizer on 4 different datasets described in (B.1) and 5 different architectures (described in B.2). We also experiment with different training settings, which are described in B.3.
B.1 DATASETS
All datasets used are classification datasets, with cross entropy used as the training loss. The MNIST dataset (LeCun & Cortes, 2010) is used during training; the other datasets, from most to least similar, are:
Sample images from these datasets are shown in Figure 7. All datasets were accessed using TensorFlow Datasets and have a CC-BY 4.0 license.
B.2 ARCHITECTURES
The Train convolutional network (Table 2a) has one convolution layer with 16 3x3 filters and one convolution layer with 32 5x5 filters. Each convolution layer uses ReLU activation, has stride 1x1, and is followed by a max pooling layer with size 2x2. Finally, a fully connected softmax layer is used at the output.
The four architectures evaluated are:
1. MLP: a 2-layer MLP with 20 hidden units and sigmoid activation
2. Wider: a modified version of Train with double width on each layer (Table 2b)
3. Deeper: a deeper network with 5 convolutional layers instead of 2 (Table 2c), again using ReLU activation and 1x1 stride
4. ResNet: a 28-layer ResNet (He et al., 2015) (Table 2d)
B.3 OPTIMIZEE TRAINING
During training, a batch size of 128 is used except for the Small Batch evaluation, which has a batch size of 32. During training and evaluation, datasets are reshuffled each iteration.
To match the warmup process used in meta-training, warmup is also applied during evaluation. The SGD learning rate is fixed at 0.01, which is a very conservative learning rate which does not optimize quickly, but is largely guaranteed to avoid divergent behavior.
For differentially private training, we implement differentially private SGD (Abadi et al., 2016). In differentially private SGD, gradients are first clipped to a fixed l2 norm ε on a per-sample basis; then, gaussian noise with standard deviation σε where σ > 1 is added to the aggregated batch gradients. In our experiments, we use clipping norm ε = 1.0 and noise ratio σ = 1.1. Both MNIST and KMNIST are used as training sets in order to simulate transfer from a non-private dataset (MNIST) used for meta-training to a private dataset (KMNIST).
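A minimal sketch of one such differentially private SGD step is given below; the learning rate and the batch-averaging convention are illustrative assumptions, while the clipping norm and noise ratio follow the values stated above.

```python
import numpy as np

def dp_sgd_step(theta, per_sample_grads, lr=0.01, clip=1.0, noise_ratio=1.1,
                rng=np.random.default_rng(0)):
    clipped = []
    for g in per_sample_grads:                           # one gradient per example in the batch
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip / (norm + 1e-12)))   # clip to l2 norm <= clip
    total = np.sum(clipped, axis=0)
    total += rng.normal(0.0, noise_ratio * clip, size=total.shape)   # Gaussian noise, std = sigma * eps
    return theta - lr * total / len(per_sample_grads)

theta = np.zeros(5)
grads = [np.random.randn(5) for _ in range(8)]
theta = dp_sgd_step(theta, grads)
```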
C AMALGAMATION DETAILS
C.1 OPTIMIZER ARCHITECTURES
In this section, we provide the exact architecture specifications and hyperparameters of our amalgamated optimizer along with other training details and training time. Our implementation is open source, and can be found here: http://github.com/VITA-Group/OptimizerAmalgamation.
C.1.1 RNNPROP ARCHITECTURE
For our amalgamation target, we use RNNProp architecture described by Lv et al. (2017). For each parameter on each time step, this architecture takes as inputs RMSProp update g/ √ v̂ and Adam update m̂/ √ v̂ using momentum (m̂) decay parameter β1 = 0.9 and variance (v̂) decay parameter β2 = 0.999, matching the values used for our analytical optimizers. These values pass through a 2-layer LSTM with tanh activation, sigmoid recurrent activation, and 20 units per layer. The output of this 2-layer LSTM is passed through a final fully connected layer with tanh activation to produce a scalar final update for each parameter.
C.1.2 CHOICE NETWORK ARCHITECTURE
Our Choice network for Optimal Choice Amalgamation is a modified RNNProp architecture. The update steps for each analytical optimizer are given as inputs to the same 2-layer LSTM used in RNNProp. Additionally, the current time step and tensor number of dimensions are provided, with the number of dimensions being encoded as a one-hot vector.
Then, instead of directly using the output of a fully connected layer as the update, LSTM output passes through a fully connected layer with one output per optimizer in the pool. This fully connected layer has a softmax activation, and is used as weights to combine the analytical optimizer updates.
C.2 OPTIMIZER POOL
We consider six optimizers as teachers in this paper: Adam, RMSProp, SGD, Momentum, AddSign, and PowerSign. These optimizers are summarized in table 3.
Table 3: Optimizer pool update rules; all updates include an additional learning rate hyperparameter.
Optimizer    Update Rule
SGD          g
Momentum     m̂
RMSProp      g/√v̂
Adam         m̂/√v̂
AddSign      g(1 + sign(m̂)sign(g))
PowerSign    g exp(sign(m̂)sign(g))
Joining the popular hand-crafted optimizers Adam, RMSProp, SGD, and Momentum, AddSign and PowerSign are two optimizers discovered by neural optimizer search (Bello et al., 2017). These two optimizers share the design principle that update steps should be larger when the momentum and gradient are in agreement:
AddSign ∝ g(1 + sign(m̂)sign(g)) PowerSign ∝ g exp(sign(m̂)sign(g)). (7)
Here, g represents the gradient, m̂ an exponential moving average of the gradient. In order to use AddSign and Powersign as teachers for gradient-based distillation, we modify them to be differentiable by replacing the sign function with a scaled tanh with magnitudes normalized by √ v̂:
sign(m̂)sign(g) ≈ tanh(m̂/ √ v̂)tanh(g/ √ v̂) (8)
By dividing by √ v̂, we provide a consistent magnitude to the tanh function so that sign agreement mechanism is not affected by overall gradient magnitudes.
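The relaxation of Eq. (8) can be sketched as follows; the learning rate and the small constant added to √v̂ are illustrative assumptions.

```python
import numpy as np

def soft_sign_agreement(m_hat, g, v_hat, eps=1e-8):
    """Differentiable surrogate for sign(m_hat) * sign(g), normalized by sqrt(v_hat)."""
    s = np.sqrt(v_hat) + eps
    return np.tanh(m_hat / s) * np.tanh(g / s)

def addsign(g, m_hat, v_hat, lr=1e-3):
    return lr * g * (1.0 + soft_sign_agreement(m_hat, g, v_hat))

def powersign(g, m_hat, v_hat, lr=1e-3):
    return lr * g * np.exp(soft_sign_agreement(m_hat, g, v_hat))

g = np.array([0.2, -0.5]); m_hat = np.array([0.1, -0.4]); v_hat = np.array([0.05, 0.3])
print(addsign(g, m_hat, v_hat), powersign(g, m_hat, v_hat))
```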
For all optimizers, the momentum decay parameter is set to β1 = 0.9, the variance decay parameter is set to β2 = 0.999, and the learning rate multiplier is found by grid search on the Train optimizee over a grid of {5× 10−4, 1× 10−3, 2× 10−3, . . . 1}.
C.3 ADDITIONAL TRAINING DETAILS
During amalgamation, we apply a number of techniques from previous Learning to Optimize literature in order to boost training:
• Curriculum Learning: We apply curriculum learning (Chen et al., 2020a) to progressively increase the unrolling steps across a maximum of 4 stages with length 100, 200, 500, and 1000. During curriculum learning, checkpoints are saved and validated every 40 “meta-epochs,” which refers to a single optimizee trajectory trained with truncated back-propagation.
• Random Scaling: We apply random scaling (Lv et al., 2017) to reduce overfitting to the gradient magnitudes of the training problem. This random scaling is only applied to the amalgamation target; amalgamation teachers receive “clean” (unscaled) gradients.
• Warmup: Instead of initializing each training optimizee with random weights, we first apply 100 steps of SGD optimization as a “warmup” to avoid the turbulent initial phase of optimizing neural networks. A SGD learning rate of 0.01 is used during this period, and was chosen to be very conservative on all optimizees tested.
These techniques are also applied to all of our baselines, except that Random Scaling is only applied to baselines using the RNNProp architecture, since we find that it harms the performance of other optimizer architectures.
C.4 TRAINING COST
Table 4 summarizes the training costs for each amalgamation method and baseline. For optimal choice amalgamation, this includes both training the optimal choice optimizer and amalgamation training. All values are reported as the mean across 8 replicates.
All experiments were run on single nodes with 4x Nvidia 1080ti GPUs, providing us with a metabatch size of 4 simultaneous optimizations. In order to replicate our results, GPUs with at least 11GB of memory are required, though less memory can be used if the truncation length for truncated back-propagation is reduced.
D STABILITY DEFINITIONS
In this section, we provide the mathematical definition and measurement details of meta-stability, evaluation stability, and optimization stability.
D.1 META-STABILITY AND EVALUATION STABILITY
In order to quantify meta-stability and evaluation stability, we first summarize the performance of each evaluation using the best validation loss obtained and the training loss of the last epoch. Then, we model the best validation loss Y_ij^(val) and final training loss Y_ij^(train) for replicate i and evaluation j with the linear mixed effect model
Y_ij = µ + α_i + ε_ij,   (9)
where µ is the true mean, α_i are IID random variables representing the meta-stability of the amalgamated optimizer, and ε_ij are IID random variables representing the evaluation stability of each replicate. The meta-stability and evaluation stability are then quantified by the standard deviations σ_α and σ_ε.
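A minimal sketch of fitting this random-intercept model is shown below using the statsmodels library; the paper does not state which package was used, and the synthetic data frame here only illustrates the layout of replicates and evaluations.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
replicates, evals = 8, 10
alpha = rng.normal(0.0, 0.05, size=replicates)                 # meta-stability component
rows = [{"replicate": i, "y": -3.3 + alpha[i] + rng.normal(0.0, 0.02)}
        for i in range(replicates) for _ in range(evals)]
df = pd.DataFrame(rows)

model = smf.mixedlm("y ~ 1", df, groups=df["replicate"]).fit()
sigma_alpha = np.sqrt(model.cov_re.iloc[0, 0])   # between-replicate std (meta-stability)
sigma_eps = np.sqrt(model.scale)                 # within-replicate std (evaluation stability)
print(sigma_alpha, sigma_eps)
```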
D.2 OPTIMIZATION STABILITY
To measure optimization stability, we model the validation loss Lij(t) at epoch t for replicate i and evaluation j as
L_ij(t) = β_ij(t) + η_ij^(t)   (10)
for a smooth function β_ij(t), which represents the behavior of the evaluation, and a random variable η_ij^(t), which captures the optimization stability; we assume that η_ij^(t) is IID with respect to t.
In order to estimate σ_η, we first estimate β_ij(t) by applying a Gaussian filter with standard deviation σ = 2 (epochs) and filter edge mode “nearest”, and σ̂_η^(ij) is calculated accordingly. Finally, σ̂_η^(ij) is treated as a summary statistic for each evaluation, and the mixed effect model described previously (Equation 9) is fit to obtain a final confidence interval for the mean optimization stability.
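The per-evaluation statistic can be sketched in a few lines with SciPy; the synthetic validation curve below is only for illustration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def optimization_stability(val_loss):
    """Std of the residual after smoothing with a Gaussian filter (sigma = 2 epochs, 'nearest' edges)."""
    smooth = gaussian_filter1d(val_loss, sigma=2, mode="nearest")
    return np.std(val_loss - smooth)

epochs = np.arange(25)
val_loss = np.exp(-epochs / 10.0) + 0.02 * np.random.default_rng(0).normal(size=epochs.size)
print(optimization_stability(val_loss))
```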
E ADDITIONAL RESULTS
E.1 EVALUATION STABILITY
Table 5 summarizes the evaluation stability of analytical and amalgamated optimizers. All optimizers obtain similar evaluation stability, except for cases where an optimizer cannot reliably train the optimizee at all such as Momentum, AddSign, and PowerSign on the Deeper CNN. In these cases, the optimizer consistently learns a constant or random classifier, which results in very low variance and high “stability.”
E.2 LARGER EVALUATIONS
In order to explore the limits of our optimizer, we evaluated the amalgamated optimizer with a 52-layer ResNet (763882 parameters — 40x the train network size), and the same 52-layer ResNet on CIFAR-100 instead of CIFAR-10. These results are compared to CIFAR-10 on a 2-layer network and CIFAR-10 on a 28-layer ResNet using Adam as a single baseline (Figure 8).
While our amalgamated optimizer has significant performance advantages on the shallow CIFAR-10 network in our original evaluations and achieves performance parity in the 28-layer ResNet, the amalgamated optimizer can no longer perform as well as the oracle once we reach 52 layers and change to CIFAR-100.
E.3 ADDITIONAL PLOTS
In this section, we include plots providing alternate versions of Figures 2, 3, and 5 in the main text which had some outliers cropped out in order to improve readability.
E.4 INPUT PERTURBATION
When applying input perturbations, we perturb the inputs to the optimizer, i.e. the optimizee gradients, instead of the optimizer weights:
θ_{i+1} = θ_i − P(∇_{θ_i} M(x_i, θ_i) + N(0, σ^2 I)).   (11)
We tested magnitudes σ = 10−1 and σ = 10−2 on a smaller experiment size of 6 replicates using Choice amalgamation on the small pool as a baseline; these results are given in Table 6. Many experiment variants remain relating to input noise such as adding noise proportional to parameter norm or gradient norm or trying smaller magnitudes of noise, and this may be a potential area of future study. However, we believe that input noise is generally not helpful to optimizer amalgamation, and did not study it further.
E.5 BASELINES WITH RANDOM PERTURBATION
Our perturbation methods can be applied to any gradient-based optimizer meta-training method, including all of our baselines. To demonstrate this application, we trained 8 RNNProp replicates with Gaussian perturbations with magnitude 1× 10−4; all other settings were identical to the RNNProp baseline. With perturbations, the RNNProp baseline is significantly improved, though not enough to match the performance of our amalgamation method. | 1. What is the main contribution of the paper regarding optimizer distillation?
2. What are the strengths of the proposed approach, particularly in its novelty and technique?
3. What are the weaknesses of the paper, especially regarding its comparisons and limitations?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Review | Summary Of The Paper
This work presents an approach to distill several "teacher" optimizers into a "student" optimizer through learning to optimize (L2O). The authors compare three different approaches for such distillation, propose random and adversarial perturbations to help with stability, and compare against the analytical optimizers being distilled as well as L2O baselines, showing superior performance.
Review
Strength:
The idea of distilling from multiple "teacher" optimizers is novel, although distilling from one "teacher" optimizer has been proposed before as imitation learning in L2O.
The three mechanisms proposed to distill from several "teachers" are interesting, especially the optimal choice amalgamation, which could shed light on further improvements for L2O.
The use of random and adversarial perturbations on weights to improve robustness is a nice technique that helps with stability, which is a major issue in L2O.
Weakness:
The comparison of the amalgamated optimizer with the analytical optimizers is interesting, but the advantage of the analytical optimizers is that they tend to work reasonably well for a wide range of problem or model sizes. Since the amalgamated optimizer is still learned on specific settings, I wonder how far it could generalize and when such generalization would break. For example, if the model size or the number of training steps were 10X or 100X larger, would the learned optimizer still perform well? These could shed some light on the limitations of current approaches and inspire future work.
The "oracle optimizer" is using the best validation loss among the analytical optimizer's in the pool, but a stronger oracle might be the optimal choice optimizer that uses the trained LSTM to pick which optimizer to use at each step. It would help to add this stronger baseline into the analysis and comparison in the experiments since it can help understand whether the bottleneck is in the optimal choice LSTM or the distillation process.
It might help to add more details about the Optimal Choice Amalgamation to aid reproducibility. For example, for "the LSTM takes the outputs of each optimizer in the pool, the layer type, and time step as inputs," I wish there were more descriptions (maybe in the supplementary material) of details such as how the inputs are encoded and how often the LSTM is updated. It would also be very helpful if the code were open sourced.
It would be helpful to add some experiments to justify that it is necessary to distill from multiple optimizers instead of just one, since the "small" setting uses only two optimizers anyway, and the gain of the proposed approach over the baseline might come more from better training stability due to perturbation.
ICLR | Title
Optimizer Amalgamation
Abstract
Selecting an appropriate optimizer for a given problem is of major interest for researchers and practitioners. Many analytical optimizers have been proposed using a variety of theoretical and empirical approaches; however, none can offer a universal advantage over other competitive optimizers. We are thus motivated to study a new problem named Optimizer Amalgamation: how can we best combine a pool of “teacher” optimizers into a single “student” optimizer that can have stronger problem-specific performance? In this paper, we draw inspiration from the field of “learning to optimize” to use a learnable amalgamation target. First, we define three differentiable amalgamation mechanisms to amalgamate a pool of analytical optimizers by gradient descent. Then, in order to reduce the variance of the amalgamation process, we also explore methods to stabilize it by perturbing the amalgamation target. Finally, we present experiments showing the superiority of our amalgamated optimizer compared to its amalgamated components and learning to optimize baselines, and the efficacy of our variance-reducing perturbations. Our code and pre-trained models are publicly available at http://github.com/VITA-Group/OptimizerAmalgamation.
1 INTRODUCTION
Gradient-based optimization is ubiquitous in machine learning; accordingly, a cottage industry of gradient-based optimizer design has emerged (Schmidt et al., 2020). These optimizers generally propose algorithms that aim to make the “best” parameter update for a computed gradient (Kingma & Ba, 2017; Liu et al., 2020), with some also modifying the location where the parameters are computed (Zhang et al., 2019b). However, while each gradient-based optimizer claims specific problems where it holds performance advantages, none can claim to be universally superior. Due to the “No Free Lunch” theorem for optimization (Wolpert & Macready, 1997), no optimizer can provide better performance on a class of problems without somehow integrating problem-specific knowledge from that class.
Furthermore, problems such as training neural networks are not homogeneous. In the spatial dimension, different layers or even parameters can have different behavior (Chen et al., 2020b). Also, as evidenced by the popularity of learning rate schedules, neural network optimization also behaves very differently in the temporal dimension as well (Golatkar et al., 2019). This implies that no optimizer can provide the best performance for all parameters on a single problem or best performance over the entire optimization process.
In order to build a stronger optimizer, we propose the new problem of optimizer amalgamation: how can we best combine a pool of multiple “teacher” optimizers, each of which might be good in certain cases, into a single stronger “student” optimizer that integrates their strengths and offsets their weaknesses? Specifically, we wish for our combined optimizer to be adaptive both per-parameter and per-iteration, and exploit problem-specific knowledge to improve performance on a class of problems.
To “amalgamate” an optimizer from a pool of optimizers, we draw inspiration from recent work in Learning to Optimize (Chen et al., 2021a) which provides a natural way to parameterize and train optimization update rules. In Learning to Optimize, optimizers are treated as policies to be learned from data. These “learned” optimizers are typically parameterized by a recurrent neural network
*Work done while the author was at the University of Texas at Austin.
(Andrychowicz et al., 2016; Lv et al., 2017); then, the optimizer is meta-trained to minimize the loss of training problems, or “optimizees”, by gradient descent using truncated back-propagation through time. Yet to our best knowledge, no existing work has leveraged those learnable parameterizations to amalgamate and combine analytical optimizers.
For our proposed formulation of optimizer amalgamation, we treat the learned optimizer as the amalgamation target. Then, we define amalgamation losses which can be used to combine feedback from multiple analytical optimizers into a single amalgamated optimizer, and present several amalgamation schemes. Finally, we explore smoothing methods that can be used during the amalgamation process to reduce the variance of the amalgamated optimizers. Our contributions are outlined below:
• We formulate the new problem of optimizer amalgamation, which we define as finding a way to best amalgamate a pool of multiple analytical optimizers to produce a single stronger optimizer. We present three schemes of optimizer amalgamation: additive amalgamation, min-max amalgamation, and imitation of a trained choice.
• We observe instability during the amalgamation process which leads to amalgamated optimizers having varied performance across multiple replicates. To mitigate this problem, we explore ways to reduce amalgamation variance by improving smoothness of the parameter space. We propose smoothing both by random noise or adversarial noise.
• We present experiments showing extensive and consistent results that validate the effectiveness of our proposal. Specifically, we find that more advanced amalgamation techniques and weight space training noise lead to better average-case performance and reduced variance. We also show that our amalgamation method performs significantly better than previous methods on all problems, with few exceptions.
2 RELATED WORKS
Knowledge Distillation and Amalgamation The prototype of knowledge distillation was first introduced by (Bucilua et al., 2006), which used it for model compression in order to train neural networks (“students”) to imitate the output of more complex models (“teachers”). Knowledge distillation was later formalized by (Hinton et al., 2015), who added a temperature parameter to soften the teacher predictions and found significant performance gains as a result.
The success of knowledge distillation spurred significant efforts to explain its effectiveness. Notably, Chen et al. (2020c); Yuan et al. (2020) discovered that trained distillation teachers could be replaced by hand-crafted distributions. (Yuan et al., 2020) provided further theoretical and empirical explanation for this behavior by explicitly connecting Knowledge distillation to label smoothing, and (Ma et al.; Chen et al., 2021b) further credited the benefits of knowledge distillation to the improved smoothness of loss surfaces, which has been demonstrated to help adversarial training Cohen et al. (2019); Lecuyer et al. (2019) and the training of sparse neural networks Ma et al..
The potential of knowledge distillation to improve the training of neural networks also spurred diverse works extending knowledge distillation. For example, (Romero et al., 2015; Wang et al., 2018; Shen et al., 2018; 2019b; Ye et al., 2020b) propose using intermediate feature representations as distillation targets instead of just network outputs, and (Tarvainen & Valpola, 2017; Yang et al., 2018; Zhang et al., 2019a) unify student and teacher network training to reduce computational costs. Knowledge distillation has also been extended to distilling multiple teachers, which is termed Knowledge Amalgamation (Shen et al., 2019a; Luo et al., 2019; Ye et al., 2019; 2020a).
Although using output logits from pre-trained networks has been extensively explored in knowledge distillation, we study a new research direction: distilling optimization knowledge from sophisticated analytical optimizers to produce stronger “learned” optimizers, hence the name “optimizer amalgamation”. Not only is this a new topic never studied in the existing knowledge distillation literature, but it also requires distilling longitudinal output dynamics, not just one final output, from multiple teachers.
Learning to optimize Learning to Optimize is a branch of meta learning which proposes to replace hand-crafted analytical optimizers with learned optimizers trained by solving optimization problems, or optimizees. The concept was first introduced by (Andrychowicz et al., 2016), who used a Long Short-Term Memory (LSTM) based model in order to parameterize gradient-based optimizers. This model took the loss gradient as its input and output a learned update rule which was then trained by
gradient descent using truncated backpropagation through time. (Andrychowicz et al., 2016) also established a coordinate-wise design pattern, where the same LSTM weights are applied to each parameter of the optimizee in order to facilitate generalization to models with different architectures.
Building on this architecture, Wichrowska et al. (2017) and Lv et al. (2017) proposed improvements such as hierarchical architectures connecting parameter RNNs together and augmenting the gradient with additional inputs. Many methods have also been proposed to improve the training of learned optimizers such as random scaling and convex augmentation (Lv et al., 2017), curriculum learning and imitation learning (Chen et al., 2020a), and Jacobian regularization (Li et al., 2020). Notably, Chen et al. (2020a) also proposed a method of imitation learning, which can be viewed as a way of distilling a single analytical optimizer into a learned parameterization.
Learning to Optimize has been extended to a variety of other problems such as graph convolutional networks (You et al., 2020), domain generalization (Chen et al., 2020b), noisy label training (Chen et al., 2020c), adversarial training (Jiang et al., 2018; Xiong & Hsieh, 2020), and minimax optimization (Shen et al., 2021). Moving away from gradient-based optimization, black-box optimization has also been explored (Chen et al., 2017; Cao et al., 2019). For a comprehensive survey with benchmarks, readers may refer to Chen et al. (2021a).
Perturbations and Robustness The optimization process is naturally subject to many possible sources of noise, such as the stochastic gradient noise (Devolder et al., 2011; Gorbunov et al., 2020; Simsekli et al., 2019), which is often highly non-Gaussian and heavy-tailed in practice; the random initialization and (often non-optimal) hyperparameter configuration; the different local minima reached each time in non-convex optimization (Jain & Kar, 2017); and the limited numerical precision of implementations (De Sa et al., 2017). The seen and unseen optimizees also constitute domain shifts in our case. For a consistent and reliable amalgamation process, training needs to incorporate resistance to certain perturbations of the optimization process.
We draw inspiration from deep learning defense against various random or malicious perturbations. For example, stability training Zheng et al. (2016) stabilizes deep networks against small input distortions by regularizing the feature divergence caused by adding random Gaussian noises to the inputs. Adversarial robustness measures the ability of a neural network to defend against malicious perturbations of its inputs (Szegedy et al., 2013; Goodfellow et al., 2014). For that purpose, random smoothening (Lecuyer et al., 2019; Cohen et al., 2019) and adversarial training (Madry et al., 2017) have been found to increase model robustness with regard to random corruptions or worst-case perturbations; as well as against testing-time domain shifts Ganin et al. (2016). Recent work (He et al., 2019; Wu et al., 2020) extends input perturbations to weight perturbations that explicitly regularize the flatness of the weight loss landscape, forming a double-perturbation mechanism for both inputs and weights.
Other Approaches The problem of how to better train machine learning models has many diverse approaches outside the Learning to Optimize paradigm that we draw from, and forms the broader AutoML problem Hutter et al. (2018) together with model selection algorithms. Our approach falls under meta-learning, which also includes learned initialization approaches such as MAML (Finn et al., 2017) and Reptile (Nichol et al., 2018). Other optimizer selection and tuning methods include hypergradient descent (Baydin et al., 2017) and bayesian hyperparameter optimization (Snoek et al., 2012). Similar to our knowledge amalgamation approach, algorithm portfolio methods (where many algorithms are available, and some subset is selected) have also been applied to several problem domains such as Linear Programming (Leyton-Brown et al., 2003) and SAT solvers (Xu et al., 2008).
3 OPTIMIZER AMALGAMATION
3.1 MOTIVATION
Optimizer selection and hyperparameter optimization is a difficult task even for experts. With a vast number of optimizers to choose from with varying performance dependent on the specific problem and data (Schmidt et al., 2020), most practitioners choose a reasonable default optimizer such as SGD or Adam and tune the learning rate to be “good enough” following some rule of thumb.
As a consequence of the No Free Lunch theorem (Wolpert & Macready, 1997), the best optimizer to use for each problem, weight tensor within each problem, or each parameter may be different.
In practice, different layers within a given neural network can benefit from differently tuned hyperparameters, for example by meta-tuning learning rates by layer (Chen et al., 2020b).
Accordingly, we wish to train an optimizer which is sufficiently versatile and adaptive at different stages of training and even to each parameter individually. Many methods have been proposed to parameterize optimizers in learnable forms including coordinate-wise LSTMs Andrychowicz et al. (2016); Lv et al. (2017), recurrent neural networks with hierarchical architectures Wichrowska et al. (2017); Metz et al. (2019), and symbolically in terms of predefined blocks Bello et al. (2017). Due to its high expressiveness and relative ease of training, we will use the workhorse of LSTM-based RNNProp architecture described by Lv et al. (2017) as our amalgamation target; more details about this architecture can be found in Appendix C.1.
3.2 THE BASIC DISTILLATION MECHANISM
Knowledge distillation can be viewed as regularizing the training loss with a distillation loss that measures the distance between teacher and student predictions (Hinton et al., 2015). In order to distill a pool of teacher optimizers T = T1, T2, . . . Tk into our target policy P by truncated backpropagation (Appendix A: Algorithm 1), we start by defining a training loss and amalgamation loss. Meta Loss In the context of training optimizers, the training loss is described by the meta loss, which is a function of the optimizee problem loss at each step (Andrychowicz et al., 2016). Suppose we are training a policy P with parameters φ on a problemM : X → R whose output is a loss for each point in data domain X . During each iteration during truncated backpropagation through time, P is used to compute parameter updates forM to obtain a trajectory of optimizee parameters θ1, θ2, . . . θN where for the ith data batch xi and parameters θi at step i, i.e. θi+1 = θi − P (∇θiM(xi, θi)).
For weighting functions f_1, f_2, . . . , f_N, the meta loss is L_meta(x, θ; φ) = ∑_{i=1}^{N} f_i(M(x_i, θ_i)); specifically, we use the scaled log meta loss f_i(m) = log(m) − log(M(x_i, θ_0)), which can be interpreted as the “mean log improvement.” Distillation Loss The distillation loss in knowledge distillation measures the distance between teacher predictions and student predictions. In training optimizers, this corresponds to the distance between the optimization trajectories generated by the teacher and the student. Suppose we have optimizee parameter trajectories θ_i = (θ_i^{(P)}, θ_i^{(T)}) generated by the student and the teacher, respectively. Then, our distillation loss L_T for teacher T is given by the l2 log-loss:
L_T(x, θ; φ) = (1/N) ∑_{i=1}^{N} log ‖θ_i^{(P)} − θ_i^{(T)}‖_2^2 .    (1)
While knowledge distillation generally refers to imitating a model and imitation learning imitating a policy, the optimizer in our case can be regarded as both a model and a policy. As such, our loss function is similar to the imitation loss mechanism used by Chen et al. (2020a), which can be thought of as a special case of optimizer amalgamation where only a single teacher is used.
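For concreteness, a minimal NumPy sketch of the two losses defined above is shown below; the array layout and function names are our own assumptions rather than the released implementation.

```python
# Sketch: scaled log meta loss and l2 log distillation loss over an unrolled trajectory.
import numpy as np

def scaled_log_meta_loss(losses, baseline_losses):
    # losses[i] = M(x_i, theta_i); baseline_losses[i] = M(x_i, theta_0)
    # f_i(m) = log(m) - log(M(x_i, theta_0)), averaged ("mean log improvement")
    return float(np.mean(np.log(losses) - np.log(baseline_losses)))

def l2_log_distillation_loss(student_traj, teacher_traj):
    # (1/N) sum_i log || theta_i^(P) - theta_i^(T) ||_2^2   (Equation 1)
    # both trajectories have shape (N_steps, n_params)
    sq_dists = np.sum((student_traj - teacher_traj) ** 2, axis=-1)
    return float(np.mean(np.log(sq_dists)))
```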
3.3 AMALGAMATION OF MULTIPLE TEACHER OPTIMIZERS: THREE SCHEMES
Now, what if there are multiple teachers that we wish to amalgamate into a single policy? How to best combine different knowledge sources is a non-trivial question. We propose three mechanisms:
(1) Mean Amalgamation: adding distillation loss terms for each of the optimizers with constant equal weights.
(2) Min-max Amalgamation: using a min-max approach to combine loss terms for each of the optimizers, i.e., “the winner (worst) takes all”.
(3) Optimal Choice Amalgamation: First training an intermediate policy to choose the best optimizer to apply at each step, then distilling from that “choice optimizer”.
Mean Amalgamation In order to amalgamate our pool of teachers T = {T_1, . . . , T_|T|}, we generate |T| + 1 trajectories θ_i = (θ_i^{(P)}, θ_i^{(T_1)}, . . . , θ_i^{(T_|T|)}) and add distillation losses for each teacher:
L_mean(x; θ; φ) = L_meta(x; θ^{(P)}; φ) + α (1/N) ∑_{i=1}^{N} (1/|T|) ∑_{j=1}^{|T|} log ‖θ_i^{(P)} − θ_i^{(T_j)}‖_2 .    (2)
If we view knowledge distillation as a regularizer which provides soft targets during training, mean amalgamation is the logical extension of this by simply adding multiple regularizers to training.
An interesting observation is: when multiple teachers diverge, mean amalgamation loss tends to encourage the optimizer to choose one of the teachers to follow, potentially discarding the influence of all other teachers. This may occur if one teacher is moving faster than another in the optimizee space, or if the teachers diverge in the direction of two different minima. As this choice is a local minimum with respect to the mean log amalgamation loss, the optimizer may “stick” to that teacher, even if it is not the best choice.
Min-Max Amalgamation In order to address this stickiness, we propose a second method: minmax amalgamation, where, distillation losses are instead combined by taking the maximum distillation loss among all terms at each time step:
L_min-max(x; θ; φ) = L_meta(x; θ^{(P)}; φ) + α (1/N) ∑_{i=1}^{N} max_{T ∈ T} log ‖θ_i^{(P)} − θ_i^{(T)}‖_2 .    (3)
This results in a v-shaped loss landscape which encourages the amalgamation target to be between the trajectories generated by the teacher pool and prevents the optimizer from “sticking” to one of the teachers.
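The two combination rules can be contrasted with the short sketch below; it assumes the per-step, per-teacher l2 distances have already been collected into an array, which is our own simplification.

```python
# Sketch: mean (Equation 2) and min-max (Equation 3) amalgamation losses.
# `dists[i, j]` holds || theta_i^(P) - theta_i^(T_j) ||_2 for step i and teacher j.
import numpy as np

def mean_amalgamation_loss(meta_loss, dists, alpha=1.0):
    return meta_loss + alpha * float(np.mean(np.log(dists)))   # average over steps and teachers

def minmax_amalgamation_loss(meta_loss, dists, alpha=1.0):
    worst = np.max(np.log(dists), axis=1)                      # worst (largest) teacher per step
    return meta_loss + alpha * float(np.mean(worst))
```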
One weakness shared by both mean and min-max amalgamation is memory usage. Both require complete training trajectories for each teacher in the pool to be stored in memory, resulting in memory usage proportional to the number of teachers, which limits the number of teachers that we could amalgamate from in one pool.
Min-max amalgamation also does not fully solve the problem of diverging teachers. While minmax amalgamation does ensure that no teacher is ignored, it pushes the amalgamation target to the midpoint between the optimizee weights of the two teachers, which does not necessarily correspond to a good optimizee loss. In fact, when teachers diverge into multiple local minima, any solution which considers all teachers must necessarily push the learned optimizer against the gradient, while any solution which allows the learned optimizer to pick one side must discard a number of teachers.
Optimal Choice Amalgamation To fully unlock the power of knowledge amalgamation, we propose to solve the teacher divergence problem by first training an intermediate amalgamation target. By using only one teacher for a final distillation step, we remove the possibility of multiple teachers diverging while also allowing us to use more teachers without a memory penalty.
For an optimizer pool T, we define a choice optimizer C which produces choices c_1, c_2, . . . , c_N of which optimizer in the pool to apply at each time step, producing updates θ_{i+1}^{(C)} = θ_i^{(C)} − T_{c_i}(g_i). The objective of the choice optimizer is to minimize the meta loss L_meta(C; x) with respect to these choices c_{1:N}. We parameterize the choice function C as a small two-layer LSTM and train it by gradient descent. The LSTM takes the outputs of each optimizer in the pool, the layer type, and the time step as inputs; more details are provided in Appendix C.1. To make it easier to train C by truncated back-propagation through time, we relax the choices c_{1:N} to soft choices c_i ∈ R^{|T|} with c_i ≥ 0 and ‖c_i‖_1 = 1, resulting in the policy θ_{i+1}^{(C)} = θ_i^{(C)} − ∑_{j=1}^{|T|} c_i^{(j)} T_j(g_i). Now, we use C as a teacher to produce our final loss:
L_choice = L_meta(φ; x) + α (1/N) ∑_{i=1}^{N} log ‖θ_i^{(P)} − θ_i^{(C)}‖_2 .    (4)
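A minimal sketch of the relaxed soft-choice update is shown below; the tensor shapes and the softmax over pool outputs are our reading of the description above, not code from the released repository.

```python
# Sketch: one soft-choice update step for the intermediate choice optimizer C.
# `pool_updates[j]` is the update proposed by optimizer T_j for the current gradient;
# `choice_logits` is the output of the choice LSTM's final layer at this step.
import numpy as np

def soft_choice_step(theta, pool_updates, choice_logits):
    c = np.exp(choice_logits - choice_logits.max())
    c = c / c.sum()                               # soft choice: c >= 0, ||c||_1 = 1
    combined = c @ pool_updates                   # sum_j c^(j) * T_j(g)
    return theta - combined
```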
4 STABILITY-AWARE OPTIMIZER AMALGAMATION
4.1 MOTIVATION
Modern optimization, even analytical, is subject to various forms of noise. For example, stochastic first-order method are accompanied with gradient noise (Devolder et al., 2011; Gorbunov et al., 2020; Simsekli et al., 2019) which is often highly non-Gaussian and heavy-tail in practice. Any non-convex optimization could reach different local minimum when solving multiple times (Jain &
Kar, 2017). When training deep neural networks, thousands or even millions of optimization steps are typically run, and the final outcome can be impacted by the random initialization, (often non-optimal) hyperparameter configuration, and even hardware precision (De Sa et al., 2017). Hence, it is highly desirable for optimizers to be stable: across different problem instances, between multiple training runs for the same problem, and throughout each training run (Lv et al., 2017).
Meta-training optimizers tends to be unstable. During the amalgamation process, we encounter significant variance where identically trained replicates achieve varying performance on our evaluation problems; this mirrors problems with meta-stability encountered by Metz et al. (2019). While amalgamation variance can be mitigated in small-scale experiments by amalgamating many times and using the best one, that variance represents a significant obstacle to large-scale training (i.e. on many and larger problems) and deployment of amalgamated optimizers. Thus, besides the aforementioned optimization stability issues, we also need to consider meta-stability, denoting the relative performance of optimizers across meta-training replicates.
In order to provide additional stability to the amalgamation process, we turn to adding noise during training, which is known to improve smoothness (Chen & Hsieh, 2020; Lecuyer et al., 2019; Cohen et al., 2019) and in turn improve stability (Miyato et al., 2018). Note that one can inject either random noise or adversarial perturbations onto either the input or the weight of the learnable optimizer. While perturbing inputs is more common, recent work (Wu et al., 2020) identified that a flatter weight loss landscape (loss change with respect to weight) leads to smaller robust generalization gap in adversarial training, thanks to its more “global” worst-case view.
We also discover in our experiments that perturbing inputs would make the meta-training hard to converge, presumably because the inputs to optimizers (gradients, etc.) already contain large amounts of batch noise and do not tolerate further corruption. We hence focus on perturbing optimizer weights for smoothness and stability.
4.2 WEIGHT SPACE PERTURBATION FOR SMOOTHNESS
Weight space smoothing produces a noised estimate of the loss L̃ by adding noise to the optimizer parameters φ. By replacing the loss L(φ,x) with a noisy loss L̃ = L(φ̃,x), we encourage the optimizer to be robust to perturbations of its weights, increasing the meta-stability. We explore two mechanisms to increase weight space smoothness during training, by adding (1) a random perturbation to the weights as a gradient estimator, and (2) an adversarial perturbation in the form of a projected gradient descent attack (PGD).
Though new to our application, these two mechanisms have been adopted for other problems where smoothness is important such as neural architecture search (Chen & Hsieh, 2020) and adversarial robustness (Lecuyer et al., 2019; Cohen et al., 2019).
Random Gaussian Perturbation In the first type of noise, we add Gaussian noise with variance σ² to each parameter of the optimizer at each iteration, φ̃ = φ + N(0, σ² I). Since optimizer weights tend to vary greatly in magnitude, especially between different weight tensors, we make this Gaussian noise adaptive to the l2 norm of each weight tensor φ^{(w)}. For a weight tensor of size |φ^{(w)}|, the added noise is given by
φ̃^{(w)} = φ^{(w)} + N( 0, (σ² ‖φ^{(w)}‖_2^2 / |φ^{(w)}|) I ) .    (5)
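A sketch of this per-tensor perturbation is given below; it assumes the optimizer weights are held in a dict of NumPy arrays, which is an illustrative layout.

```python
# Sketch: per-tensor scaled Gaussian weight perturbation (Equation 5).
import numpy as np

def perturb_weights(params, sigma, rng=None):
    rng = rng or np.random.default_rng()
    noised = {}
    for name, w in params.items():
        std = sigma * np.linalg.norm(w) / np.sqrt(w.size)   # std proportional to ||phi^(w)||_2
        noised[name] = w + rng.normal(0.0, std, size=w.shape)
    return noised
```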
Projected Gradient Descent For the second type of noise, we use adversarial noise obtained by projected gradient descent (Appendix A, Algorithm 2). For A adversarial steps, the noised parameters are given by φ̃ = φ + ψ_A, where ψ_0 = 0 and ψ_{i+1} = ψ_i + η clip_ε(∇_{ψ_i} L) for the optimizer loss L.
As with random Gaussian perturbations, we also modify the adversarial perturbation to be adaptive with magnitude proportional to the l2 norm of each weight tensor φ. Here, the adversarial attack step for weight tensor w is instead given by
ψ_{i+1}^{(w)} = ψ_i^{(w)} + ε ‖φ^{(w)}‖_2 ∇_{ψ_i^{(w)}} L / ‖∇_{ψ_i^{(w)}} L‖_2 .    (6)
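The corresponding attack step can be sketched as follows; here `grad` stands for the meta-gradient of the amalgamation loss with respect to the current perturbation of one weight tensor, which is an assumption on our part about how the quantities are exposed.

```python
# Sketch: one normalized adversarial perturbation step (Equation 6) for a single tensor.
import numpy as np

def adversarial_step(psi, w, grad, eps):
    direction = grad / (np.linalg.norm(grad) + 1e-12)   # unit-norm attack direction
    return psi + eps * np.linalg.norm(w) * direction    # step scaled by the tensor's l2 norm
```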
5 EXPERIMENTS
Optimizee Details All optimizers were amalgamated using a 2-layer convolutional neural network (CNN) on the MNIST (LeCun & Cortes, 2010) dataset (shortened as “Train”) using a batch size of 128. During evaluation, we test the generalization of the amalgamated optimizer to other problems: (1) Different Datasets: FMNIST (Xiao et al., 2017) and SVHN (Netzer et al., 2011). We also run experiments on CIFAR (Krizhevsky et al., 2009); since the Train network is too small to obtain reasonable performance on CIFAR, we substitute it for the Wider architecture and a 28-layer ResNet (He et al., 2015), labelled “CIFAR” and “ResNet” respectively. (2) Different Architectures: a 2-layer MLP (MLP), a CNN with twice the number of units in each layer (Wider), and a deeper CNN (Deeper) with 5 convolutional layers. (3) Training settings: training with a smaller batch size of 32 (Small Batch). We also try a new setting of training with differential privacy (Abadi et al., 2016) (MNIST-DP). Appendix B provides full architecture and training specifications.
Optimizer Pool We use two different optimizer pools in our experiment: “small,” which consists of Adam and RMSProp, and “large,” which also contains SGD, Momentum, AddSign, and PowerSign. Each optimizer has a learning rate tuned by grid search over a grid of {5 × 10−4, 1 × 10−3, 2 × 10−3, . . . 1}. The selection criteria is the best validation loss after 5 epochs for the Train network on MNIST, which matches the meta-training settings of the amalgamated optimizer. Appendix C.2 describes the optimizers used and other hyperparameters.
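The learning-rate tuning can be pictured with the short sketch below; the doubling grid and the train_and_validate helper are our own assumptions about how the search was run.

```python
# Sketch: learning-rate grid search for each analytical optimizer in the pool.
# `train_and_validate` is an assumed helper returning the best validation loss after
# 5 epochs on the Train network / MNIST, matching the selection criterion above.
def tune_learning_rate(make_optimizer, train_and_validate):
    lrs = [5e-4 * 2 ** k for k in range(12)]          # 5e-4, 1e-3, 2e-3, ..., ~1
    scores = {lr: train_and_validate(make_optimizer(lr), epochs=5) for lr in lrs}
    return min(scores, key=scores.get)                 # lowest validation loss wins
```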
Baselines First, we compare our amalgamated optimizer against our analytical optimizer teachers, which are combined into an “oracle optimizer”: the optimizer in our pool of teachers with the best validation loss. We also compare against the optimal choice optimizer used in amalgamation, which functions like a per-iteration trained approximation of the oracle optimizer. Then, we evaluate previous learned optimizer methods: the original “Learning to Learn by Gradient Descent by Gradient Descent” optimizer (Andrychowicz et al., 2016), which we refer to as “Original”; RNNProp (Lv et al., 2017); a hierarchical architecture presented by “Learned Optimizers that Scale and Generalize” (Wichrowska et al., 2017), which we refer to as “Scale”; and the best setup from Chen et al. (2020a), which we shorten as “Stronger Baselines.”
Training and Evaluation Details The RNNProp amalgamation target was trained using truncated backpropagation though time with a constant truncation length of 100 steps and total unroll of up to 1000 steps and meta-optimized by Adam with a learning rate of 1 × 10−3. For our training process, we also apply random scaling (Lv et al., 2017) and curriculum learning (Chen et al., 2020a); more details about amalgamation training are provided in Appendix C.3. Amalgamation takes up to 6.35 hours for optimal choice amalgamation using the large pool and up to 10.53 hours when using adversarial perturbations; a full report of training times is provided in Appendix C.4.
For each optimizer amalgamation configuration tested, we independently trained 8 replicate optimizers. Then, each replicate was evaluated 10 times on each evaluation problem, and trained to a depth of 25 epochs each time. Finally, we measure the stability of amalgamated optimizers by defining three notions of stability for meta-trained optimizers: (1) Optimization stability: the stability of the optimizee during the optimization process. Viewing stability of the validation loss as a proxy for model stability with respect to the true data distribution, we measure the epoch-to-epoch variance of the validation loss after subtracting a smoothed validation loss curve (using a Gaussian filter). (2) Evaluation stability: the variance of optimizer performance across multiple evaluations. We find that the evaluation stability is roughly the same for all optimizers (Appendix E.1). (3) Meta-stability: the stability of the amalgamation process, i.e. the variance of amalgamation replicates after correcting for evaluation variance. Meta-stability and evaluation stability are jointly estimated using a linear mixed effects model. The stability is reported as a standard deviation. More details are in Appendix D.
5.1 OPTIMIZER AMALGAMATION
Amalgamation Methods Figure 1 compares the mean performance of the three amalgamation methods with the small pool and Choice amalgamation with the large pool. Mean and min-max amalgamation were not performed on the large pool due to memory constraints. The amalgamated optimizers using optimal choice amalgamation perform better than Mean and Min-Max amalgamation. The size of the optimizer pool does not appear to have a significant effect in Optimal Choice amalgamation, with small pool and large pool amalgamated optimizers obtaining similar results.
[Figure: scatter plot of optimization stability versus best log validation loss for the amalgamated optimizer and the analytical optimizers Adam, RMSProp, SGD, Momentum, AddSign, and PowerSign; only the axis labels and legend were recoverable here.]
The same comparison against the analytical optimizers in the large pool is shown in Figure 5. Comparing analytical optimizers, we observe a general inverse relationship between optimization performance and optimization stability: in order to achieve better optimization, an optimizer typically sacrifices some optimization stability in order to move faster through the optimizee weight space. By integrating problem-specific knowledge, the amalgamated optimizer is able to combine the best optimization performance and optimization stability characteristics (Figure 4).
5.2 STABILITY-AWARE OPTIMIZER AMALGAMATION
Input Perturbation While we also tested perturbing the inputs of the optimizer during amalgamation, we were unable to improve stability. These experiments are included in Appendix E.4.
Random Perturbation Min-max amalgamation was trained on the small optimizer pool with random perturbation relative magnitudes of ε = {5 × 10−4, 10−3, 2 × 10−3, 5 × 10−3, 10−2}. ε = 10−1 was also tested, but all replicates tested diverged and are not reported here.
Comparing perturbed amalgamation against the nonperturbed baseline (ε = 0), we observe that perturbations increase meta-stability up to about ε = 10−3 (Figure 6). For larger perturbation magnitudes, meta-stability begins to decrease as the perturbation magnitude overwhelms the weight “signal,” eventually causing the training process to completely collapse for larger perturbation values. While the stability with random perturbation ε = 10−2 is better than 10−3, this is likely due to random chance, since we use a small sample size of 8 replicates.
Adversarial Perturbation Since adversarial perturbation is more computationally expensive than random perturbations, min-max amalgamation was tested on a coarser grid of relative magnitudes ε = {10−4, 10−3, 10−2}, and to an adversarial attack depth of 1 step. These results are also reported in Figure 6, with ε = 10−2 omitted since all replicates diverged during training.
From our results, we observe that adversarial perturbations are about as effective as random perturbations. We also observe that the maximum perturbation magnitude that the amalgamation process can tolerate is much smaller for adversarial perturbations compared to random perturbations, likely because adversarial perturbations are much “stronger.” Due to the significantly larger training cost of adversarial perturbations, we recommend random perturbations for future work.
Application to Other Methods Random and Adversarial perturbations can be applied to any gradient-based optimizer meta-training method, including all of our baselines. An experiment applying Gaussian perturbations to the RNNProp baseline can be found in Appendix E.5.
6 CONCLUSION
We define the problem of optimizer amalgamation, which we hope can inspire better and faster optimizers for researchers and practitioners. In this paper, we provide a procedure for optimizer amalgamation, including differentiable optimizer amalgamation mechanisms and amalgamation stability techniques. Then, we evaluate our approach on different datasets, architectures, and training settings to benchmark the strengths and weaknesses of our amalgamated optimizer. In the future, we hope to further improve the generalizability of amalgamated optimizers to even more distant problems.
A ALGORITHMS
In this section, we provide a detailed description of the key algorithms used in our paper.
Truncated Back-propagation: Algorithm 1 shows truncated back-propagation applied to optimizer amalgamation. For an unrolling length N , N data points (batches, in the case of mini-batch SGD) are sampled, which are split into N/t truncations of length t. Note that this requires N to be divisible by t; in our implementation, we require t and N/t to be specified as integers. For each truncation, the optimizee and teachers are trained for t iterations, and meta-gradients are computed over that truncation and applied.
Adversarial Weight Perturbation: Algorithm 2 shows adversarial perturbations applied to optimizer amalgamation. For each adversarial attack step, meta-gradients are taken with respect to the parameters, and are normalized for each tensor with respect to its tensor norm before being applied as an adversarial perturbation.
Algorithm 1: Distillation by Truncated Back-propagation
Inputs: amalgamation loss La; policy P with parameters φ; teacher policies T = T1, . . . , T|T|; optimizee M, X, θ0; unrolling and truncation lengths N, t
Outputs: updated policy parameters φ
Sample N data points x1, . . . , xN from X
Initialize θ0^{(P)} = θ0^{(T1)} = . . . = θ0^{(T|T|)} = θ0
for i = 1, 2, . . . , N/t do
  for j = 1, 2, . . . , t do
    n = (i − 1)t + j
    Update the optimizee for P: θ_{n+1}^{(P)} ← θ_n^{(P)} − P[∇M(x_n, θ_n^{(P)})]
    for k = 1, . . . , |T| do  (update the optimizee for each teacher)
      θ_{n+1}^{(Tk)} ← θ_n^{(Tk)} − Tk[∇M(x_n, θ_n^{(Tk)})]
    end
  end
  Compute the distillation loss: L_i ← La(x_{[(i−1)t : it]}, θ_{[(i−1)t : it]}; φ)
  Update φ using ∇L_i
end
Algorithm 2: Adversarial Weight Perturbation for Truncated Back-propagation
Inputs: truncated back-propagation parameters La, P, φ, T, M, X, θ0, N, t; adversarial attack steps A
Outputs: updated policy parameters φ
Sample N data points and initialize the optimizee parameters
for i = 1, 2, . . . , N/t do
  ψ0 ← 0
  for a = 1, 2, . . . , A do
    Compute trajectories θ_{[(i−1)t : it]} for P and T
    L_i^{(a)} ← La(x_{[(i−1)t : it]}, θ_{[(i−1)t : it]}; φ + ψ_{a−1})
    for each weight tensor w do
      γ ← ∇_{ψ_{a−1}^{(w)}} L_i^{(a)} / ‖∇_{ψ_{a−1}^{(w)}} L_i^{(a)}‖_2
      ψ_a^{(w)} ← ψ_{a−1}^{(w)} + ε ‖φ^{(w)}‖_2 γ
    end
  end
  L_i ← La(x_{[(i−1)t : it]}, θ_{[(i−1)t : it]}; φ + ψ_A)
  Update φ using ∇L_i
end
B OPTIMIZEE DETAILS
Table 1 shows a summary of the training problems used. While all training is performed on a 2-layer CNN on MNIST, we evaluated our optimizer on 4 different datasets described in (B.1) and 5 different architectures (described in B.2). We also experiment with different training settings, which are described in B.3.
B.1 DATASETS
All datasets used are classification datasets, with cross entropy used as the training loss. The MNIST dataset (LeCun & Cortes, 2010) is used during training; the other datasets, listed from most to least similar, are:
Sample images from these datasets are shown in Figure 7. All datasets were accessed using TensorFlow Datasets and have a CC-BY 4.0 license.
B.2 ARCHITECTURES
The Train convolutional network (Table 2a) has one convolution layer with 16 3x3 filters and one convolution layer with 32 5x5 filters. Each convolution layer uses ReLU activation, has stride 1x1, and is followed by a max pooling layer with size 2x2. Finally, a fully connected softmax layer is used at the output.
The four architectures evaluated are:
1. MLP: a 2-layer MLP with 20 hidden units and sigmoid activation
2. Wider: a modified version of Train with double width on each layer (Table 2b)
3. Deeper: a deeper network with 5 convolutional layers instead of 2 (Table 2c), again using ReLU activation and 1x1 stride
4. ResNet: a 28-layer ResNet (He et al., 2015) (Table 2d)
B.3 OPTIMIZEE TRAINING
During training, a batch size of 128 is used except for the Small Batch evaluation, which has a batch size of 32. During training and evaluation, datasets are reshuffled each iteration.
To match the warmup process used in meta-training, warmup is also applied during evaluation. The SGD learning rate is fixed at 0.01, which is a very conservative learning rate which does not optimize quickly, but is largely guaranteed to avoid divergent behavior.
For differentially private training, we implement differentially private SGD (Abadi et al., 2016). In differentially private SGD, gradients are first clipped to a fixed l2 norm ε on a per-sample basis; then, gaussian noise with standard deviation σε where σ > 1 is added to the aggregated batch gradients. In our experiments, we use clipping norm ε = 1.0 and noise ratio σ = 1.1. Both MNIST and KMNIST are used as training sets in order to simulate transfer from a non-private dataset (MNIST) used for meta-training to a private dataset (KMNIST).
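A minimal sketch of one such differentially private step is shown below; the per-sample gradient layout is an assumption on our part, and the constants match the clipping norm and noise ratio quoted above.

```python
# Sketch: one differentially private SGD step with per-sample clipping and Gaussian noise.
import numpy as np

def dp_sgd_step(theta, per_sample_grads, lr, clip_norm=1.0, noise_ratio=1.1, rng=None):
    rng = rng or np.random.default_rng()
    norms = np.linalg.norm(per_sample_grads, axis=1, keepdims=True)
    clipped = per_sample_grads * np.minimum(1.0, clip_norm / (norms + 1e-12))  # per-sample clip
    summed = clipped.sum(axis=0)
    noised = summed + rng.normal(0.0, noise_ratio * clip_norm, size=summed.shape)
    return theta - lr * noised / per_sample_grads.shape[0]                     # averaged, noised step
```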
C AMALGAMATION DETAILS
C.1 OPTIMIZER ARCHITECTURES
In this section, we provide the exact architecture specifications and hyperparameters of our amalgamated optimizer along with other training details and training time. Our implementation is open source, and can be found here: http://github.com/VITA-Group/OptimizerAmalgamation.
C.1.1 RNNPROP ARCHITECTURE
For our amalgamation target, we use the RNNProp architecture described by Lv et al. (2017). For each parameter on each time step, this architecture takes as inputs the RMSProp update g/√v̂ and the Adam update m̂/√v̂, using momentum (m̂) decay parameter β1 = 0.9 and variance (v̂) decay parameter β2 = 0.999, matching the values used for our analytical optimizers. These values pass through a 2-layer LSTM with tanh activation, sigmoid recurrent activation, and 20 units per layer. The output of this LSTM is passed through a final fully connected layer with tanh activation to produce a scalar final update for each parameter.
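For illustration, a coordinate-wise cell matching this description could look like the PyTorch sketch below; the class and argument names are ours, and the released implementation may differ in framework and detail.

```python
# Sketch: a coordinate-wise RNNProp-style cell (2-layer LSTM + tanh output head).
import torch
import torch.nn as nn

class RNNPropCell(nn.Module):
    def __init__(self, hidden_size=20):
        super().__init__()
        self.lstm1 = nn.LSTMCell(2, hidden_size)            # inputs: [g/sqrt(v_hat), m_hat/sqrt(v_hat)]
        self.lstm2 = nn.LSTMCell(hidden_size, hidden_size)
        self.head = nn.Linear(hidden_size, 1)               # scalar update per parameter

    def forward(self, rms_update, adam_update, state):
        (h1, c1), (h2, c2) = state
        x = torch.stack([rms_update, adam_update], dim=-1)  # one row per optimizee parameter
        h1, c1 = self.lstm1(x, (h1, c1))
        h2, c2 = self.lstm2(h1, (h2, c2))
        update = torch.tanh(self.head(h2)).squeeze(-1)
        return update, ((h1, c1), (h2, c2))
```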
C.1.2 CHOICE NETWORK ARCHITECTURE
Our Choice network for Optimal Choice Amalgamation is a modified RNNProp architecture. The update steps for each analytical optimizer are given as inputs to the same 2-layer LSTM used in RNNProp. Additionally, the current time step and tensor number of dimensions are provided, with the number of dimensions being encoded as a one-hot vector.
Then, instead of directly using the output of a fully connected layer as the update, LSTM output passes through a fully connected layer with one output per optimizer in the pool. This fully connected layer has a softmax activation, and is used as weights to combine the analytical optimizer updates.
C.2 OPTIMIZER POOL
We consider six optimizers as teachers in this paper: Adam, RMSProp, SGD, Momentum, AddSign, and PowerSign. These optimizers are summarized in table 3.
Table 3: Optimizer pool update rules; all updates include an additional learning rate hyperparameter.

    Optimizer    Update Rule
    SGD          g
    Momentum     m̂
    RMSProp      g / √v̂
    Adam         m̂ / √v̂
    AddSign      g (1 + sign(m̂) sign(g))
    PowerSign    g exp(sign(m̂) sign(g))
Joining the popular hand-crafted optimizers Adam, RMSProp, SGD, and Momentum, AddSign and PowerSign are two optimizers discovered by neural optimizer search (Bello et al., 2017). These two optimizers share the design principle that update steps should be larger when the momentum and gradient are in agreement:
AddSign ∝ g (1 + sign(m̂) sign(g)),    PowerSign ∝ g exp(sign(m̂) sign(g)).    (7)
Here, g represents the gradient and m̂ an exponential moving average of the gradient. In order to use AddSign and PowerSign as teachers for gradient-based distillation, we make them differentiable by replacing the sign function with a scaled tanh whose inputs are normalized by √v̂:

sign(m̂) sign(g) ≈ tanh(m̂/√v̂) tanh(g/√v̂)    (8)
By dividing by √v̂, we provide a consistent input magnitude to the tanh function so that the sign-agreement mechanism is not affected by overall gradient magnitudes.
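A direct sketch of these relaxed update rules is below; the moment estimates m and v are assumed to be maintained elsewhere (for example with the β1, β2 decays quoted below).

```python
# Sketch: differentiable AddSign and PowerSign updates via the tanh relaxation (Equation 8).
import numpy as np

def soft_sign_agreement(g, m, v, eps=1e-8):
    s = np.sqrt(v) + eps
    return np.tanh(m / s) * np.tanh(g / s)       # smooth stand-in for sign(m_hat) * sign(g)

def addsign_update(g, m, v):
    return g * (1.0 + soft_sign_agreement(g, m, v))

def powersign_update(g, m, v):
    return g * np.exp(soft_sign_agreement(g, m, v))
```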
For all optimizers, the momentum decay parameter is set to β1 = 0.9, the variance decay parameter is set to β2 = 0.999, and the learning rate multiplier is found by grid search on the Train optimizee over a grid of {5× 10−4, 1× 10−3, 2× 10−3, . . . 1}.
C.3 ADDITIONAL TRAINING DETAILS
During amalgamation, we apply a number of techniques from previous Learning to Optimize literature in order to boost training:
• Curriculum Learning: We apply curriculum learning (Chen et al., 2020a) to progressively increase the unrolling steps across a maximum of 4 stages with length 100, 200, 500, and 1000. During curriculum learning, checkpoints are saved and validated every 40 “meta-epochs,” which refers to a single optimizee trajectory trained with truncated back-propagation.
• Random Scaling: We apply random scaling (Lv et al., 2017) to reduce overfitting to the gradient magnitudes of the training problem. This random scaling is only applied to the amalgamation target; amalgamation teachers receive “clean” (unscaled) gradients.
• Warmup: Instead of initializing each training optimizee with random weights, we first apply 100 steps of SGD optimization as a “warmup” to avoid the turbulent initial phase of optimizing neural networks. A SGD learning rate of 0.01 is used during this period, and was chosen to be very conservative on all optimizees tested.
These techniques are also applied to all of our baselines, except that Random Scaling is only applied to baselines using the RNNProp architecture, since we find that it harms the performance of other optimizer architectures.
C.4 TRAINING COST
Table 4 provides a summary of the training costs for each amalgamation method and baseline are provided. For optimal choice amalgamation, this includes both training the optimal choice optimizer and amalgamation training. All values are reported as the mean across 8 replicates.
All experiments were run on single nodes with 4x Nvidia 1080ti GPUs, providing us with a meta-batch size of 4 simultaneous optimizations. In order to replicate our results, GPUs with at least 11GB of memory are required, though less memory can be used if the truncation length for truncated back-propagation is reduced.
D STABILITY DEFINITIONS
In this section, we provide the mathematical definition and measurement details of meta-stability, evaluation stability, and optimization stability.
D.1 META-STABILITY AND EVALUATION STABILITY
In order to quantify meta-stability and evaluation stability, we first summarize the performance of each evaluation using the best validation loss obtained and the training loss of the last epoch. Then, we model the best validation loss Y_{ij}^{(val)} and final training loss Y_{ij}^{(train)} for replicate i and evaluation j with the linear mixed effect model
Y_{ij} = µ + α_i + ε_{ij} ,    (9)
where µ is the true mean, αi are IID random variables representing the meta-stability of the amalgamated optimizer, and εij are IID random variables representing the evaluation stability of each replicate. The meta-stability and evaluation stability are then quantified by standard deviations σα and σε.
D.2 OPTIMIZATION STABILITY
To measure optimization stability, we model the validation loss Lij(t) at epoch t for replicate i and evaluation j as
L_{ij}(t) = β_{ij}(t) + η_{ij}^{(t)}    (10)
for a smooth function β_{ij}(t), which represents the behavior of the evaluation, and a random variable η_{ij}^{(t)}, which captures the optimization stability; we assume that η_{ij}^{(t)} is IID with respect to t.
In order to estimate σ_η, we first estimate β_{ij}(t) by applying a Gaussian filter with standard deviation σ = 2 (epochs) and filter edge mode “nearest”; the residual standard deviation σ̂_η^{(ij)} is then calculated from L_{ij}(t) − β̂_{ij}(t). Finally, σ̂_η^{(ij)} is treated as a summary statistic for each evaluation, and the mixed effect model described previously (Equation 9) is fit to obtain a final confidence interval for the mean optimization stability.
E ADDITIONAL RESULTS
E.1 EVALUATION STABILITY
Table 5 summarizes the evaluation stability of analytical and amalgamated optimizers. All optimizers obtain similar evaluation stability, except for cases where an optimizer cannot reliably train the optimizee at all such as Momentum, AddSign, and PowerSign on the Deeper CNN. In these cases, the optimizer consistently learns a constant or random classifier, which results in very low variance and high “stability.”
E.2 LARGER EVALUATIONS
In order to explore the limits of our optimizer, we evaluated the amalgamated optimizer with a 52-layer ResNet (763,882 parameters, roughly 40x the size of the Train network), and with the same 52-layer ResNet on CIFAR-100 instead of CIFAR-10. These results are compared against CIFAR-10 on a 2-layer network and CIFAR-10 on a 28-layer ResNet, using Adam as a single baseline (Figure 8).
While our amalgamated optimizer has significant performance advantages on the shallow CIFAR-10 network in our original evaluations and achieves performance parity on the 28-layer ResNet, it can no longer perform as well as the oracle once we reach 52 layers and change to CIFAR-100.
E.3 ADDITIONAL PLOTS
In this section, we include plots providing alternate versions of Figures 2, 3, and 5 in the main text which had some outliers cropped out in order to improve readability.
E.4 INPUT PERTURBATION
When applying input perturbations, we perturb the inputs to the optimizer, or the optimizee gradients, instead of the optimizer weights:
θ_{i+1} = θ_i − P(∇_{θ_i} M(x_i, θ_i) + N(0, σ² I)).    (11)
We tested magnitudes σ = 10^{-1} and σ = 10^{-2} with a smaller experiment size of 6 replicates, using Choice amalgamation on the small pool as a baseline; these results are given in Table 6. Many variants of input noise remain unexplored, such as noise proportional to the parameter norm or gradient norm, or smaller noise magnitudes, and these may be a potential area of future study. However, we believe that input noise is generally not helpful for optimizer amalgamation, and did not study it further.
E.5 BASELINES WITH RANDOM PERTURBATION
Our perturbation methods can be applied to any gradient-based optimizer meta-training method, including all of our baselines. To demonstrate this application, we trained 8 RNNProp replicates with Gaussian perturbations with magnitude 1× 10−4; all other settings were identical to the RNNProp baseline. With perturbations, the RNNProp baseline is significantly improved, though not enough to match the performance of our amalgamation method. | 1. What is the main contribution of the paper regarding neural network optimizers?
2. What are the strengths of the proposed approach, particularly in terms of performance improvement?
3. What are the weaknesses of the paper, specifically regarding experiment settings and generalizability?
4. Do you have any concerns or suggestions regarding the choice of optimizer pool and its impact on performance?
5. How does the reviewer assess the clarity and quality of the paper's content? | Summary Of The Paper
Review | Summary Of The Paper
This paper discusses the problem of selecting a neural network optimizer from a pool of possible optimizers. The authors propose three variants of a meta-algorithm which combines optimizers from the pool, whereby a differentiable meta-loss is defined on the training loss achieved by the selection protocol. They show that their algorithm, equipped with weight-space training noise, leads to better performance on a variety of problems.
Review
Strengths
The problem is well motivated as a solution to choosing from the large pool of possible optimizers. It naturally follows from prior work on knowledge distillation and amalgamation to instead distill from multiple optimizers.
Empirically, optimizer amalgamation performs better (and sometimes significantly so) than baselines from learning to optimize. It also performs favorably to the best choice of analytic optimizer.
Paper is clearly written and easy to follow.
Weaknesses
It is a bit unclear how well an amalgamated optimizer can generalize to various settings -- do we need to amalgamate for each specific problem?
I have some concerns with the experiments discussed in the questions below.
Overall, the experiments are conducted in relatively small settings. It would be useful to know whether an optimizer learned with amalgamation could be used in large experiments. Such experiments do not need to be extensive, but would improve the significance of the paper.
Questions
It would be interesting to explore further the choice of optimizer pool and its effects on performance. This can be done by choosing subsets of the six optimizers you consider. SGD and Adam would be especially interesting given their popularity.
In the case of mean amalgamation, do you observe the phenomena of the optimizer sticking to one of the teachers?
Can the weight space perturbations be applied to the baselines to improve their stability?
Minor
reference under knowledge distillation is undefined |
ICLR | Title
Optimizer Amalgamation
Abstract
Selecting an appropriate optimizer for a given problem is of major interest for researchers and practitioners. Many analytical optimizers have been proposed using a variety of theoretical and empirical approaches; however, none can offer a universal advantage over other competitive optimizers. We are thus motivated to study a new problem named Optimizer Amalgamation: how can we best combine a pool of “teacher” optimizers into a single “student” optimizer that can have stronger problem-specific performance? In this paper, we draw inspiration from the field of “learning to optimize” to use a learnable amalgamation target. First, we define three differentiable amalgamation mechanisms to amalgamate a pool of analytical optimizers by gradient descent. Then, in order to reduce the variance of the amalgamation process, we also explore methods to stabilize it by perturbing the amalgamation target. Finally, we present experiments showing the superiority of our amalgamated optimizer compared to its amalgamated components and learning to optimize baselines, and the efficacy of our variance-reducing perturbations. Our code and pre-trained models are publicly available at http://github.com/VITA-Group/OptimizerAmalgamation.
1 INTRODUCTION
Gradient-based optimization is ubiquitous in machine learning; accordingly, a cottage industry of gradient-based optimizer design has emerged (Schmidt et al., 2020). These optimizers generally propose algorithms that aim to make the “best” parameter update for a computed gradient (Kingma & Ba, 2017; Liu et al., 2020), with some also modifying the location where the parameters are computed (Zhang et al., 2019b). However, while each gradient-based optimizer claims specific problems where it holds performance advantages, none can claim to be universally superior. Due to the “No Free Lunch” theorem for optimization (Wolpert & Macready, 1997), no optimizer can provide better performance on a class of problems without somehow integrating problem-specific knowledge from that class.
Furthermore, problems such as training neural networks are not homogeneous. In the spatial dimension, different layers or even parameters can have different behavior (Chen et al., 2020b). Also, as evidenced by the popularity of learning rate schedules, neural network optimization also behaves very differently in the temporal dimension as well (Golatkar et al., 2019). This implies that no optimizer can provide the best performance for all parameters on a single problem or best performance over the entire optimization process.
In order to build a stronger optimizer, we propose the new problem of optimizer amalgamation: how can we best combine a pool of multiple “teacher” optimizers, each of which might be good in certain cases, into a single stronger “student” optimizer that integrates their strengths and offsets their weaknesses? Specifically, we wish for our combined optimizer to be adaptive both per-parameter and per-iteration, and exploit problem-specific knowledge to improve performance on a class of problems.
To “amalgamate” an optimizer from a pool of optimizers, we draw inspiration from recent work in Learning to Optimize (Chen et al., 2021a) which provides a natural way to parameterize and train optimization update rules. In Learning to Optimize, optimizers are treated as policies to be learned from data. These “learned” optimizers are typically parameterized by a recurrent neural network
*Work done while the author was at the University of Texas at Austin.
(Andrychowicz et al., 2016; Lv et al., 2017); then, the optimizer is meta-trained to minimize the loss of training problems, or “optimizees”, by gradient descent using truncated back-propagation through time. Yet to our best knowledge, no existing work has leveraged those learnable parameterizations to amalgamate and combine analytical optimizers.
For our proposed formulation of optimizer amalgamation, we treat the learned optimizer as the amalgamation target. Then, we define amalgamation losses which can be used to combine feedback from multiple analytical optimizers into a single amalgamated optimizer, and present several amalgamation schemes. Finally, we explore smoothing methods that can be used during the amalgamation process to reduce the variance of the amalgamated optimizers. Our contributions are outlined below:
• We formulate the new problem of optimizer amalgamation, which we define as finding a way to best amalgamate a pool of multiple analytical optimizers to produce a single stronger optimizer. We present three schemes of optimizer amalgamation: additive amalgamation, min-max amalgamation, and imitation of a trained choice.
• We observe instability during the amalgamation process which leads to amalgamated optimizers having varied performance across multiple replicates. To mitigate this problem, we explore ways to reduce amalgamation variance by improving smoothness of the parameter space. We propose smoothing both by random noise or adversarial noise.
• We present experiments showing extensive and consistent results that validate the effectiveness of our proposal. Specifically, we find that more advanced amalgamation techniques and weight space training noise lead to better average-case performance and reduced variance. We also show that our amalgamation method performs significantly better than previous methods on all problems, with few exceptions.
2 RELATED WORKS
Knowledge Distillation and Amalgamation The prototype of knowledge distillation was first introduced by (Bucilua et al., 2006), which used it for model compression in order to train neural networks (“students”) to imitate the output of more complex models (“teachers”). Knowledge distillation was later formalized by (Hinton et al., 2015), who added a temperature parameter to soften the teacher predictions and found significant performance gains as a result.
The success of knowledge distillation spurred significant efforts to explain its effectiveness. Notably, Chen et al. (2020c); Yuan et al. (2020) discovered that trained distillation teachers could be replaced by hand-crafted distributions. (Yuan et al., 2020) provided further theoretical and empirical explanation for this behavior by explicitly connecting knowledge distillation to label smoothing, and (Ma et al.; Chen et al., 2021b) further credited the benefits of knowledge distillation to the improved smoothness of loss surfaces, which has been demonstrated to help adversarial training (Cohen et al., 2019; Lecuyer et al., 2019) and the training of sparse neural networks (Ma et al.).
The potential of knowledge distillation to improve the training of neural networks also spurred diverse works extending knowledge distillation. For example, (Romero et al., 2015; Wang et al., 2018; Shen et al., 2018; 2019b; Ye et al., 2020b) propose using intermediate feature representations as distillation targets instead of just network outputs, and (Tarvainen & Valpola, 2017; Yang et al., 2018; Zhang et al., 2019a) unify student and teacher network training to reduce computational costs. Knowledge distillation has also been extended to distilling multiple teachers, which is termed Knowledge Amalgamation (Shen et al., 2019a; Luo et al., 2019; Ye et al., 2019; 2020a).
Although using output logits from pre-trained networks has been extensively explored in knowledge distillation, we study a new direction of research: distilling optimization knowledge from sophisticated analytical optimizers to produce stronger “learned” optimizers, hence the name “optimizer amalgamation”. Not only is this a new topic never studied in the existing knowledge distillation literature, but it also requires distilling longitudinal output dynamics — not one final output — from multiple teachers.
Learning to optimize Learning to Optimize is a branch of meta learning which proposes to replace hand-crafted analytical optimizers with learned optimizers trained by solving optimization problems, or optimizees. The concept was first introduced by (Andrychowicz et al., 2016), who used a Long Short-Term Memory (LSTM) based model in order to parameterize gradient-based optimizers. This model took the loss gradient as its input and output a learned update rule which was then trained by
gradient descent using truncated backpropagation through time. (Andrychowicz et al., 2016) also established a coordinate-wise design pattern, where the same LSTM weights are applied to each parameter of the optimizee in order to facilitate generalization to models with different architectures.
Building on this architecture, Wichrowska et al. (2017) and Lv et al. (2017) proposed improvements such as hierarchical architectures connecting parameter RNNs together and augmenting the gradient with additional inputs. Many methods have also been proposed to improve the training of learned optimizers such as random scaling and convex augmentation (Lv et al., 2017), curriculum learning and imitation learning (Chen et al., 2020a), and Jacobian regularization (Li et al., 2020). Notably, Chen et al. (2020a) also proposed a method of imitation learning, which can be viewed as a way of distilling a single analytical optimizer into a learned parameterization.
Learning to Optimize has been extended to a variety of other problems such as graph convolutional networks (You et al., 2020), domain generalization (Chen et al., 2020b), noisy label training (Chen et al., 2020c), adversarial training (Jiang et al., 2018; Xiong & Hsieh, 2020), and minimax optimization (Shen et al., 2021). Moving away from gradient-based optimization, black-box optimization has also been explored (Chen et al., 2017; Cao et al., 2019). For a comprehensive survey with benchmarks, readers may refer to Chen et al. (2021a).
Perturbations and Robustness The optimization process is naturally subject to many possible sources of noise, such as the stochastic gradient noise (Devolder et al., 2011; Gorbunov et al., 2020; Simsekli et al., 2019), which is often highly non-Gaussian and heavy-tailed in practice; the random initialization and (often non-optimal) hyperparameter configuration; the different local minimum reached each time in non-convex optimization (Jain & Kar, 2017); and the limited numerical precision in implementations (De Sa et al., 2017). The seen and unseen optimizees also constitute domain shifts in our case. For the amalgamation process to be consistent and reliable, training needs to incorporate resistance to certain perturbations of the optimization process.
We draw inspiration from deep learning defenses against various random or malicious perturbations. For example, stability training (Zheng et al., 2016) stabilizes deep networks against small input distortions by regularizing the feature divergence caused by adding random Gaussian noise to the inputs. Adversarial robustness measures the ability of a neural network to defend against malicious perturbations of its inputs (Szegedy et al., 2013; Goodfellow et al., 2014). For that purpose, randomized smoothing (Lecuyer et al., 2019; Cohen et al., 2019) and adversarial training (Madry et al., 2017) have been found to increase model robustness with regard to random corruptions or worst-case perturbations, as well as against testing-time domain shifts (Ganin et al., 2016). Recent work (He et al., 2019; Wu et al., 2020) extends input perturbations to weight perturbations that explicitly regularize the flatness of the weight loss landscape, forming a double-perturbation mechanism for both inputs and weights.
Other Approaches The problem of how to better train machine learning models has many diverse approaches outside the Learning to Optimize paradigm that we draw from, and forms the broader AutoML problem Hutter et al. (2018) together with model selection algorithms. Our approach falls under meta-learning, which also includes learned initialization approaches such as MAML (Finn et al., 2017) and Reptile (Nichol et al., 2018). Other optimizer selection and tuning methods include hypergradient descent (Baydin et al., 2017) and bayesian hyperparameter optimization (Snoek et al., 2012). Similar to our knowledge amalgamation approach, algorithm portfolio methods (where many algorithms are available, and some subset is selected) have also been applied to several problem domains such as Linear Programming (Leyton-Brown et al., 2003) and SAT solvers (Xu et al., 2008).
3 OPTIMIZER AMALGAMATION
3.1 MOTIVATION
Optimizer selection and hyperparameter optimization is a difficult task even for experts. With a vast number of optimizers to choose from with varying performance dependent on the specific problem and data (Schmidt et al., 2020), most practitioners choose a reasonable default optimizer such as SGD or Adam and tune the learning rate to be “good enough” following some rule of thumb.
As a consequence of the No Free Lunch theorem (Wolpert & Macready, 1997), the best optimizer to use for each problem, weight tensor within each problem, or each parameter may be different.
In practice, different layers within a given neural network can benefit from differently tuned hyperparameters, for example by meta-tuning learning rates by layer (Chen et al., 2020b).
Accordingly, we wish to train an optimizer which is sufficiently versatile and adaptive at different stages of training and even to each parameter individually. Many methods have been proposed to parameterize optimizers in learnable forms including coordinate-wise LSTMs (Andrychowicz et al., 2016; Lv et al., 2017), recurrent neural networks with hierarchical architectures (Wichrowska et al., 2017; Metz et al., 2019), and symbolically in terms of predefined blocks (Bello et al., 2017). Due to its high expressiveness and relative ease of training, we use the workhorse LSTM-based RNNProp architecture described by Lv et al. (2017) as our amalgamation target; more details about this architecture can be found in Appendix C.1.
3.2 THE BASIC DISTILLATION MECHANISM
Knowledge distillation can be viewed as regularizing the training loss with a distillation loss that measures the distance between teacher and student predictions (Hinton et al., 2015). In order to distill a pool of teacher optimizers T = T_1, T_2, ..., T_k into our target policy P by truncated backpropagation (Appendix A: Algorithm 1), we start by defining a training loss and an amalgamation loss.

Meta Loss In the context of training optimizers, the training loss is described by the meta loss, which is a function of the optimizee problem loss at each step (Andrychowicz et al., 2016). Suppose we are training a policy P with parameters φ on a problem M : X → R whose output is a loss for each point in data domain X. During each iteration of truncated backpropagation through time, P is used to compute parameter updates for M, producing a trajectory of optimizee parameters θ_1, θ_2, ..., θ_N, where for the i-th data batch x_i and parameters θ_i at step i, θ_{i+1} = θ_i − P(∇_{θ_i} M(x_i, θ_i)).
For weighting functions f_1, f_2, ..., f_N, the meta loss is L_meta(x, θ; φ) = \sum_{i=1}^{N} f_i(M(x_i, θ_i)); specifically, we use the scaled log meta loss f_i(m) = log(m) − log(M(x_i, θ_0)), which can be interpreted as the “mean log improvement.”

Distillation Loss The distillation loss in knowledge distillation measures the distance between teacher predictions and student predictions. In training optimizers, this corresponds to the distance between the optimization trajectories generated by the teacher and student. Suppose we have optimizee parameter trajectories θ_i = (θ_i^{(P)}, θ_i^{(T)}) generated by the student and teacher, respectively. Then, our distillation loss L_T for teacher T is given by the l2 log-loss:

L_T(x, \theta; \phi) = \frac{1}{N} \sum_{i=1}^{N} \log \big\| \theta_i^{(P)} - \theta_i^{(T)} \big\|_2^2.   (1)
While knowledge distillation generally refers to imitating a model and imitation learning to imitating a policy, the optimizer in our case can be regarded as both a model and a policy. As such, our loss function is similar to the imitation loss mechanism used by Chen et al. (2020a), which can be thought of as a special case of optimizer amalgamation where only a single teacher is used.
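As an illustrative sketch (in NumPy, not the released implementation), the scaled log meta loss and the per-teacher distillation loss of Eq. (1) can be computed from stored trajectories as follows; the array names are hypothetical.

# Minimal sketch (NumPy) of the per-trajectory losses used for amalgamation.
# `student_traj` and `teacher_traj` are hypothetical lists of flattened
# optimizee parameter vectors theta_i produced by the student policy P and a
# teacher T over N unrolled steps; `losses` are the optimizee losses M(x_i, theta_i).
import numpy as np

def scaled_log_meta_loss(losses, initial_loss):
    # f_i(m) = log(m) - log(M(x_1, theta_0)): the "mean log improvement".
    return float(np.mean(np.log(losses) - np.log(initial_loss)))

def l2_log_distillation_loss(student_traj, teacher_traj):
    # Eq. (1): average log squared l2 distance between student and teacher
    # optimizee parameters at each unrolled step.
    dists = [np.sum((p - t) ** 2) for p, t in zip(student_traj, teacher_traj)]
    return float(np.mean(np.log(dists)))

# Toy usage with random trajectories.
rng = np.random.default_rng(0)
student = [rng.normal(size=10) for _ in range(5)]
teacher = [rng.normal(size=10) for _ in range(5)]
losses = np.abs(rng.normal(size=5)) + 0.1
print(scaled_log_meta_loss(losses, initial_loss=1.0))
print(l2_log_distillation_loss(student, teacher))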
3.3 AMALGAMATION OF MULTIPLE TEACHER OPTIMIZERS: THREE SCHEMES
Now, what if there are multiple teachers that we wish to amalgamate into a single policy? How to best combine different knowledge sources is a non-trivial question. We propose three mechanisms:
(1) Mean Amalgamation: adding distillation loss terms for each of the optimizers with constant equal weights.
(2) Min-max Amalgamation: using a min-max approach to combine loss terms for each of the optimizers, i.e., “the winner (worst) takes all”.
(3) Optimal Choice Amalgamation: First training an intermediate policy to choose the best optimizer to apply at each step, then distilling from that “choice optimizer”.
Mean Amalgamation In order to amalgamate our pool of teachers T = {T_1, ..., T_{|T|}}, we generate |T| + 1 trajectories θ_i = (θ_i^{(P)}, θ_i^{(T_1)}, ..., θ_i^{(T_{|T|})}) and add distillation losses for each teacher:

L_{mean}(x, \theta; \phi) = L_{meta}(x, \theta^{(P)}; \phi) + \alpha \frac{1}{N} \sum_{i=1}^{N} \frac{1}{|T|} \sum_{j=1}^{|T|} \log \big\| \theta_i^{(P)} - \theta_i^{(T_j)} \big\|_2.   (2)
If we view knowledge distillation as a regularizer which provides soft targets during training, mean amalgamation is the logical extension of this by simply adding multiple regularizers to training.
An interesting observation is: when multiple teachers diverge, mean amalgamation loss tends to encourage the optimizer to choose one of the teachers to follow, potentially discarding the influence of all other teachers. This may occur if one teacher is moving faster than another in the optimizee space, or if the teachers diverge in the direction of two different minima. As this choice is a local minimum with respect to the mean log amalgamation loss, the optimizer may “stick” to that teacher, even if it is not the best choice.
Min-Max Amalgamation In order to address this stickiness, we propose a second method: min-max amalgamation, where distillation losses are instead combined by taking the maximum distillation loss among all teachers at each time step:

L_{min\text{-}max}(x, \theta; \phi) = L_{meta}(x, \theta^{(P)}; \phi) + \alpha \frac{1}{N} \sum_{i=1}^{N} \max_{T \in T} \log \big\| \theta_i^{(P)} - \theta_i^{(T)} \big\|_2.   (3)
This results in a v-shaped loss landscape which encourages the amalgamation target to be between the trajectories generated by the teacher pool and prevents the optimizer from “sticking” to one of the teachers.
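As an illustrative contrast between the two schemes, the hedged NumPy sketch below computes the distillation terms of Eqs. (2) and (3) from stored trajectories; variable names are hypothetical and the meta loss term is omitted.

# Minimal NumPy sketch contrasting mean and min-max amalgamation (Eqs. 2-3).
# `student_traj` is the student's optimizee trajectory and `teacher_trajs` a
# list of trajectories, one per teacher in the pool; names are illustrative.
import numpy as np

def per_step_log_dist(student_traj, teacher_traj):
    return np.array([np.log(np.linalg.norm(p - t))
                     for p, t in zip(student_traj, teacher_traj)])

def mean_amalgamation_term(student_traj, teacher_trajs):
    # Average the per-step log distances over all teachers, then over time.
    d = np.stack([per_step_log_dist(student_traj, t) for t in teacher_trajs])
    return float(d.mean())

def minmax_amalgamation_term(student_traj, teacher_trajs):
    # At each step, penalize only the worst (largest) distance over teachers.
    d = np.stack([per_step_log_dist(student_traj, t) for t in teacher_trajs])
    return float(d.max(axis=0).mean())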
One weakness shared by both mean and min-max amalgamation is memory usage. Both require complete training trajectories for each teacher in the pool to be stored in memory, resulting in memory usage proportional to the number of teachers, which limits the number of teachers that we could amalgamate from in one pool.
Min-max amalgamation also does not fully solve the problem of diverging teachers. While min-max amalgamation does ensure that no teacher is ignored, it pushes the amalgamation target to the midpoint between the optimizee weights of the two teachers, which does not necessarily correspond to a good optimizee loss. In fact, when teachers diverge into multiple local minima, any solution which considers all teachers must necessarily push the learned optimizer against the gradient, while any solution which allows the learned optimizer to pick one side must discard a number of teachers.
Optimal Choice Amalgamation To fully unlock the power of knowledge amalgamation, we propose to solve the teacher divergence problem by first training an intermediate amalgamation target. By using only one teacher for a final distillation step, we remove the possibility of multiple teachers diverging while also allowing us to use more teachers without a memory penalty.
For optimizer pool T, we define a choice optimizer C which produces choices c_1, c_2, ..., c_N of which optimizer in the pool to apply at each time step, producing updates θ_{i+1}^{(C)} = θ_i^{(C)} − T_{c_i}(g_i). The objective of the choice optimizer is to minimize the meta loss L_meta(C; x) with respect to these choices c_{1:N}. We parameterize the choice function C as a small two-layer LSTM, and train it by gradient descent. The LSTM takes the outputs of each optimizer in the pool, the layer type, and the time step as inputs; more details are provided in Appendix C.1. To make it easier to train C by truncated back-propagation through time, we relax the choices c_{1:N} to be soft choices c_i ∈ R^{|T|} with c_i ≥ 0 and ||c_i||_1 = 1, resulting in the policy θ_{i+1}^{(C)} = θ_i^{(C)} − \sum_{j=1}^{|T|} c_i^{(j)} T_j(g_i). Now, we use C as a teacher to produce our final loss:

L_{choice}(x, \theta; \phi) = L_{meta}(x, \theta^{(P)}; \phi) + \alpha \frac{1}{N} \sum_{i=1}^{N} \log \big\| \theta_i^{(P)} - \theta_i^{(C)} \big\|_2.   (4)
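The relaxed choice policy can be sketched as a softmax-weighted combination of the teachers' proposed steps; the sketch below (NumPy) is illustrative only, with hypothetical array names, and omits the LSTM that produces the logits.

# Minimal sketch of the relaxed "choice" update: the choice network outputs a
# softmax weight c_i over the pool, and the update is the weighted sum of the
# analytical optimizers' proposed steps. `pool_updates` is a hypothetical
# (|T|, num_params) array of the steps each teacher would take.
import numpy as np

def soft_choice_update(theta, pool_updates, logits):
    c = np.exp(logits - logits.max())
    c = c / c.sum()                       # c_i >= 0, ||c_i||_1 = 1
    step = (c[:, None] * pool_updates).sum(axis=0)
    return theta - step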
4 STABILITY-AWARE OPTIMIZER AMALGAMATION
4.1 MOTIVATION
Modern optimization, even analytical, is subject to various forms of noise. For example, stochastic first-order methods are accompanied by gradient noise (Devolder et al., 2011; Gorbunov et al., 2020; Simsekli et al., 2019), which is often highly non-Gaussian and heavy-tailed in practice. Any non-convex optimization could reach a different local minimum each time it is solved (Jain &
Kar, 2017). When training deep neural networks, thousands or even millions of optimization steps are typically run, and the final outcome can be impacted by the random initialization, (often non-optimal) hyperparameter configuration, and even hardware precision (De Sa et al., 2017). Hence, it is highly desirable for optimizers to be stable: across different problem instances, between multiple training runs for the same problem, and throughout each training run (Lv et al., 2017).
Meta-training optimizers tends to be unstable. During the amalgamation process, we encounter significant variance where identically trained replicates achieve varying performance on our evaluation problems; this mirrors problems with meta-stability encountered by Metz et al. (2019). While amalgamation variance can be mitigated in small-scale experiments by amalgamating many times and using the best one, that variance represents a significant obstacle to large-scale training (i.e. on many and larger problems) and deployment of amalgamated optimizers. Thus, besides the aforementioned optimization stability issues, we also need to consider meta-stability, denoting the relative performance of optimizers across meta-training replicates.
In order to provide additional stability to the amalgamation process, we turn to adding noise during training, which is known to improve smoothness (Chen & Hsieh, 2020; Lecuyer et al., 2019; Cohen et al., 2019) and in turn improve stability (Miyato et al., 2018). Note that one can inject either random noise or adversarial perturbations onto either the input or the weight of the learnable optimizer. While perturbing inputs is more common, recent work (Wu et al., 2020) identified that a flatter weight loss landscape (loss change with respect to weight) leads to smaller robust generalization gap in adversarial training, thanks to its more “global” worst-case view.
We also discover in our experiments that perturbing inputs would make the meta-training hard to converge, presumably because the inputs to optimizers (gradients, etc.) already contain large amounts of batch noise and do not tolerate further corruption. We hence focus on perturbing optimizer weights for smoothness and stability.
4.2 WEIGHT SPACE PERTURBATION FOR SMOOTHNESS
Weight space smoothing produces a noised estimate of the loss L̃ by adding noise to the optimizer parameters φ. By replacing the loss L(φ,x) with a noisy loss L̃ = L(φ̃,x), we encourage the optimizer to be robust to perturbations of its weights, increasing the meta-stability. We explore two mechanisms to increase weight space smoothness during training, by adding (1) a random perturbation to the weights as a gradient estimator, and (2) an adversarial perturbation in the form of a projected gradient descent attack (PGD).
Though new to our application, these two mechanisms have been adopted for other problems where smoothness is important such as neural architecture search (Chen & Hsieh, 2020) and adversarial robustness (Lecuyer et al., 2019; Cohen et al., 2019).
Random Gaussian Perturbation In the first type of noise, we add Gaussian noise with variance σ² to each parameter of the optimizer at each iteration, φ̃ = φ + N(0, σ²I). Since optimizer weights tend to vary widely in magnitude, especially between different weight tensors, we modify this Gaussian noise to be adaptive to the l2 norm of each weight tensor φ^{(w)}. For tensor size |φ^{(w)}|, the perturbed tensor is given by

\tilde{\phi}^{(w)} = \phi^{(w)} + \mathcal{N}\Big(0, \; \sigma^2 \frac{\|\phi^{(w)}\|_2^2}{|\phi^{(w)}|} I\Big).   (5)
Projected Gradient Descent For the second type of noise, we use adversarial noise obtained by projected gradient descent (Appendix A, Algorithm 2). For A adversarial steps, the noised parameters are given by φ̃ = φ + ψ_A, where ψ_0 = 0 and ψ_{i+1} = ψ_i + η clip_ε(∇_{ψ_i} L) for optimizer loss L.
As with random Gaussian perturbations, we also modify the adversarial perturbation to be adaptive with magnitude proportional to the l2 norm of each weight tensor φ. Here, the adversarial attack step for weight tensor w is instead given by
\psi_{i+1}^{(w)} = \psi_i^{(w)} + \varepsilon \|\phi\|_2 \frac{\nabla_{\psi_i^{(w)}} L}{\|\nabla_{\psi_i^{(w)}} L\|_2}.   (6)
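A minimal sketch of this adversarial weight perturbation for a single tensor, assuming a hypothetical grad_fn that returns the meta-loss gradient at the perturbed weights:

# Adversarial weight perturbation (Eq. 6): each attack step moves the
# perturbation along the normalized meta-gradient, scaled by the tensor norm.
import numpy as np

def adversarial_perturbation(w, grad_fn, eps, steps):
    psi = np.zeros_like(w)
    for _ in range(steps):
        g = grad_fn(w + psi)
        psi = psi + eps * np.linalg.norm(w) * g / (np.linalg.norm(g) + 1e-12)
    return w + psi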
5 EXPERIMENTS
Optimizee Details All optimizers were amalgamated using a 2-layer convolutional neural network (CNN) on the MNIST (LeCun & Cortes, 2010) dataset (shortened as “Train”) using a batch size of 128. During evaluation, we test the generalization of the amalgamated optimizer to other problems: (1) Different Datasets: FMNIST (Xiao et al., 2017) and SVHN (Netzer et al., 2011). We also run experiments on CIFAR (Krizhevsky et al., 2009); since the Train network is too small to obtain reasonable performance on CIFAR, we replace it with the Wider architecture and a 28-layer ResNet (He et al., 2015), labelled “CIFAR” and “ResNet” respectively. (2) Different Architectures: a 2-layer MLP (MLP), a CNN with twice the number of units in each layer (Wider), and a deeper CNN (Deeper) with 5 convolutional layers. (3) Training settings: training with a smaller batch size of 32 (Small Batch). We also try a new setting of training with differential privacy (Abadi et al., 2016) (MNIST-DP). Appendix B provides full architecture and training specifications.
Optimizer Pool We use two different optimizer pools in our experiment: “small,” which consists of Adam and RMSProp, and “large,” which also contains SGD, Momentum, AddSign, and PowerSign. Each optimizer has a learning rate tuned by grid search over a grid of {5 × 10^{-4}, 1 × 10^{-3}, 2 × 10^{-3}, ..., 1}. The selection criterion is the best validation loss after 5 epochs for the Train network on MNIST, which matches the meta-training settings of the amalgamated optimizer. Appendix C.2 describes the optimizers used and other hyperparameters.
Baselines First, we compare our amalgamated optimizer against our analytical optimizer teachers, which are combined into an “oracle optimizer”: the optimizer in our pool of teachers with the best validation loss. We also compare against the optimal choice optimizer used in amalgamation, which functions like a per-iteration trained approximation of the oracle optimizer. Then, we evaluate previous learned optimizer methods: the original “Learning to Learn by Gradient Descent by Gradient Descent” optimizer Andrychowicz et al. (2016) which we refer to as “Original”, RNNProp (Lv et al., 2017), a hierarchical architecture presented by “Learned Optimizers that Scale and Generalize” (Wichrowska et al., 2017) which we refer to as “Scale”, and the best setup from Chen et al. (2020a), which we shorten as “Stronger Baselines.”
Training and Evaluation Details The RNNProp amalgamation target was trained using truncated backpropagation through time with a constant truncation length of 100 steps and a total unroll of up to 1000 steps, and meta-optimized by Adam with a learning rate of 1 × 10^{-3}. For our training process, we also apply random scaling (Lv et al., 2017) and curriculum learning (Chen et al., 2020a); more details about amalgamation training are provided in Appendix C.3. Amalgamation takes up to 6.35 hours for optimal choice amalgamation using the large pool and up to 10.53 hours when using adversarial perturbations; a full report of training times is provided in Appendix C.4.
For each optimizer amalgamation configuration tested, we independently trained 8 replicate optimizers. Then, each replicate was evaluated 10 times on each evaluation problem, and trained to a depth of 25 epochs each time. Finally, we measure the stability of amalgamated optimizers by defining three notions of stability for meta-trained optimizers: (1) Optimization stability: the stability of the optimizee during the optimization process. Viewing stability of the validation loss as a proxy for model stability with respect to the true data distribution, we measure the epoch-to-epoch variance of the validation loss after subtracting a smoothed validation loss curve (using a Gaussian filter). (2) Evaluation stability: the variance of optimizer performance across multiple evaluations. We find that the evaluation stability is roughly the same for all optimizers (Appendix E.1). (3) Meta-stability: the stability of the amalgamation process, i.e. the variance of amalgamation replicates after correcting for evaluation variance. Meta-stability and evaluation stability are jointly estimated using a linear mixed effects model. The stability is reported as a standard deviation. More details are in Appendix D.
5.1 OPTIMIZER AMALGAMATION
Amalgamation Methods Figure 1 compares the mean performance of the three amalgamation methods with the small pool and Choice amalgamation with the large pool. Mean and min-max amalgamation were not performed on the large pool due to memory constraints. The amalgamated optimizers using optimal choice amalgamation perform better than Mean and Min-Max amalgamation. The size of the optimizer pool does not appear to have a significant effect in Optimal Choice amalgamation, with small pool and large pool amalgamated optimizers obtaining similar results.
[Figure: scatter plot of best log validation loss versus optimization stability for the Amalgamated optimizer and the analytical optimizers Adam, RMSProp, SGD, Momentum, AddSign, and PowerSign.]
pool (Figure 5). Comparing analytical optimizers, we observe a general inverse relationship between optimization performance and optimization stability: in order to achieve better optimization, an optimizer typically sacrifices some optimization stability in order to move faster through the optimizee weight space. By integrating problem-specific knowledge, the amalgamated optimizer is able to combine the best optimization performance and optimization stability characteristics (Figure 4).
5.2 STABILITY-AWARE OPTIMIZER AMALGAMATION
Input Perturbation While we also tested perturbing the inputs of the optimizer during amalgamation, we were unable to improve stability. These experiments are included in Appendix E.4.
Random Perturbation Min-max amalgamation was trained on the small optimizer pool with random perturbation relative magnitudes of ε = {5 × 10^{-4}, 10^{-3}, 2 × 10^{-3}, 5 × 10^{-3}, 10^{-2}}. ε = 10^{-1} was also tested, but all replicates tested diverged and are not reported here.
Comparing perturbed amalgamation against the unperturbed baseline (ε = 0), we observe that perturbations increase meta-stability up to about ε = 10^{-3} (Figure 6). For larger perturbation magnitudes, meta-stability begins to decrease as the perturbation magnitude overwhelms the weight “signal,” eventually causing the training process to completely collapse for the largest perturbation values. While the stability with random perturbation ε = 10^{-2} is better than 10^{-3}, this is likely due to random chance, since we use a small sample size of 8 replicates.
Adversarial Perturbation Since adversarial perturbation is more computationally expensive than random perturbation, min-max amalgamation was tested on a coarser grid of relative magnitudes ε = {10^{-4}, 10^{-3}, 10^{-2}}, with an adversarial attack depth of 1 step. These results are also reported in Figure 6, with ε = 10^{-2} omitted since all replicates diverged during training.
From our results, we observe that adversarial perturbations are about as effective as random perturbations. We also observe that the maximum perturbation magnitude that the amalgamation process can tolerate is much smaller for adversarial perturbations compared to random perturbations, likely because adversarial perturbations are much “stronger.” Due to the significantly larger training cost of adversarial perturbations, we recommend random perturbations for future work.
Application to Other Methods Random and Adversarial perturbations can be applied to any gradient-based optimizer meta-training method, including all of our baselines. An experiment applying Gaussian perturbations to the RNNProp baseline can be found in Appendix E.5.
6 CONCLUSION
We define the problem of optimizer amalgamation, which we hope can inspire better and faster optimizers for researchers and practitioners. In this paper, we provide a procedure for optimizer amalgamation, including differentiable optimizer amalgamation mechanisms and amalgamation stability techniques. Then, we evaluate our problem on different datasets, architectures, and training settings to benchmark the strengths and weaknesses of our amalgamated optimizer. In the future, we hope to further improve the generalizability of amalgamated optimizers to even more distant problems.
A ALGORITHMS
In this section, we provide a detailed description of the key algorithms used in our paper.
Truncated Back-propagation: Algorithm 1 shows truncated back-propagation applied to optimizer amalgamation. For an unrolling length N , N data points (batches, in the case of mini-batch SGD) are sampled, which are split into N/t truncations of length t. Note that this requires N to be divisible by t; in our implementation, we require t and N/t to be specified as integers. For each truncation, the optimizee and teachers are trained for t iterations, and meta-gradients are computed over that truncation and applied.
Adversarial Weight Perturbation: Algorithm 2 shows adversarial perturbations applied to optimizer amalgamation. For each adversarial attack step, meta-gradients are taken with respect to the parameters, and are normalized for each tensor with respect to its tensor norm before being applied as an adversarial perturbation.
Algorithm 1: Distillation by Truncated Back-propagation
Inputs: amalgamation loss L_a; policy P with parameters φ; teacher policies T = T_1, ..., T_{|T|}; optimizee M, X, θ_0; unrolling and truncation lengths N, t
Outputs: updated policy parameters φ
Sample N data points x_1, ..., x_N from X.
θ_0^{(P)} = θ_0^{(T_1)} = ... = θ_0^{(T_{|T|})} = θ_0
for i = 1, 2, ..., N/t do
    for j = 1, 2, ..., t do
        n = (i − 1)t + j
        Update optimizee for P: θ_{n+1}^{(P)} ← θ_n^{(P)} − P[∇M(x_n, θ_n^{(P)})]
        Update optimizees for each teacher: for k = 1, ..., |T| do
            θ_{n+1}^{(T_k)} ← θ_n^{(T_k)} − T_k[∇M(x_n, θ_n^{(T_k)})]
        end
    end
    Compute distillation loss: L_i ← L_a(x_{[(i−1)t:it]}, θ_{[(i−1)t:it]}; φ)
    Update φ using ∇L_i
end
Algorithm 2: Adversarial Weight Perturbation for Truncated Back-propagation
Inputs: truncated back-propagation parameters L_a, P, φ, T, M, X, θ_0, N, t; adversarial attack steps A
Outputs: updated policy parameters φ
Sample N data points and initialize optimizee parameters
for i = 1, 2, ..., N/t do
    ψ_0 ← 0
    for a = 1, 2, ..., A do
        Compute trajectories θ_{[(i−1)t:it]} for P and T
        L_i^{(a)} ← L_a(x_{[(i−1)t:it]}, θ_{[(i−1)t:it]}; φ + ψ_{a−1})
        for each weight tensor w do
            γ ← ∇_{ψ^{(w)}_{a−1}} L_i^{(a)} / ||∇_{ψ^{(w)}_{a−1}} L_i^{(a)}||_2
            ψ_a^{(w)} ← ψ_{a−1}^{(w)} + ε ||φ||_2 γ
        end
    end
    L_i ← L_a(x_{[(i−1)t:it]}, θ_{[(i−1)t:it]}; φ + ψ_A)
    Update φ using ∇L_i
end
B OPTIMIZEE DETAILS
Table 1 shows a summary of the training problems used. While all training is performed on a 2-layer CNN on MNIST, we evaluated our optimizer on 4 different datasets described in (B.1) and 5 different architectures (described in B.2). We also experiment with different training settings, which are described in B.3.
B.1 DATASETS
All datasets used are classification datasets, with cross entropy used as the training loss. The MNIST dataset (LeCun & Cortes, 2010) is used during training; the other datasets, from most to least similar, are:
Sample images from these datasets are shown in Figure 7. All datasets were accessed using TensorFlow Datasets and have a CC-BY 4.0 license.
B.2 ARCHITECTURES
The Train convolutional network (Table 2a) has one convolution layer with 16 3x3 filters and one convolution layer with 32 5x5 filters. Each convolution layer uses ReLU activation, has stride 1x1, and is followed by a max pooling layer with size 2x2. Finally, a fully connected softmax layer is used at the output.
The four architectures evaluated are:
1. MLP: a 2-layer MLP with 20 hidden units and sigmoid activation
2. Wider: a modified version of Train with double width on each layer (Table 2b)
3. Deeper: a deeper network with 5 convolutional layers instead of 2 (Table 2c), again using ReLU activation and 1x1 stride
4. ResNet: a 28-layer ResNet (He et al., 2015) (Table 2d)
B.3 OPTIMIZEE TRAINING
During training, a batch size of 128 is used except for the Small Batch evaluation, which has a batch size of 32. During training and evaluation, datasets are reshuffled each iteration.
To match the warmup process used in meta-training, warmup is also applied during evaluation. The SGD learning rate is fixed at 0.01, a very conservative value that does not optimize quickly but largely avoids divergent behavior.
For differentially private training, we implement differentially private SGD (Abadi et al., 2016). In differentially private SGD, gradients are first clipped to a fixed l2 norm ε on a per-sample basis; then, Gaussian noise with standard deviation σε, where σ > 1, is added to the aggregated batch gradients. In our experiments, we use clipping norm ε = 1.0 and noise ratio σ = 1.1. Both MNIST and KMNIST are used as training sets in order to simulate transfer from a non-private dataset (MNIST) used for meta-training to a private dataset (KMNIST).
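A minimal NumPy sketch of this differentially private gradient computation (per-sample clipping followed by Gaussian noise); the array layout is an assumption for illustration:

# Differentially private SGD gradient: clip each per-sample gradient to l2
# norm eps, sum over the batch, add Gaussian noise with std sigma * eps, and
# average. `per_sample_grads` is a hypothetical (batch, num_params) array.
import numpy as np

def dp_sgd_gradient(per_sample_grads, eps=1.0, sigma=1.1, rng=None):
    rng = rng or np.random.default_rng()
    norms = np.linalg.norm(per_sample_grads, axis=1, keepdims=True)
    clipped = per_sample_grads * np.minimum(1.0, eps / (norms + 1e-12))
    noise = rng.normal(scale=sigma * eps, size=clipped.shape[1])
    return (clipped.sum(axis=0) + noise) / per_sample_grads.shape[0]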
C AMALGAMATION DETAILS
C.1 OPTIMIZER ARCHITECTURES
In this section, we provide the exact architecture specifications and hyperparameters of our amalgamated optimizer along with other training details and training time. Our implementation is open source, and can be found here: http://github.com/VITA-Group/OptimizerAmalgamation.
C.1.1 RNNPROP ARCHITECTURE
For our amalgamation target, we use the RNNProp architecture described by Lv et al. (2017). For each parameter on each time step, this architecture takes as inputs the RMSProp update g/√v̂ and the Adam update m̂/√v̂, using momentum (m̂) decay parameter β1 = 0.9 and variance (v̂) decay parameter β2 = 0.999, matching the values used for our analytical optimizers. These values pass through a 2-layer LSTM with tanh activation, sigmoid recurrent activation, and 20 units per layer. The output of this 2-layer LSTM is passed through a final fully connected layer with tanh activation to produce a scalar final update for each parameter.
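A hedged Keras sketch of this coordinate-wise architecture is given below; it is an illustrative reconstruction (class and method names are ours), not the exact released implementation.

# Coordinate-wise RNNProp-style update rule: per-parameter inputs are the
# RMSProp and Adam update directions, passed through two 20-unit LSTM cells
# and a tanh output layer that produces one scalar update per parameter.
import tensorflow as tf

class RNNPropCell(tf.keras.layers.Layer):
    def __init__(self, units=20):
        super().__init__()
        self.units = units
        self.cell1 = tf.keras.layers.LSTMCell(units, activation="tanh",
                                              recurrent_activation="sigmoid")
        self.cell2 = tf.keras.layers.LSTMCell(units, activation="tanh",
                                              recurrent_activation="sigmoid")
        self.out = tf.keras.layers.Dense(1, activation="tanh")

    def initial_state(self, num_params):
        # Zero [h, c] states for each LSTM cell, one row per parameter.
        zeros = lambda: [tf.zeros([num_params, self.units])] * 2
        return (zeros(), zeros())

    def call(self, inputs, states):
        # inputs: (num_params, 2) with columns [g/sqrt(v_hat), m_hat/sqrt(v_hat)]
        s1, s2 = states
        h1, s1 = self.cell1(inputs, s1)
        h2, s2 = self.cell2(h1, s2)
        return self.out(h2), (s1, s2)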
C.1.2 CHOICE NETWORK ARCHITECTURE
Our Choice network for Optimal Choice Amalgamation is a modified RNNProp architecture. The update steps for each analytical optimizer are given as inputs to the same 2-layer LSTM used in RNNProp. Additionally, the current time step and tensor number of dimensions are provided, with the number of dimensions being encoded as a one-hot vector.
Then, instead of directly using the output of a fully connected layer as the update, LSTM output passes through a fully connected layer with one output per optimizer in the pool. This fully connected layer has a softmax activation, and is used as weights to combine the analytical optimizer updates.
C.2 OPTIMIZER POOL
We consider six optimizers as teachers in this paper: Adam, RMSProp, SGD, Momentum, AddSign, and PowerSign. These optimizers are summarized in table 3.
Table 3: Optimizer pool update rules; all updates include an additional learning rate hyperparameter.
Optimizer     Update Rule
SGD           g
Momentum      m̂
RMSProp       g/√v̂
Adam          m̂/√v̂
AddSign       g(1 + sign(m̂)·sign(g))
PowerSign     g·exp(sign(m̂)·sign(g))
Joining the popular hand-crafted optimizers Adam, RMSProp, SGD, and Momentum, AddSign and PowerSign are two optimizers discovered by neural optimizer search (Bello et al., 2017). These two optimizers share the design principle that update steps should be larger when the momentum and gradient are in agreement:
AddSign ∝ g(1 + sign(m̂)·sign(g)),    PowerSign ∝ g·exp(sign(m̂)·sign(g)).   (7)
Here, g represents the gradient and m̂ an exponential moving average of the gradient. In order to use AddSign and PowerSign as teachers for gradient-based distillation, we modify them to be differentiable by replacing the sign function with a scaled tanh whose inputs are normalized by √v̂:

sign(m̂)·sign(g) ≈ tanh(m̂/√v̂)·tanh(g/√v̂).   (8)

By dividing by √v̂, we provide a consistent magnitude to the tanh function so that the sign agreement mechanism is not affected by overall gradient magnitudes.
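A minimal NumPy sketch of the resulting differentiable AddSign and PowerSign updates (function names are ours):

# Differentiable AddSign/PowerSign teachers (Eqs. 7-8): the sign agreement is
# replaced by a product of tanh terms whose inputs are normalized by sqrt(v_hat).
import numpy as np

def soft_sign_agreement(m_hat, g, v_hat, eps=1e-8):
    s = np.sqrt(v_hat) + eps
    return np.tanh(m_hat / s) * np.tanh(g / s)

def add_sign_update(g, m_hat, v_hat):
    return g * (1.0 + soft_sign_agreement(m_hat, g, v_hat))

def power_sign_update(g, m_hat, v_hat):
    return g * np.exp(soft_sign_agreement(m_hat, g, v_hat))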
For all optimizers, the momentum decay parameter is set to β1 = 0.9, the variance decay parameter is set to β2 = 0.999, and the learning rate multiplier is found by grid search on the Train optimizee over a grid of {5 × 10^{-4}, 1 × 10^{-3}, 2 × 10^{-3}, ..., 1}.
C.3 ADDITIONAL TRAINING DETAILS
During amalgamation, we apply a number of techniques from previous Learning to Optimize literature in order to boost training:
• Curriculum Learning: We apply curriculum learning (Chen et al., 2020a) to progressively increase the unrolling steps across a maximum of 4 stages with length 100, 200, 500, and 1000. During curriculum learning, checkpoints are saved and validated every 40 “meta-epochs,” which refers to a single optimizee trajectory trained with truncated back-propagation.
• Random Scaling: We apply random scaling (Lv et al., 2017) to reduce overfitting to the gradient magnitudes of the training problem. This random scaling is only applied to the amalgamation target; amalgamation teachers receive “clean” (unscaled) gradients.
• Warmup: Instead of initializing each training optimizee with random weights, we first apply 100 steps of SGD optimization as a “warmup” to avoid the turbulent initial phase of optimizing neural networks. A SGD learning rate of 0.01 is used during this period, and was chosen to be very conservative on all optimizees tested.
These techniques are also applied to all of our baselines, except that Random Scaling is only applied to baselines using the RNNProp architecture, since we find that it harms the performance of other optimizer architectures.
C.4 TRAINING COST
Table 4 provides a summary of the training costs for each amalgamation method and baseline. For optimal choice amalgamation, this includes both training the optimal choice optimizer and amalgamation training. All values are reported as the mean across 8 replicates.
All experiments were run on single nodes with 4x Nvidia 1080ti GPUs, providing us with a meta-batch size of 4 simultaneous optimizations. In order to replicate our results, GPUs with at least 11GB of memory are required, though less memory can be used if the truncation length for truncated back-propagation is reduced.
D STABILITY DEFINITIONS
In this section, we provide the mathematical definition and measurement details of meta-stability, evaluation stability, and optimization stability.
D.1 META-STABILITY AND EVALUATION STABILITY
In order to quantify meta-stability and evaluation stability, we first summarize the performance of each evaluation using the best validation loss obtained and the training loss of the last epoch. Then, we model the best validation loss Y_{ij}^{(val)} and final training loss Y_{ij}^{(train)} for replicate i and evaluation j with the linear mixed effect model

Y_{ij} = \mu + \alpha_i + \varepsilon_{ij},   (9)

where μ is the true mean, α_i are IID random variables representing the meta-stability of the amalgamated optimizer, and ε_{ij} are IID random variables representing the evaluation stability of each replicate. The meta-stability and evaluation stability are then quantified by the standard deviations σ_α and σ_ε.
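As an illustrative stand-in for the mixed effects fit, the following NumPy sketch uses a balanced one-way random-effects (method-of-moments) estimator to separate σ_α and σ_ε; it assumes a complete replicates-by-evaluations score matrix with hypothetical toy values.

# Simplified variance decomposition behind Eq. (9). The paper fits a linear
# mixed effects model; a balanced one-way random-effects ANOVA estimator is
# used here as an illustrative stand-in. `scores` is a hypothetical
# (replicates, evaluations) array of per-evaluation summary statistics.
import numpy as np

def stability_estimates(scores):
    r, e = scores.shape
    within_var = scores.var(axis=1, ddof=1).mean()      # sigma_eps^2
    between_var = scores.mean(axis=1).var(ddof=1)       # variance of replicate means
    meta_var = max(between_var - within_var / e, 0.0)   # sigma_alpha^2
    return np.sqrt(meta_var), np.sqrt(within_var)

# Toy usage: 8 replicates, 10 evaluations each.
rng = np.random.default_rng(0)
scores = (rng.normal(loc=-3.4, scale=0.05, size=(8, 1))
          + rng.normal(scale=0.02, size=(8, 10)))
print(stability_estimates(scores))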
D.2 OPTIMIZATION STABILITY
To measure optimization stability, we model the validation loss L_{ij}(t) at epoch t for replicate i and evaluation j as

L_{ij}(t) = \beta_{ij}(t) + \eta_{ij}^{(t)},   (10)

for a smooth function β_{ij}(t) which represents the behavior of the evaluation and a random variable η_{ij}^{(t)} which captures the optimization stability; we assume that η_{ij}^{(t)} is IID with respect to t.

In order to estimate σ_η, we first estimate β_{ij}(t) by applying a Gaussian filter with standard deviation σ = 2 (epochs) and filter edge mode “nearest,” and σ̂_η^{(ij)} is calculated accordingly. Finally, σ̂_η^{(ij)} is treated as a summary statistic for each evaluation, and the mixed effect model described previously (Equation 9) is fit to obtain a final confidence interval for the mean optimization stability.
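A minimal sketch of this per-evaluation estimate using SciPy's Gaussian filter (assuming SciPy is available; the function name is ours):

# Optimization-stability estimate: smooth the validation loss curve with a
# Gaussian filter (sigma = 2 epochs, "nearest" edge mode) and take the standard
# deviation of the residual as sigma_eta for that evaluation.
import numpy as np
from scipy.ndimage import gaussian_filter1d

def optimization_stability(val_losses, sigma=2.0):
    losses = np.asarray(val_losses, dtype=float)
    smoothed = gaussian_filter1d(losses, sigma=sigma, mode="nearest")
    return float(np.std(losses - smoothed))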
E ADDITIONAL RESULTS
E.1 EVALUATION STABILITY
Table 5 summarizes the evaluation stability of analytical and amalgamated optimizers. All optimizers obtain similar evaluation stability, except for cases where an optimizer cannot reliably train the optimizee at all such as Momentum, AddSign, and PowerSign on the Deeper CNN. In these cases, the optimizer consistently learns a constant or random classifier, which results in very low variance and high “stability.”
E.2 LARGER EVALUATIONS
In order to explore the limits of our optimizer, we evaluated the amalgamated optimizer with a 52-layer ResNet (763882 parameters — 40x the train network size), and the same 52-layer ResNet on CIFAR-100 instead of CIFAR-10. These results are compared to CIFAR-10 on a 2-layer network and CIFAR-10 on a 28-layer ResNet using Adam as a single baseline (Figure 8).
While our amalgamated optimizer has significant performance advantages on the shallow CIFAR-10 network in our original evaluations and achieves performance parity in the 28-layer ResNet, the amalgamated optimizer can no longer perform as well as the oracle once we reach 52 layers and change to CIFAR-100.
E.3 ADDITIONAL PLOTS
In this section, we include plots providing alternate versions of Figures 2, 3, and 5 in the main text which had some outliers cropped out in order to improve readability.
E.4 INPUT PERTURBATION
When applying input perturbations, we perturb the inputs to the optimizer, or the optimizee gradients, instead of the optimizer weights:
\theta_{i+1} = \theta_i - P\big(\nabla_{\theta_i} M(x_i, \theta_i) + \mathcal{N}(0, \sigma^2 I)\big).   (11)
We tested magnitudes σ = 10^{-1} and σ = 10^{-2} on a smaller experiment size of 6 replicates using Choice amalgamation on the small pool as a baseline; these results are given in Table 6. Many variants of input noise remain to be explored, such as adding noise proportional to the parameter norm or gradient norm, or trying smaller noise magnitudes; this may be a potential area of future study. However, we believe that input noise is generally not helpful to optimizer amalgamation, and did not study it further.
E.5 BASELINES WITH RANDOM PERTURBATION
Our perturbation methods can be applied to any gradient-based optimizer meta-training method, including all of our baselines. To demonstrate this application, we trained 8 RNNProp replicates with Gaussian perturbations with magnitude 1 × 10^{-4}; all other settings were identical to the RNNProp baseline. With perturbations, the RNNProp baseline is significantly improved, though not enough to match the performance of our amalgamation method. | 1. What is the focus and contribution of the paper on optimizer amalgamation?
2. What are the strengths of the proposed approach, particularly in its effectiveness and differentiation from other works?
3. What are the weaknesses of the paper, especially regarding its comparisons with prior art and lack of discussion? | Summary Of The Paper
Review | Summary Of The Paper
This paper presents a new optimizer amalgamation method to combine a pool of optimizers into one in order to achieve stronger problem-specific performance. Three differentiable amalgamation mechanisms are designed and stabilization methods are explored. The proposed method is empirically shown to be effective when compared to a large number of baselines.
Review
Strengths
The paper is generally well-written and easy to follow.
The proposed optimizer amalgamation method is interesting and effective.
The experiment is comprehensive and supports the main claim of the paper.
Weakness
Selecting an appropriate optimizer for a given problem is not a new research topic, and many papers exist in the areas of algorithm selection, algorithm portfolio, and meta-learning. I think one weakness of the paper is that it lacks a discussion of this related work and of how the paper can be better positioned in the literature.
ICLR | Title
Optimizer Amalgamation
Abstract
Selecting an appropriate optimizer for a given problem is of major interest for researchers and practitioners. Many analytical optimizers have been proposed using a variety of theoretical and empirical approaches; however, none can offer a universal advantage over other competitive optimizers. We are thus motivated to study a new problem named Optimizer Amalgamation: how can we best combine a pool of “teacher” optimizers into a single “student” optimizer that can have stronger problem-specific performance? In this paper, we draw inspiration from the field of “learning to optimize” to use a learnable amalgamation target. First, we define three differentiable amalgamation mechanisms to amalgamate a pool of analytical optimizers by gradient descent. Then, in order to reduce the variance of the amalgamation process, we also explore methods to stabilize the amalgamation process by perturbing the amalgamation target. Finally, we present experiments showing the superiority of our amalgamated optimizer compared to its amalgamated components and learning to optimize baselines, and the efficacy of our variance-reducing perturbations. Our code and pre-trained models are publicly available at http://github.com/VITA-Group/OptimizerAmalgamation.
1 INTRODUCTION
Gradient-based optimization is ubiquitous in machine learning; accordingly, a cottage industry of gradient-based optimizer design has emerged (Schmidt et al., 2020). These optimizers generally propose algorithms that aim to make the “best” parameter update for a computed gradient (Kingma & Ba, 2017; Liu et al., 2020), with some also modifying the location where the parameters are computed (Zhang et al., 2019b). However, while each gradient-based optimizer claims specific problems where it holds performance advantages, none can claim to be universally superior. Due to the “No Free Lunch” theorem for optimization (Wolpert & Macready, 1997), no optimizer can provide better performance on a class of problems without somehow integrating problem-specific knowledge from that class.
Furthermore, problems such as training neural networks are not homogeneous. In the spatial dimension, different layers or even parameters can have different behavior (Chen et al., 2020b). Also, as evidenced by the popularity of learning rate schedules, neural network optimization behaves very differently in the temporal dimension as well (Golatkar et al., 2019). This implies that no optimizer can provide the best performance for all parameters on a single problem or best performance over the entire optimization process.
In order to build a stronger optimizer, we propose the new problem of optimizer amalgamation: how can we best combine a pool of multiple “teacher” optimizers, each of which might be good in certain cases, into a single stronger “student” optimizer that integrates their strengths and offsets their weaknesses? Specifically, we wish for our combined optimizer to be adaptive both per-parameter and per-iteration, and exploit problem-specific knowledge to improve performance on a class of problems.
To “amalgamate” an optimizer from a pool of optimizers, we draw inspiration from recent work in Learning to Optimize (Chen et al., 2021a) which provides a natural way to parameterize and train optimization update rules. In Learning to Optimize, optimizers are treated as policies to be learned from data. These “learned” optimizers are typically parameterized by a recurrent neural network
*Work done while the author was at the University of Texas at Austin.
(Andrychowicz et al., 2016; Lv et al., 2017); then, the optimizer is meta-trained to minimize the loss of training problems, or “optimizees”, by gradient descent using truncated back-propagation through time. Yet, to the best of our knowledge, no existing work has leveraged those learnable parameterizations to amalgamate and combine analytical optimizers.
For our proposed formulation of optimizer amalgamation, we treat the learned optimizer as the amalgamation target. Then, we define amalgamation losses which can be used to combine feedback from multiple analytical optimizers into a single amalgamated optimizer, and present several amalgamation schemes. Finally, we explore smoothing methods that can be used during the amalgamation process to reduce the variance of the amalgamated optimizers. Our contributions are outlined below:
• We formulate the new problem of optimizer amalgamation, which we define as finding a way to best amalgamate a pool of multiple analytical optimizers to produce a single stronger optimizer. We present three schemes of optimizer amalgamation: additive amalgamation, min-max amalgamation, and imitation of a trained choice.
• We observe instability during the amalgamation process which leads to amalgamated optimizers having varied performance across multiple replicates. To mitigate this problem, we explore ways to reduce amalgamation variance by improving smoothness of the parameter space. We propose smoothing both by random noise or adversarial noise.
• We present experiments showing extensive and consistent results that validate the effectiveness of our proposal. Specifically, we find that more advanced amalgamation techniques and weight space training noise lead to better average-case performance and reduced variance. We also show that our amalgamation method performs significantly better than previous methods on all problems, with few exceptions.
2 RELATED WORKS
Knowledge Distillation and Amalgamation The prototype of knowledge distillation was first introduced by (Bucilua et al., 2006), which used it for model compression in order to train neural networks (“students”) to imitate the output of more complex models (“teachers”). Knowledge distillation was later formalized by (Hinton et al., 2015), who added a temperature parameter to soften the teacher predictions and found significant performance gains as a result.
The success of knowledge distillation spurred significant efforts to explain its effectiveness. Notably, Chen et al. (2020c); Yuan et al. (2020) discovered that trained distillation teachers could be replaced by hand-crafted distributions. (Yuan et al., 2020) provided further theoretical and empirical explanation for this behavior by explicitly connecting knowledge distillation to label smoothing, and (Ma et al.; Chen et al., 2021b) further credited the benefits of knowledge distillation to the improved smoothness of loss surfaces, which has been demonstrated to help adversarial training (Cohen et al., 2019; Lecuyer et al., 2019) and the training of sparse neural networks (Ma et al.).
The potential of knowledge distillation to improve the training of neural networks also spurred diverse works extending knowledge distillation. For example, (Romero et al., 2015; Wang et al., 2018; Shen et al., 2018; 2019b; Ye et al., 2020b) propose using intermediate feature representations as distillation targets instead of just network outputs, and (Tarvainen & Valpola, 2017; Yang et al., 2018; Zhang et al., 2019a) unify student and teacher network training to reduce computational costs. Knowledge distillation has also been extended to distilling multiple teachers, which is termed Knowledge Amalgamation (Shen et al., 2019a; Luo et al., 2019; Ye et al., 2019; 2020a).
Although using output logits from pre-trained networks has been extensively explored in knowledge distillation, we study a new direction of research: distilling optimization knowledge from sophisticated analytical optimizers to produce stronger “learned” optimizers, hence the name “optimizer amalgamation”. Not only is this a new topic never studied in the existing knowledge distillation literature, but it also requires distilling longitudinal output dynamics — not one final output — from multiple teachers.
Learning to optimize Learning to Optimize is a branch of meta learning which proposes to replace hand-crafted analytical optimizers with learned optimizers trained by solving optimization problems, or optimizees. The concept was first introduced by (Andrychowicz et al., 2016), who used a Long Short-Term Memory (LSTM) based model in order to parameterize gradient-based optimizers. This model took the loss gradient as its input and output a learned update rule which was then trained by
gradient descent using truncated backpropagation through time. (Andrychowicz et al., 2016) also established a coordinate-wise design pattern, where the same LSTM weights are applied to each parameter of the optimizee in order to facilitate generalization to models with different architectures.
Building on this architecture, Wichrowska et al. (2017) and Lv et al. (2017) proposed improvements such as hierarchical architectures connecting parameter RNNs together and augmenting the gradient with additional inputs. Many methods have also been proposed to improve the training of learned optimizers such as random scaling and convex augmentation (Lv et al., 2017), curriculum learning and imitation learning (Chen et al., 2020a), and Jacobian regularization (Li et al., 2020). Notably, Chen et al. (2020a) also proposed a method of imitation learning, which can be viewed as a way of distilling a single analytical optimizer into a learned parameterization.
Learning to Optimize has been extended to a variety of other problems such as graph convolutional networks (You et al., 2020), domain generalization (Chen et al., 2020b), noisy label training (Chen et al., 2020c), adversarial training (Jiang et al., 2018; Xiong & Hsieh, 2020), and minimax optimization (Shen et al., 2021). Moving away from gradient-based optimization, black-box optimization has also been explored (Chen et al., 2017; Cao et al., 2019). For a comprehensive survey with benchmarks, readers may refer to Chen et al. (2021a).
Perturbations and Robustness The optimization process is naturally subject to many possible sources of noise, such as the stochastic gradient noise (Devolder et al., 2011; Gorbunov et al., 2020; Simsekli et al., 2019), which is often highly non-Gaussian and heavy-tailed in practice; the random initialization and (often non-optimal) hyperparameter configuration; the different local minimum reached each time in non-convex optimization (Jain & Kar, 2017); and the limited numerical precision in implementations (De Sa et al., 2017). The seen and unseen optimizees also constitute domain shifts in our case. For the amalgamation process to be consistent and reliable, training needs to incorporate resistance to certain perturbations of the optimization process.
We draw inspiration from deep learning defenses against various random or malicious perturbations. For example, stability training (Zheng et al., 2016) stabilizes deep networks against small input distortions by regularizing the feature divergence caused by adding random Gaussian noise to the inputs. Adversarial robustness measures the ability of a neural network to defend against malicious perturbations of its inputs (Szegedy et al., 2013; Goodfellow et al., 2014). For that purpose, randomized smoothing (Lecuyer et al., 2019; Cohen et al., 2019) and adversarial training (Madry et al., 2017) have been found to increase model robustness with regard to random corruptions or worst-case perturbations, as well as against testing-time domain shifts (Ganin et al., 2016). Recent work (He et al., 2019; Wu et al., 2020) extends input perturbations to weight perturbations that explicitly regularize the flatness of the weight loss landscape, forming a double-perturbation mechanism for both inputs and weights.
Other Approaches The problem of how to better train machine learning models has many diverse approaches outside the Learning to Optimize paradigm that we draw from, and forms the broader AutoML problem Hutter et al. (2018) together with model selection algorithms. Our approach falls under meta-learning, which also includes learned initialization approaches such as MAML (Finn et al., 2017) and Reptile (Nichol et al., 2018). Other optimizer selection and tuning methods include hypergradient descent (Baydin et al., 2017) and bayesian hyperparameter optimization (Snoek et al., 2012). Similar to our knowledge amalgamation approach, algorithm portfolio methods (where many algorithms are available, and some subset is selected) have also been applied to several problem domains such as Linear Programming (Leyton-Brown et al., 2003) and SAT solvers (Xu et al., 2008).
3 OPTIMIZER AMALGAMATION
3.1 MOTIVATION
Optimizer selection and hyperparameter optimization is a difficult task even for experts. With a vast number of optimizers to choose from, whose performance varies depending on the specific problem and data (Schmidt et al., 2020), most practitioners choose a reasonable default optimizer such as SGD or Adam and tune the learning rate to be “good enough” following some rule of thumb.
As a consequence of the No Free Lunch theorem (Wolpert & Macready, 1997), the best optimizer to use for each problem, weight tensor within each problem, or each parameter may be different.
In practice, different layers within a given neural network can benefit from differently tuned hyperparameters, for example by meta-tuning learning rates by layer (Chen et al., 2020b).
Accordingly, we wish to train an optimizer which is sufficiently versatile and adaptive at different stages of training, and even to each parameter individually. Many methods have been proposed to parameterize optimizers in learnable forms, including coordinate-wise LSTMs (Andrychowicz et al., 2016; Lv et al., 2017), recurrent neural networks with hierarchical architectures (Wichrowska et al., 2017; Metz et al., 2019), and symbolic combinations of predefined blocks (Bello et al., 2017). Due to its high expressiveness and relative ease of training, we will use the workhorse LSTM-based RNNProp architecture described by Lv et al. (2017) as our amalgamation target; more details about this architecture can be found in Appendix C.1.
3.2 THE BASIC DISTILLATION MECHANISM
Knowledge distillation can be viewed as regularizing the training loss with a distillation loss that measures the distance between teacher and student predictions (Hinton et al., 2015). In order to distill a pool of teacher optimizers T = {T_1, T_2, . . . , T_k} into our target policy P by truncated backpropagation (Appendix A: Algorithm 1), we start by defining a training loss and an amalgamation loss.

Meta Loss In the context of training optimizers, the training loss is described by the meta loss, which is a function of the optimizee problem loss at each step (Andrychowicz et al., 2016). Suppose we are training a policy P with parameters φ on a problem M : X → R whose output is a loss for each point in the data domain X. During each iteration of truncated backpropagation through time, P is used to compute parameter updates for M, yielding a trajectory of optimizee parameters θ_1, θ_2, . . . , θ_N, where for the i-th data batch x_i and parameters θ_i at step i, θ_{i+1} = θ_i − P(∇_{θ_i} M(x_i, θ_i)).
For some weighting functions f_1, f_2, . . . , f_N, the meta loss is L_meta(x, θ; φ) = ∑_{i=1}^{N} f_i(M(x_i, θ_i)); specifically, we will use the scaled log meta loss f_i(m) = log(m) − log(M(x_i, θ_0)), which can be interpreted as the “mean log improvement.”

Distillation Loss The distillation loss in knowledge distillation measures the distance between teacher predictions and student predictions. In training optimizers, this corresponds to the distance between the optimization trajectories generated by the teacher and student. Suppose we have optimizee parameter trajectories θ_i = (θ_i^{(P)}, θ_i^{(T)}) generated by the student and teacher, respectively. Then, our distillation loss L_T for teacher T is given by the l2 log-loss:
L_T(x, \theta; \phi) = \frac{1}{N} \sum_{i=1}^{N} \log \left\| \theta_i^{(P)} - \theta_i^{(T)} \right\|_2^2.   (1)
While knowledge distillation generally refers to imitating a model and imitation learning imitating a policy, the optimizer in our case can be regarded as both a model and a policy. As such, our loss function is similar to the imitation loss mechanism used by Chen et al. (2020a), which can be thought of as a special case of optimizer amalgamation where only a single teacher is used.
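To make the two losses concrete, the following is a minimal NumPy sketch of the scaled log meta loss and the l2 log distillation loss of Eq. (1) computed over an unrolled trajectory; the function and variable names are our own illustration rather than the authors' implementation.

```python
import numpy as np

def scaled_log_meta_loss(step_losses, init_losses):
    # f_i(m) = log(m) - log(M(x_i, theta_0)); init_losses[i] stands for M(x_i, theta_0).
    step_losses = np.asarray(step_losses, dtype=float)
    init_losses = np.asarray(init_losses, dtype=float)
    return float(np.mean(np.log(step_losses) - np.log(init_losses)))

def l2_log_distillation_loss(student_traj, teacher_traj):
    # Eq. (1): (1/N) * sum_i log || theta_i^(P) - theta_i^(T) ||_2^2
    terms = [np.log(np.sum((p - t) ** 2)) for p, t in zip(student_traj, teacher_traj)]
    return float(np.mean(terms))
```

In the actual amalgamation procedure these quantities are computed inside truncated backpropagation (Appendix A, Algorithm 1) so that gradients flow back to the policy parameters φ.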
3.3 AMALGAMATION OF MULTIPLE TEACHER OPTIMIZERS: THREE SCHEMES
Now, what if there are multiple teachers that we wish to amalgamate into a single policy? How to best combine different knowledge sources is a non-trivial question. We propose three mechanisms:
(1) Mean Amalgamation: adding distillation loss terms for each of the optimizers with constant equal weights.
(2) Min-max Amalgamation: using a min-max approach to combine loss terms for each of the optimizers, i.e., “the winner (worst) takes all”.
(3) Optimal Choice Amalgamation: First training an intermediate policy to choose the best optimizer to apply at each step, then distilling from that “choice optimizer”.
Mean Amalgamation In order to amalgamate our pool of teachers T = {T_1, . . . , T_{|T|}}, we generate |T| + 1 trajectories θ_i = (θ_i^{(P)}, θ_i^{(T_1)}, . . . , θ_i^{(T_{|T|})}) and add distillation losses for each teacher:
L_{mean}(x, \theta; \phi) = L_{meta}(x, \theta^{(P)}; \phi) + \alpha \frac{1}{N} \sum_{i=1}^{N} \frac{1}{|T|} \sum_{j=1}^{|T|} \log \left\| \theta_i^{(P)} - \theta_i^{(T_j)} \right\|_2.   (2)
If we view knowledge distillation as a regularizer which provides soft targets during training, mean amalgamation is the logical extension of this by simply adding multiple regularizers to training.
An interesting observation is: when multiple teachers diverge, mean amalgamation loss tends to encourage the optimizer to choose one of the teachers to follow, potentially discarding the influence of all other teachers. This may occur if one teacher is moving faster than another in the optimizee space, or if the teachers diverge in the direction of two different minima. As this choice is a local minimum with respect to the mean log amalgamation loss, the optimizer may “stick” to that teacher, even if it is not the best choice.
Min-Max Amalgamation In order to address this stickiness, we propose a second method: min-max amalgamation, where distillation losses are instead combined by taking the maximum distillation loss among all terms at each time step:
L_{min\text{-}max}(x, \theta; \phi) = L_{meta}(x, \theta^{(P)}; \phi) + \alpha \frac{1}{N} \sum_{i=1}^{N} \max_{T \in \mathcal{T}} \log \left\| \theta_i^{(P)} - \theta_i^{(T)} \right\|_2.   (3)
This results in a v-shaped loss landscape which encourages the amalgamation target to be between the trajectories generated by the teacher pool and prevents the optimizer from “sticking” to one of the teachers.
One weakness shared by both mean and min-max amalgamation is memory usage. Both require complete training trajectories for each teacher in the pool to be stored in memory, resulting in memory usage proportional to the number of teachers, which limits the number of teachers that we could amalgamate from in one pool.
Min-max amalgamation also does not fully solve the problem of diverging teachers. While min-max amalgamation does ensure that no teacher is ignored, it pushes the amalgamation target to the midpoint between the optimizee weights of the two teachers, which does not necessarily correspond to a good optimizee loss. In fact, when teachers diverge into multiple local minima, any solution which considers all teachers must necessarily push the learned optimizer against the gradient, while any solution which allows the learned optimizer to pick one side must discard a number of teachers.
Optimal Choice Amalgamation To fully unlock the power of knowledge amalgamation, we propose to solve the teacher divergence problem by first training an intermediate amalgamation target. By using only one teacher for a final distillation step, we remove the possibility of multiple teachers diverging while also allowing us to use more teachers without a memory penalty.
For optimizer pool T, we define a choice optimizer C which produces choices c_1, c_2, . . . , c_N of which optimizer in the pool to apply at each time step, producing updates θ_{i+1}^{(C)} = θ_i^{(C)} − T_{c_i}(g_i). The objective of the choice optimizer is to minimize the meta loss L_meta(C; x) with respect to these choices c_{1:N}. We parameterize the choice function C as a small two-layer LSTM, and train it by gradient descent. The LSTM takes the outputs of each optimizer in the pool, the layer type, and the time step as inputs; more details are provided in Appendix C.1. To make it easier to train C by truncated back-propagation through time, we relax the choices c_{1:N} to instead be soft choices c_i ∈ R^{|T|} with c_i ≥ 0 and ||c_i||_1 = 1, resulting in the policy θ_{i+1}^{(C)} = θ_i^{(C)} − ∑_{j=1}^{|T|} c_i^{(j)} T_j(g_i). Now, we use C as a teacher to produce our final loss:
L_{choice} = L_{meta}(\phi; x) + \alpha \frac{1}{N} \sum_{i=1}^{N} \log \left\| \theta_i^{(P)} - \theta_i^{(C)} \right\|_2.   (4)
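A compact sketch of how the three amalgamation schemes combine the per-teacher distillation terms is given below. It assumes flattened parameter vectors per step and hypothetical helper names of our own, so it illustrates Eqs. (2)–(4) rather than reproducing the authors' code.

```python
import numpy as np

def log_l2_terms(student_traj, teacher_traj):
    # log || theta_i^(P) - theta_i^(T) ||_2 for each step i
    return np.array([np.log(np.linalg.norm(p - t) + 1e-12)
                     for p, t in zip(student_traj, teacher_traj)])

def mean_amalgamation_loss(meta_loss, student_traj, teacher_trajs, alpha):
    # Eq. (2): average the distillation terms over teachers and steps.
    terms = np.stack([log_l2_terms(student_traj, t) for t in teacher_trajs])
    return meta_loss + alpha * terms.mean()

def minmax_amalgamation_loss(meta_loss, student_traj, teacher_trajs, alpha):
    # Eq. (3): at each step, keep only the worst (largest) teacher term.
    terms = np.stack([log_l2_terms(student_traj, t) for t in teacher_trajs])
    return meta_loss + alpha * terms.max(axis=0).mean()

def choice_amalgamation_loss(meta_loss, student_traj, choice_traj, alpha):
    # Eq. (4): distill from the single trajectory of the trained choice optimizer C.
    return meta_loss + alpha * log_l2_terms(student_traj, choice_traj).mean()
```

The min-max variant differs from the mean variant only in the per-step reduction over teachers, which is what produces the v-shaped loss landscape described above.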
4 STABILITY-AWARE OPTIMIZER AMALGAMATION
4.1 MOTIVATION
Modern optimization, even analytical, is subject to various forms of noise. For example, stochastic first-order methods are accompanied by gradient noise (Devolder et al., 2011; Gorbunov et al., 2020; Simsekli et al., 2019), which is often highly non-Gaussian and heavy-tailed in practice. Any non-convex optimization could reach a different local minimum each time it is solved (Jain &
Kar, 2017). When training deep neural networks, thousands or even millions of optimization steps are typically run, and the final outcome can be impacted by the random initialization, (often non-optimal) hyperparameter configuration, and even hardware precision (De Sa et al., 2017). Hence, it is highly desirable for optimizers to be stable: across different problem instances, between multiple training runs for the same problem, and throughout each training run (Lv et al., 2017).
Meta-training optimizers tends to be unstable. During the amalgamation process, we encounter significant variance where identically trained replicates achieve varying performance on our evaluation problems; this mirrors problems with meta-stability encountered by Metz et al. (2019). While amalgamation variance can be mitigated in small-scale experiments by amalgamating many times and using the best one, that variance represents a significant obstacle to large-scale training (i.e. on many and larger problems) and deployment of amalgamated optimizers. Thus, besides the aforementioned optimization stability issues, we also need to consider meta-stability, denoting the relative performance of optimizers across meta-training replicates.
In order to provide additional stability to the amalgamation process, we turn to adding noise during training, which is known to improve smoothness (Chen & Hsieh, 2020; Lecuyer et al., 2019; Cohen et al., 2019) and in turn improve stability (Miyato et al., 2018). Note that one can inject either random noise or adversarial perturbations onto either the input or the weight of the learnable optimizer. While perturbing inputs is more common, recent work (Wu et al., 2020) identified that a flatter weight loss landscape (loss change with respect to weight) leads to smaller robust generalization gap in adversarial training, thanks to its more “global” worst-case view.
We also discover in our experiments that perturbing inputs would make the meta-training hard to converge, presumably because the inputs to optimizers (gradients, etc.) already contain large amounts of batch noise and do not tolerate further corruption. We hence focus on perturbing optimizer weights for smoothness and stability.
4.2 WEIGHT SPACE PERTURBATION FOR SMOOTHNESS
Weight space smoothing produces a noised estimate of the loss L̃ by adding noise to the optimizer parameters φ. By replacing the loss L(φ,x) with a noisy loss L̃ = L(φ̃,x), we encourage the optimizer to be robust to perturbations of its weights, increasing the meta-stability. We explore two mechanisms to increase weight space smoothness during training, by adding (1) a random perturbation to the weights as a gradient estimator, and (2) an adversarial perturbation in the form of a projected gradient descent attack (PGD).
Though new to our application, these two mechanisms have been adopted for other problems where smoothness is important such as neural architecture search (Chen & Hsieh, 2020) and adversarial robustness (Lecuyer et al., 2019; Cohen et al., 2019).
Random Gaussian Perturbation In the first type of noise, we add Gaussian noise with variance σ^2 to each parameter of the optimizer at each iteration, φ̃ = φ + N(0, σ^2 I). Since optimizer weights tend to vary widely in magnitude, especially between different weight tensors, we modify this Gaussian noise to be adaptive to the l2 norm of each weight tensor φ^{(w)}. For tensor size |φ^{(w)}|, the added noise is given by
\tilde{\phi}^{(w)} = \phi^{(w)} + \mathcal{N}\!\left(0, \; \sigma^2 \frac{\|\phi^{(w)}\|_2^2}{|\phi^{(w)}|} I\right).   (5)
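As a sketch (assuming each weight tensor is stored as a NumPy array keyed by name), the adaptive Gaussian perturbation of Eq. (5) can be implemented as follows.

```python
import numpy as np

def perturb_weights_gaussian(weights, sigma, rng=None):
    # Eq. (5): per-tensor variance sigma^2 * ||phi^(w)||_2^2 / |phi^(w)|
    rng = rng or np.random.default_rng(0)
    perturbed = {}
    for name, w in weights.items():
        var = sigma ** 2 * np.sum(w ** 2) / w.size
        perturbed[name] = w + rng.normal(0.0, np.sqrt(var), size=w.shape)
    return perturbed
```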
Projected Gradient Descent For the second type of noise, we use adversarial noise obtained by projected gradient descent (Appendix A, Algorithm 2). For A adversarial steps, the noised parameters are given by φ̃ = φ + ψ_A, where ψ_0 = 0 and ψ_{i+1} = ψ_i + η clip_ε(∇_{ψ_i} L) for optimizer loss L.
As with random Gaussian perturbations, we also modify the adversarial perturbation to be adaptive, with magnitude proportional to the l2 norm of each weight tensor of φ. Here, the adversarial attack step for weight tensor w is instead given by
\psi_{i+1}^{(w)} = \psi_i^{(w)} + \varepsilon \|\phi\|_2 \frac{\nabla_{\psi_i^{(w)}} L}{\|\nabla_{\psi_i^{(w)}} L\|_2}.   (6)
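A minimal sketch of the tensor-normalized attack step of Eq. (6) is shown below, with `grad_fn` standing in for a routine (an assumption on our part) that returns the meta-gradient of the amalgamation loss with respect to the perturbed weights.

```python
import numpy as np

def adversarial_weight_perturbation(weights, grad_fn, eps, steps=1):
    # Accumulate psi over `steps` attack iterations, normalizing per tensor (Eq. (6)).
    psi = {k: np.zeros_like(w) for k, w in weights.items()}
    for _ in range(steps):
        grads = grad_fn({k: weights[k] + psi[k] for k in weights})
        for k, w in weights.items():
            g = grads[k]
            psi[k] = psi[k] + eps * np.linalg.norm(w) * g / (np.linalg.norm(g) + 1e-12)
    return {k: weights[k] + psi[k] for k in weights}
```

With `steps=1` this matches the single-step adversarial depth used in the experiments below.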
5 EXPERIMENTS
Optimizee Details All optimizers were amalgamated using a 2-layer convolutional neural network (CNN) on the MNIST (LeCun & Cortes, 2010) dataset (shortened as “Train”) using a batch size of 128. During evaluation, we test the generalization of the amalgamated optimizer to other problems: (1) Different Datasets: FMNIST (Xiao et al., 2017) and SVHN (Netzer et al., 2011). We also run experiments on CIFAR (Krizhevsky et al., 2009); since the Train network is too small to obtain reasonable performance on CIFAR, we substitute it with the Wider architecture and a 28-layer ResNet (He et al., 2015), labelled “CIFAR” and “ResNet” respectively. (2) Different Architectures: a 2-layer MLP (MLP), a CNN with twice the number of units in each layer (Wider), and a deeper CNN (Deeper) with 5 convolutional layers. (3) Training settings: training with a smaller batch size of 32 (Small Batch). We also try a new setting of training with differential privacy (Abadi et al., 2016) (MNIST-DP). Appendix B provides full architecture and training specifications.
Optimizer Pool We use two different optimizer pools in our experiment: “small,” which consists of Adam and RMSProp, and “large,” which also contains SGD, Momentum, AddSign, and PowerSign. Each optimizer has a learning rate tuned by grid search over a grid of {5 × 10−4, 1 × 10−3, 2 × 10−3, . . . 1}. The selection criteria is the best validation loss after 5 epochs for the Train network on MNIST, which matches the meta-training settings of the amalgamated optimizer. Appendix C.2 describes the optimizers used and other hyperparameters.
Baselines First, we compare our amalgamated optimizer against our analytical optimizer teachers, which are combined into an “oracle optimizer”: the optimizer in our pool of teachers with the best validation loss. We also compare against the optimal choice optimizer used in amalgamation, which functions like a per-iteration trained approximation of the oracle optimizer. Then, we evaluate previous learned optimizer methods: the original “Learning to Learn by Gradient Descent by Gradient Descent” optimizer (Andrychowicz et al., 2016), which we refer to as “Original”; RNNProp (Lv et al., 2017); the hierarchical architecture presented in “Learned Optimizers that Scale and Generalize” (Wichrowska et al., 2017), which we refer to as “Scale”; and the best setup from Chen et al. (2020a), which we shorten to “Stronger Baselines.”
Training and Evaluation Details The RNNProp amalgamation target was trained using truncated backpropagation through time with a constant truncation length of 100 steps and a total unroll of up to 1000 steps, and was meta-optimized by Adam with a learning rate of 1 × 10−3. For our training process, we also apply random scaling (Lv et al., 2017) and curriculum learning (Chen et al., 2020a); more details about amalgamation training are provided in Appendix C.3. Amalgamation takes up to 6.35 hours for optimal choice amalgamation using the large pool and up to 10.53 hours when using adversarial perturbations; a full report of training times is provided in Appendix C.4.
For each optimizer amalgamation configuration tested, we independently trained 8 replicate optimizers. Then, each replicate was evaluated 10 times on each evaluation problem, and trained to a depth of 25 epochs each time. Finally, we measure the stability of amalgamated optimizers by defining three notions of stability for meta-trained optimizers: (1) Optimization stability: the stability of the optimizee during the optimization process. Viewing stability of the validation loss as a proxy for model stability with respect to the true data distribution, we measure the epoch-to-epoch variance of the validation loss after subtracting a smoothed validation loss curve (using a Gaussian filter). (2) Evaluation stability: the variance of optimizer performance across multiple evaluations. We find that the evaluation stability is roughly the same for all optimizers (Appendix E.1). (3) Meta-stability: the stability of the amalgamation process, i.e. the variance of amalgamation replicates after correcting for evaluation variance. Meta-stability and evaluation stability are jointly estimated using a linear mixed effects model. The stability is reported as a standard deviation. More details are in Appendix D.
5.1 OPTIMIZER AMALGAMATION
Amalgamation Methods Figure 1 compares the mean performance of the three amalgamation methods with the small pool and Choice amalgamation with the large pool. Mean and min-max amalgamation were not performed on the large pool due to memory constraints. The amalgamated optimizers using optimal choice amalgamation perform better than Mean and Min-Max amalgamation. The size of the optimizer pool does not appear to have a significant effect in Optimal Choice amalgamation, with small pool and large pool amalgamated optimizers obtaining similar results.
[Figure: scatter of best log validation loss versus optimization stability for the Amalgamated optimizer and the analytical optimizers Adam, RMSProp, SGD, Momentum, AddSign, and PowerSign.]
pool (Figure 5). Comparing analytical optimizers, we observe a general inverse relationship between optimization performance and optimization stability: in order to achieve better optimization, an optimizer typically sacrifices some optimization stability in order to move faster through the optimizee weight space. By integrating problem-specific knowledge, the amalgamated optimizer is able to combine the best optimization performance and optimization stability characteristics (Figure 4).
5.2 STABILITY-AWARE OPTIMIZER AMALGAMATION
Input Perturbation While we also tested perturbing the inputs of the optimizer during amalgamation, we were unable to improve stability. These experiments are included in Appendix E.4.
Random Perturbation Min-max amalgamation was trained on the small optimizer pool with random perturbation relative magnitudes of ε = {5 × 10−4, 10−3, 2 × 10−3, 5 × 10−3, 10−2}. ε = 10−1 was also tested, but all replicates tested diverged and are not reported here.
Comparing perturbed amalgamation against the nonperturbed baseline (ε = 0), we observe that perturbations increase meta-stability up to about ε = 10−3 (Figure 6). For larger perturbation magnitudes, meta-stability begins to decrease as the perturbation magnitude overwhelms the weight “signal,” eventually causing the training process to completely collapse for larger perturbation values. While the stability with random perturbation ε = 10−2 is better than 10−3, this is likely due to random chance, since we use a small sample size of 8 replicates.
Adversarial Perturbation Since adversarial perturbation is more computationally expensive than random perturbations, min-max amalgamation was tested on a coarser grid of relative magnitudes ε = {10−4, 10−3, 10−2}, and to an adversarial attack depth of 1 step. These results are also reported in Figure 6, with ε = 10−2 omitted since all replicates diverged during training.
From our results, we observe that adversarial perturbations are about as effective as random perturbations. We also observe that the maximum perturbation magnitude that the amalgamation process can tolerate is much smaller for adversarial perturbations compared to random perturbations, likely because adversarial perturbations are much “stronger.” Due to the significantly larger training cost of adversarial perturbations, we recommend random perturbations for future work.
Application to Other Methods Random and Adversarial perturbations can be applied to any gradient-based optimizer meta-training method, including all of our baselines. An experiment applying Gaussian perturbations to the RNNProp baseline can be found in Appendix E.5.
6 CONCLUSION
We define the problem of optimizer amalgamation, which we hope can inspire better and faster optimizers for researchers and practitioners. In this paper, we provide a procedure for optimizer amalgamation, including differentiable optimizer amalgamation mechanisms and amalgamation stability techniques. Then, we evaluate our method on different datasets, architectures, and training settings to benchmark the strengths and weaknesses of our amalgamated optimizer. In the future, we hope to further improve the generalizability of amalgamated optimizers to even more distant problems.
A ALGORITHMS
In this section, we provide a detailed description of the key algorithms used in our paper.
Truncated Back-propagation: Algorithm 1 shows truncated back-propagation applied to optimizer amalgamation. For an unrolling length N , N data points (batches, in the case of mini-batch SGD) are sampled, which are split into N/t truncations of length t. Note that this requires N to be divisible by t; in our implementation, we require t and N/t to be specified as integers. For each truncation, the optimizee and teachers are trained for t iterations, and meta-gradients are computed over that truncation and applied.
Adversarial Weight Perturbation: Algorithm 2 shows adversarial perturbations applied to optimizer amalgamation. For each adversarial attack step, meta-gradients are taken with respect to the parameters, and are normalized for each tensor with respect to its tensor norm before being applied as an adversarial perturbation.
Algorithm 1: Distillation by Truncated Back-propagation
Inputs:  Amalgamation loss La; policy P with parameters φ; teacher policies T = T1, . . . , T|T|;
         optimizee M, X, θ0; unrolling and truncation lengths N, t
Outputs: Updated policy parameters φ

Sample N data points x1, . . . , xN from X.
θ0^(P) = θ0^(T1) = . . . = θ0^(T|T|) = θ0
for i = 1, 2, . . . , N/t do
    for j = 1, 2, . . . , t do
        n = (i − 1)t + j
        Update optimizee for P:  θ_{n+1}^(P) ← θ_n^(P) − P[∇M(x_n, θ_n^(P))]
        Update optimizees for each teacher:
        for k = 1, . . . , |T| do
            θ_{n+1}^(Tk) ← θ_n^(Tk) − Tk[∇M(x_n, θ_n^(Tk))]
        end
    end
    Compute distillation loss: L_i ← La(x_{[(i−1)t : it]}, θ_{[(i−1)t : it]}; φ)
    Update φ using ∇L_i
end
Algorithm 2: Adversarial Weight Perturbation for Truncated Back-propagation
Inputs:  Truncated back-propagation parameters La, P, φ, T, M, X, θ0, N, t; adversarial attack steps A
Outputs: Updated policy parameters φ

Sample N data points and initialize optimizee parameters
for i = 1, 2, . . . , N/t do
    ψ0 ← 0
    for a = 1, 2, . . . , A do
        Compute trajectories θ_{[(i−1)t : it]} for P and T
        L_i^(a) ← La(x_{[(i−1)t : it]}, θ_{[(i−1)t : it]}; φ + ψ_a)
        for each weight tensor w do
            γ ← ∇_{ψ_i^(w)} L_i^(a) / ‖∇_{ψ_i^(w)} L_i^(a)‖_2
            ψ_a^(w) ← ψ_{a−1} + ε‖φ‖_2 γ
        end
    end
    L_i ← La(x_{[(i−1)t : it]}, θ_{[(i−1)t : it]}; φ + ψ_A)
    Update φ using ∇L_i
end
B OPTIMIZEE DETAILS
Table 1 shows a summary of the training problems used. While all training is performed on a 2-layer CNN on MNIST, we evaluated our optimizer on 4 different datasets described in (B.1) and 5 different architectures (described in B.2). We also experiment with different training settings, which are described in B.3.
B.1 DATASETS
All datasets used are classification datasets, with cross entropy used as the training loss. The MNIST dataset (LeCun & Cortes, 2010) is used during training; the other datasets, from most to least similar, are:
Sample images from these datasets are shown in Figure 7. All datasets were accessed using TensorFlow Datasets and have a CC-BY 4.0 license.
B.2 ARCHITECTURES
The Train convolutional network (Table 2a) has one convolution layer with 16 3x3 filters and one convolution layer with 32 5x5 filters. Each convolution layer uses ReLU activation, has stride 1x1, and is followed by a max pooling layer with size 2x2. Finally, a fully connected softmax layer is used at the output.
The four architectures evaluated are:
1. MLP: a 2-layer MLP with 20 hidden units and sigmoid activation
2. Wider: a modified version of Train with double width on each layer (Table 2b)
3. Deeper: a deeper network with 5 convolutional layers instead of 2 (Table 2c), again using ReLU activation and 1x1 stride
4. ResNet: a 28-layer ResNet (He et al., 2015) (Table 2d)
B.3 OPTIMIZEE TRAINING
During training, a batch size of 128 is used except for the Small Batch evaluation, which has a batch size of 32. During training and evaluation, datasets are reshuffled each iteration.
To match the warmup process used in meta-training, warmup is also applied during evaluation. The warmup SGD learning rate is fixed at 0.01, a very conservative value that does not optimize quickly but largely avoids divergent behavior.
For differentially private training, we implement differentially private SGD (Abadi et al., 2016). In differentially private SGD, gradients are first clipped to a fixed l2 norm ε on a per-sample basis; then, Gaussian noise with standard deviation σε, where σ > 1, is added to the aggregated batch gradients. In our experiments, we use clipping norm ε = 1.0 and noise ratio σ = 1.1. Both MNIST and KMNIST are used as training sets in order to simulate transfer from a non-private dataset (MNIST) used for meta-training to a private dataset (KMNIST).
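For reference, a simplified sketch of one differentially private SGD step with the stated settings (clipping norm ε = 1.0, noise ratio σ = 1.1) follows; details such as where the aggregate is divided by the batch size are our assumption and may differ from the exact implementation used.

```python
import numpy as np

def dp_sgd_step(params, per_sample_grads, lr, clip=1.0, sigma=1.1, rng=None):
    rng = rng or np.random.default_rng(0)
    # Clip each per-sample gradient to l2 norm `clip`, aggregate, then add Gaussian noise.
    clipped = [g * min(1.0, clip / (np.linalg.norm(g) + 1e-12)) for g in per_sample_grads]
    total = np.sum(clipped, axis=0)
    noisy = total + rng.normal(0.0, sigma * clip, size=total.shape)
    return params - lr * noisy / len(per_sample_grads)
```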
C AMALGAMATION DETAILS
C.1 OPTIMIZER ARCHITECTURES
In this section, we provide the exact architecture specifications and hyperparameters of our amalgamated optimizer along with other training details and training time. Our implementation is open source, and can be found here: http://github.com/VITA-Group/OptimizerAmalgamation.
C.1.1 RNNPROP ARCHITECTURE
For our amalgamation target, we use the RNNProp architecture described by Lv et al. (2017). For each parameter at each time step, this architecture takes as inputs the RMSProp update g/√v̂ and the Adam update m̂/√v̂, using momentum (m̂) decay parameter β1 = 0.9 and variance (v̂) decay parameter β2 = 0.999, matching the values used for our analytical optimizers. These values pass through a 2-layer LSTM with tanh activation, sigmoid recurrent activation, and 20 units per layer. The output of this 2-layer LSTM is passed through a final fully connected layer with tanh activation to produce a scalar final update for each parameter.
C.1.2 CHOICE NETWORK ARCHITECTURE
Our Choice network for Optimal Choice Amalgamation is a modified RNNProp architecture. The update steps for each analytical optimizer are given as inputs to the same 2-layer LSTM used in RNNProp. Additionally, the current time step and tensor number of dimensions are provided, with the number of dimensions being encoded as a one-hot vector.
Then, instead of directly using the output of a fully connected layer as the update, LSTM output passes through a fully connected layer with one output per optimizer in the pool. This fully connected layer has a softmax activation, and is used as weights to combine the analytical optimizer updates.
C.2 OPTIMIZER POOL
We consider six optimizers as teachers in this paper: Adam, RMSProp, SGD, Momentum, AddSign, and PowerSign. These optimizers are summarized in table 3.
Table 3: Optimizer pool update rules; all updates include an additional learning rate hyperparameter.
Optimizer    Update Rule
SGD          g
Momentum     m̂
RMSProp      g/√v̂
Adam         m̂/√v̂
AddSign      g(1 + sign(m̂)sign(g))
PowerSign    g exp(sign(m̂)sign(g))
Joining the popular hand-crafted optimizers Adam, RMSProp, SGD, and Momentum, AddSign and PowerSign are two optimizers discovered by neural optimizer search (Bello et al., 2017). These two optimizers share the design principle that update steps should be larger when the momentum and gradient are in agreement:
AddSign ∝ g(1 + sign(m̂) sign(g)),   PowerSign ∝ g exp(sign(m̂) sign(g)).   (7)
Here, g represents the gradient and m̂ an exponential moving average of the gradient. In order to use AddSign and PowerSign as teachers for gradient-based distillation, we modify them to be differentiable by replacing the sign function with a scaled tanh with magnitudes normalized by √v̂:
sign(m̂) sign(g) ≈ tanh(m̂/√v̂) tanh(g/√v̂)   (8)
By dividing by √v̂, we provide a consistent magnitude to the tanh function so that the sign agreement mechanism is not affected by overall gradient magnitudes.
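As an illustration, the differentiable surrogates used for the two sign-based teachers can be written as follows (a NumPy sketch; the real teachers additionally apply a tuned learning rate).

```python
import numpy as np

def soft_sign_agreement(m_hat, g, v_hat, eps=1e-8):
    # Eq. (8): sign(m)sign(g) ~ tanh(m/sqrt(v)) * tanh(g/sqrt(v))
    s = np.sqrt(v_hat) + eps
    return np.tanh(m_hat / s) * np.tanh(g / s)

def addsign_update(g, m_hat, v_hat):
    return g * (1.0 + soft_sign_agreement(m_hat, g, v_hat))

def powersign_update(g, m_hat, v_hat):
    return g * np.exp(soft_sign_agreement(m_hat, g, v_hat))
```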
For all optimizers, the momentum decay parameter is set to β1 = 0.9, the variance decay parameter is set to β2 = 0.999, and the learning rate multiplier is found by grid search on the Train optimizee over a grid of {5× 10−4, 1× 10−3, 2× 10−3, . . . 1}.
C.3 ADDITIONAL TRAINING DETAILS
During amalgamation, we apply a number of techniques from previous Learning to Optimize literature in order to boost training:
• Curriculum Learning: We apply curriculum learning (Chen et al., 2020a) to progressively increase the unrolling steps across a maximum of 4 stages with length 100, 200, 500, and 1000. During curriculum learning, checkpoints are saved and validated every 40 “meta-epochs,” which refers to a single optimizee trajectory trained with truncated back-propagation.
• Random Scaling: We apply random scaling (Lv et al., 2017) to reduce overfitting to the gradient magnitudes of the training problem. This random scaling is only applied to the amalgamation target; amalgamation teachers receive “clean” (unscaled) gradients.
• Warmup: Instead of initializing each training optimizee with random weights, we first apply 100 steps of SGD optimization as a “warmup” to avoid the turbulent initial phase of optimizing neural networks. A SGD learning rate of 0.01 is used during this period, and was chosen to be very conservative on all optimizees tested.
These techniques are also applied to all of our baselines, except that Random Scaling is only applied to baselines using the RNNProp architecture, since we find that it harms the performance of other optimizer architectures.
C.4 TRAINING COST
Table 4 provides a summary of the training costs for each amalgamation method and baseline. For optimal choice amalgamation, this includes both training the optimal choice optimizer and amalgamation training. All values are reported as the mean across 8 replicates.
All experiments were run on single nodes with 4x Nvidia 1080ti GPUs, providing us with a meta-batch size of 4 simultaneous optimizations. In order to replicate our results, GPUs with at least 11GB of memory are required, though less memory can be used if the truncation length for truncated back-propagation is reduced.
D STABILITY DEFINITIONS
In this section, we provide the mathematical definition and measurement details of meta-stability, evaluation stability, and optimization stability.
D.1 META-STABILITY AND EVALUATION STABILITY
In order to quantify meta-stability and evaluation stability, we first summarize the performance of each evaluation using the best validation loss obtained and the training loss of the last epoch. Then, we model the best validation loss Y_{ij}^{(val)} and final training loss Y_{ij}^{(train)} for replicate i and evaluation j with the linear mixed effects model
Y_{ij} = \mu + \alpha_i + \varepsilon_{ij},   (9)
where µ is the true mean, αi are IID random variables representing the meta-stability of the amalgamated optimizer, and εij are IID random variables representing the evaluation stability of each replicate. The meta-stability and evaluation stability are then quantified by standard deviations σα and σε.
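One way to obtain σ_α and σ_ε is with an off-the-shelf random-intercept model; the sketch below uses statsmodels' MixedLM as an assumed tool, so the authors' exact estimation code may differ.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def stability_from_summaries(df: pd.DataFrame):
    # df columns: "y" = per-evaluation summary statistic (e.g. best validation loss),
    #             "replicate" = id of the amalgamation replicate.
    fit = smf.mixedlm("y ~ 1", df, groups=df["replicate"]).fit()
    sigma_alpha = float(np.sqrt(fit.cov_re.iloc[0, 0]))  # random-intercept std: meta-stability
    sigma_eps = float(np.sqrt(fit.scale))                # residual std: evaluation stability
    return sigma_alpha, sigma_eps
```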
D.2 OPTIMIZATION STABILITY
To measure optimization stability, we model the validation loss Lij(t) at epoch t for replicate i and evaluation j as
L_{ij}(t) = \beta_{ij}(t) + \eta_{ij}^{(t)}   (10)
for a smooth function β_{ij}(t), which represents the behavior of the evaluation, and a random variable η_{ij}^{(t)}, which captures the optimization stability; we assume that η_{ij}^{(t)} is IID with respect to t.
In order to estimate σ_η, we first estimate β_{ij}(t) by applying a Gaussian filter with standard deviation σ = 2 (epochs) and filter edge mode “nearest,” and σ̂_η^{(ij)} is calculated accordingly. Finally, σ̂_η^{(ij)} is treated as a summary statistic for each evaluation, and the mixed effects model described previously (Equation 9) is fit to obtain a final confidence interval for the mean optimization stability.
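A sketch of the optimization-stability statistic for a single validation-loss curve, using the stated filter settings, is below.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def optimization_stability(val_losses, sigma=2.0):
    # Smooth the curve to estimate beta_ij(t), then take the std of the residual eta.
    curve = np.asarray(val_losses, dtype=float)
    smooth = gaussian_filter1d(curve, sigma=sigma, mode="nearest")
    return float(np.std(curve - smooth))
```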
E ADDITIONAL RESULTS
E.1 EVALUATION STABILITY
Table 5 summarizes the evaluation stability of analytical and amalgamated optimizers. All optimizers obtain similar evaluation stability, except for cases where an optimizer cannot reliably train the optimizee at all such as Momentum, AddSign, and PowerSign on the Deeper CNN. In these cases, the optimizer consistently learns a constant or random classifier, which results in very low variance and high “stability.”
E.2 LARGER EVALUATIONS
In order to explore the limits of our optimizer, we evaluated the amalgamated optimizer with a 52-layer ResNet (763882 parameters — 40x the train network size), and the same 52-layer ResNet on CIFAR-100 instead of CIFAR-10. These results are compared to CIFAR-10 on a 2-layer network and CIFAR-10 on a 28-layer ResNet using Adam as a single baseline (Figure 8).
While our amalgamated optimizer has significant performance advantages on the shallow CIFAR-10 network in our original evaluations and achieves performance parity in the 28-layer ResNet, the amalgamated optimizer can no longer perform as well as the oracle once we reach 52 layers and change to CIFAR-100.
E.3 ADDITIONAL PLOTS
In this section, we include plots providing alternate versions of Figures 2, 3, and 5 in the main text which had some outliers cropped out in order to improve readability.
E.4 INPUT PERTURBATION
When applying input perturbations, we perturb the inputs to the optimizer, or the optimizee gradients, instead of the optimizer weights:
\theta_{i+1} = \theta_i - P\!\left(\nabla_{\theta_i} M(x_i, \theta_i) + \mathcal{N}(0, \sigma^2 I)\right).   (11)
We tested magnitudes σ = 10−1 and σ = 10−2 on a smaller experiment size of 6 replicates using Choice amalgamation on the small pool as a baseline; these results are given in Table 6. Many variants of input noise remain unexplored, such as adding noise proportional to the parameter norm or gradient norm, or trying smaller noise magnitudes; this may be a potential area of future study. However, we believe that input noise is generally not helpful to optimizer amalgamation, and did not study it further.
E.5 BASELINES WITH RANDOM PERTURBATION
Our perturbation methods can be applied to any gradient-based optimizer meta-training method, including all of our baselines. To demonstrate this application, we trained 8 RNNProp replicates with Gaussian perturbations with magnitude 1× 10−4; all other settings were identical to the RNNProp baseline. With perturbations, the RNNProp baseline is significantly improved, though not enough to match the performance of our amalgamation method. | 1. What is the main contribution of the paper, and how does it attempt to improve the stability and effectiveness of learned optimizers?
2. What are the strengths and weaknesses of the proposed approach, particularly regarding its ability to compare with analytical optimizers and L2O baselines?
3. Do you have any questions or concerns about the definition and use of certain terms, such as "stability" and "meta-stability"?
4. How does the reviewer assess the effectiveness and practicality of the proposed method, especially when compared to imitation learning and other optimization techniques?
5. Are there any suggestions or ideas that the reviewer has for improving the proposed method, such as incorporating additional imitation learning baselines or exploring different types of perturbations? | Summary Of The Paper
Review | Summary Of The Paper
This paper proposed a new problem called Optimizer Amalgamation and made an attempt to obtain a more powerful learned optimizer from several analytical optimizers. More concretely, three amalgamation losses are designed to train the amalgamated optimizer. In the meanwhile, two types of noise, random gaussian perturbation and projected gradient descent are incorporated into the training objective to increase the stability of the optimizer. The evaluation part compared the amalgamated optimizer with its original teachers, i.e., those analytical optimizers, and some L2O baselines on image classification tasks.
Review
Strengths:
This paper is mostly clear and organized, except for some points mentioned in Weaknesses below.
The paper made an interesting attempt to distill the knowledge from analytical optimizers with three amalgamation methods and conducted comprehensive experiments to show the effectiveness of the amalgamated optimizer.
Weaknesses:
There are some places that the paper did not describe the content clearly, especially for the definition of stability. When to use Y_{ij}^{val} and Y_{ij}^{train} in Eq. (9) was not discussed and thus it is confusing to me how to derive the meta-stability and evaluation stability.
In terms of both the validation loss and the validation accuracy, the amalgamated optimizer did not have obvious advantages over the oracle optimizer, as shown in Figure 4 and Figure 9, which makes the method not very practical.
Although the authors claimed that they tried to distill the knowledge from traditional optimizers, it is also related to imitation learning. However, there was no discussion about the difference between knowledge distillation and imitation learning when trying to learn the optimization pattern from analytical optimizers. Some imitation baselines might be incorporated into the meta-loss as well.
I am wondering whether first initializing the learned optimizer via the proposed amalgamation methods or imitation learning methods and then training it with the meta loss will have better performance. The reason is that a randomly initialized optimizer is hard to train at the beginning and might disturb the whole training trajectory.
It seems that adversarial perturbation performed similarly as random perturbation. Since adversarial perturbation requires more computing resources, then why not just use random perturbation? Are there any other advantages of adversarial perturbation?
I am also interested in the performance of the choice optimizer C. Maybe the authors can expand on it, such as comparing C with the analytical optimizers it learns from.
I don't quite understand the choice of l2-log loss. Is there any related paper using this type of distillation loss? Maybe the authors can try the gradient matching loss in [1], which is shown to be effective for dataset distillation and works with a similar number of parameters.
The time and memory cost for training the amalgamated optimizer is not reported, which is important for the practicality of the method.
[1] Zhao, B., Mopuri, K. R., & Bilen, H. (2020). Dataset condensation with gradient matching. https://openreview.net/forum?id=mSAKhLYLSsl |
ICLR | Title
Neuromechanical Autoencoders: Learning to Couple Elastic and Neural Network Nonlinearity
Abstract
Intelligent biological systems are characterized by their embodiment in a complex environment and the intimate interplay between their nervous systems and the nonlinear mechanical properties of their bodies. This coordination, in which the dynamics of the motor system co-evolved to reduce the computational burden on the brain, is referred to as “mechanical intelligence” or “morphological computation”. In this work, we seek to develop machine learning analogs of this process, in which we jointly learn the morphology of complex nonlinear elastic solids along with a deep neural network to control it. By using a specialized differentiable simulator of elastic mechanics coupled to conventional deep learning architectures— which we refer to as neuromechanical autoencoders—we are able to learn to perform morphological computation via gradient descent. Key to our approach is the use of mechanical metamaterials—cellular solids, in particular—as the morphological substrate. Just as deep neural networks provide flexible and massivelyparametric function approximators for perceptual and control tasks, cellular solid metamaterials are promising as a rich and learnable space for approximating a variety of actuation tasks. In this work we take advantage of these complementary computational concepts to co-design materials and neural network controls to achieve nonintuitive mechanical behavior. We demonstrate in simulation how it is possible to achieve translation, rotation, and shape matching, as well as a “digital MNIST” task. We additionally manufacture and evaluate one of the designs to verify its real-world behavior.
1 INTRODUCTION
Mechanical intelligence, or morphological computation (Paul, 2006; Hauser et al., 2011), is the idea that the physical dynamics of an actuator may interact with a control system to effectively reduce the computational burden of solving the control task. Biological systems perform morphological computation in a variety of ways, from the compliance of digits in primate grasping (Jeannerod, 2009; Heinemann et al., 2015), to the natural frequencies of legged locomotion (Collins et al., 2005; Holmes et al., 2006; Ting & McKay, 2007), to dead fish being able to “swim” in vortices (Beal et al., 2006; Lauder et al., 2007; Eldredge & Pisani, 2008). Both early (Sims, 1994) and modern (Gupta et al., 2021) work have used artificial evolutionary methods to design mechanical intelligence, but it has remained difficult to design systems de novo that are comparable to biological systems that have evolved over millions of years. We ask:
Can we instead learn morphological computation using gradient descent?
Morphological computation requires that a physical system be capable of performing complex tasks using, e.g., elastic deformation. The mechanical system’s nonlinear properties work in tandem with neural information processing so that challenging motor tasks require less computation. To learn an artificial mechanically-intelligent system, we must therefore be able to parameterize a rich space of mechanisms with the capability of implementing nonlinear physical “functions” that connect input forces or displacements to the desired output behaviors. There are various desiderata for such a
mechanical design space: 1) it must contain a wide variety of structures with complex nonlinear elastic deformation patterns; 2) its parameters should be differentiable and of fixed cardinality; and 3) the designs should be easily realizable with standard manufacturing techniques and materials. These characteristics are achieved by mechanical metamaterials.
Metamaterials are structured materials that have properties unavailable from natural materials. Although metamaterials are often discussed in the context of electromagnetic phenomena, there is substantial interest in the development of mechanical metamaterials in which geometric heterogeneity achieves unusual macroscopic behavior such as a negative Poisson’s ratio (Bertoldi et al., 2010). In biological systems, morphological computation often takes the form of sophisticated nonlinear compliance and deformation, resulting in a physical system that is more robust and easier to control for a variety of tasks (Paul, 2006; Hauser et al., 2011), This type of behavior is typically not present in off-the-shelf robotic systems and is difficult to design a priori. Mechanical metamaterials, on the other hand, offer a platform for mechanically-intelligent systems using relatively accessible manufacturing techniques, such as 3-D printing.
The mechanical metamaterials we explore in this paper are cellular solids: porous structures where different patterns of macroscopic pores can lead to different nonlinear deformation behaviors. By constructing a solid with a large number of such pores, and then parameterizing the pore shapes nonuniformly across the solid, it is possible to achieve a large design space of nonlinear mechanical structures while nevertheless having a differentiable representation of fixed cardinality. The key to modern machine learning has been the development of massively-parametric composable function approximators in the form of deep neural networks; cellular solids provide a natural physical analog and—as we show in this work—can also be learned with automatic differentiation.
To make progress towards the goal of learnable morphological computation, in this paper we combine metamaterials with deep neural networks into a framework we refer to as a neuromechanical autoencoder (NMA). While traditional mechanical metamaterials are designed for single tasks and actuations, here we propose designs that can solve problems drawn from a distribution over tasks, using a neural network to determine the appropriate actuations. The neural network “encoder” consumes a representation of the task—in this case, achieving a particular deformation—and nonlinearly transforms this into a set of linear actuations which play the role of the latent encoding. These actuations then displace the boundaries of the mechanical metamaterial inducing another nonlinear transformation due to the complex learned geometry of the pores; the resulting deformation corresponds to the “decoder”. By using a differentiable simulator of cellular solids we are able to learn in an end-to-end way both the neural network parameters and the pore shapes so that they can work in tandem. The resulting system exhibits morphological computation in that it learns to split the processing task across the neural network and the physical mechanism.
The paper is structured as follows. We first introduce the abstract setup for the neuromechanical autoencoder, followed by a brief description of our mechanics model, geometry representation, and differentiable simulation. Although important for the success of our method, the details of our dis-
cretization and solver for computational nonlinear elasticity problems are in the appendix. We then describe and detail results of our experiments, which include mechanical tasks, a shape matching experiment, and a new mechanical twist on MNIST classification. We end with related work and a discussion on future steps.
2 METHODS
2.1 NEUROMECHANICAL AUTOENCODER SETUP
We describe the overall setup as pictured in Figure 1. We begin by considering a bounded domain D ⊂ R2 (often square) on which our material exists (in this work we only consider the actuation of 2D geometries). When actuations are applied on the material, its deformation can be described by a displacement field u : D → R2, where u(X) represents the displacement of the particle originally at coordinate X ∈ D. For simplicity, assume u can be discretized and identified by a finite-dimensional vector q ∈ RN . The exact form of the discretization is based on a finite element method variant and is described in the appendix in Section A.1.
Next we specify a distribution of tasks T and an associated loss function L(q; ti) : RN × Rm → R, which depends on a task descriptor ti ∼ T , ti ∈ Rm, and a displacement field specified by q. The loss function often only looks at the deformation of a subset of the material, such as the displacement of a single point, but we are not restricted to this. The task descriptor is meant to be generic: it can be a coordinate, an image, a scalar parameter, etc.
To map task descriptors to displacements, we use a neural encoder Eθ : Rm → Rk and a mechanical decoder Dϕ : Rk → RN . The output of the encoder at ti is understood to be the latent dimension of the autoencoder, and represents the actuations to the mechanical structure. The goal is to choose parameters {θ, ϕ} to minimize the loss over the distribution of tasks:
\theta^*, \phi^* = \underset{\theta, \phi}{\mathrm{argmin}} \; \mathbb{E}_{t \sim \mathcal{T}}\left[ L(D_\phi(E_\theta(t)); t) \right].
Given ∇ϕL(Dϕ(Eθ(t)); t) and ∇θL(Dϕ(Eθ(t)); t) for t ∼ T , we can optimize the objective with standard first-order stochastic gradient methods. One difficulty is that Dϕ(·) is an implicit function of its inputs, computed by solving a partial differential equation (PDE). Furthermore, ϕ represents geometric parameters defining the domain on which the PDE is solved. To effectively compute derivatives of Dϕ, we developed a JAX-based (Bradbury et al., 2018) differentiable elasticity simulator, as described in the next section.
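To make the end-to-end training loop concrete, here is a minimal JAX sketch of the outer optimization over tasks. The encoder, the stand-in `decoder`, and the task loss are toy placeholders of our own (and optax is assumed as the outer optimizer); in the real system the decoder is the differentiable elasticity solve.

```python
import jax
import jax.numpy as jnp
import optax  # assumption: any first-order optimizer library would do

def encoder(theta, t):                       # small MLP: task descriptor -> actuations
    h = jnp.tanh(theta["W1"] @ t + theta["b1"])
    return theta["W2"] @ h + theta["b2"]

def decoder(phi, z):                         # placeholder for D_phi (elasticity solve)
    return jnp.tanh(phi @ z)

def task_loss(q, t):                         # e.g. move a tracked point to coordinate t
    return jnp.sum((q[:2] - t) ** 2)

def nma_loss(params, t):
    z = encoder(params["theta"], t)          # latent actuations
    q = decoder(params["phi"], z)            # discretized displacement field
    return task_loss(q, t)

key = jax.random.PRNGKey(0)
k1, k2, k3 = jax.random.split(key, 3)
params = {
    "theta": {"W1": 0.1 * jax.random.normal(k1, (16, 2)), "b1": jnp.zeros(16),
              "W2": 0.1 * jax.random.normal(k2, (4, 16)), "b2": jnp.zeros(4)},
    "phi": 0.1 * jax.random.normal(k3, (8, 4)),
}
opt = optax.adam(1e-3)
opt_state = opt.init(params)
for step in range(200):
    key, sub = jax.random.split(key)
    t = jax.random.uniform(sub, (2,), minval=-1.0, maxval=1.0)   # sample t ~ T
    grads = jax.grad(nma_loss)(params, t)
    updates, opt_state = opt.update(grads, opt_state, params)
    params = optax.apply_updates(params, updates)
```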
2.2 DIFFERENTIABLE SIMULATION
We developed a custom solver for static nonlinear elasticity problems which model the equilibrium of elastic materials under load. The goal is to have a robust and end-to-end differentiable simulator for 2D neuromechanical autoencoders based on mechanical metamaterials. Given geometric design parameters, our solver simulates the structure described by the parameters and computes the gradient (adjoint) with respect to both geometric design parameters and boundary conditions (actuations).
In order to make the solver differentiable with respect to geometric parameters, we implement a version of isogeometric analysis (IGA) (Hughes et al., 2005), a finite element method (FEM) (Hughes, 2012) variant where both the underlying solution and geometry basis are based on B-splines. Using B-spline patches allows us to parameterize our geometry in a flexible and yet robust way while maintaining a differentiable map from geometry parameters to PDE solution.
As our simulator is implemented entirely in JAX, we backpropagate gradients directly through both the simulator and a neural network using automatic differentiation and adjoint methods in tandem. In the next sections, we describe the relevant physics and the geometric representation we used.
2.3 MECHANICAL MODEL
We give a high level description of the mechanical model here, and detail it further in the appendix. In the static equilibrium problems we consider, the solution is a displacement that minimizes some energy. The elastic properties are captured by a hyperelastic strain energy density function, which depends on the local deformation of the material and is independent of the path of deformation. For a given deformation, the potential energy functional Ψ(u) is the integral of the strain energy density over the material domain D. Given boundary conditions, the resulting physical deformation u : D → R2 is one that minimizes Ψ(u) subject to boundary conditions:
u^* = \underset{u \in \mathcal{H}}{\mathrm{argmin}} \; \Psi(u)
where H is the set of all displacement fields that satisfy prescribed Dirichlet boundary conditions (expressed as equality constraints on the displacement field). To solve this in practice, we discretize u and define a standard representation of the geometry. NMA training is bi-level, where in the inner loop we perform the energy minimization using second-order methods. In the outer loop, the solution u∗ can be regarded as an implicit function of the design parameters and boundary conditions, and gradients with respect to these can be computed using implicit differentiation.
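The bi-level structure can be made differentiable with the implicit function theorem. Below is a self-contained JAX sketch with a toy quadratic-plus-coupling energy standing in for the hyperelastic potential Ψ, assuming the inner equilibrium is reached by Newton steps; it is an illustration of the mechanism, not the authors' simulator.

```python
import jax
import jax.numpy as jnp

def energy(q, phi, bc):
    # Toy stand-in for the potential energy Psi(q; phi, bc).
    return 0.5 * jnp.sum((q - bc) ** 2) + 0.1 * jnp.sum(phi * q ** 2)

def inner_solve(phi, bc, q0, steps=25):
    # Inner loop: Newton iterations on the energy (the paper uses second-order methods).
    q = q0
    for _ in range(steps):
        g = jax.grad(energy)(q, phi, bc)
        H = jax.hessian(energy)(q, phi, bc)
        q = q - jnp.linalg.solve(H + 1e-8 * jnp.eye(q.shape[0]), g)
    return q

@jax.custom_vjp
def equilibrium(phi, bc, q0):
    return inner_solve(phi, bc, q0)

def equilibrium_fwd(phi, bc, q0):
    q_star = inner_solve(phi, bc, q0)
    return q_star, (q_star, phi, bc)

def equilibrium_bwd(res, q_bar):
    q_star, phi, bc = res
    # At equilibrium grad_q Psi = 0; solve H lam = q_bar, then pull -lam back
    # through the mixed partials to obtain cotangents for phi and bc.
    H = jax.hessian(energy)(q_star, phi, bc)
    lam = jnp.linalg.solve(H, q_bar)
    _, vjp = jax.vjp(lambda p, b: jax.grad(energy)(q_star, p, b), phi, bc)
    phi_bar, bc_bar = vjp(-lam)
    return phi_bar, bc_bar, jnp.zeros_like(q_star)

equilibrium.defvjp(equilibrium_fwd, equilibrium_bwd)

# Outer-loop gradients of an objective through the equilibrium solve:
phi0 = 0.5 * jnp.ones(4)
bc0 = jnp.array([0.2, -0.1, 0.0, 0.3])
q0 = jnp.zeros(4)
loss = lambda phi, bc: jnp.sum(equilibrium(phi, bc, q0) ** 2)
dphi, dbc = jax.grad(loss, argnums=(0, 1))(phi0, bc0)
```

The custom VJP avoids backpropagating through the inner Newton iterations, which is what makes gradients with respect to both design parameters and boundary conditions cheap.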
2.4 GEOMETRY REPRESENTATION
The central unit of the metamaterials we design is the cell, a porous shape with a quadrilateral boundary. We initialize the geometry to a regular grid of square cells with simple square pore shapes, similar to that in Figure 2b. During training of the neuromechanical autoencoder, we modify this geometry to minimize the expected loss over a distribution of tasks.
To represent the geometry, we decompose the domain into B-spline patches, each with its own B-spline control points. Each cell is generally composed of four patches; we visualize the decomposition of a representative cell in Figure 2a. The shape of the cell pore is defined by radii (illustrated by ri), whose values specify relative distance of the pore edge from the centroid of the cell (e.g., a cell having radii all 0.0 corresponds to a completely closed cell). We combine all the radii in all cells into a radii array r ∈ [0, 1]R, which becomes one of our geometric parameters. For further flexibility, we also allow the shapes of the cells to change within a grid of cells. The corners of the cells, labeled ci in Figure 2b, are allowed to deviate within a specific box around its values in the initial square lattice-like geometry. Figure 2a shows a cell that its corners perturbed during training. The deviation bound ensures that the shapes do not degenerate during NMA training. The array of corner locations c and radii r comprise our geometric parameters. The outer boundary of the structure is constrained not to change during NMA optimization, as this would otherwise create inconsistent boundary conditions between designs.
2.5 DISCRETIZATION AND END-TO-END DIFFERENTIABILITY
The details of our discretization are in the Appendix. We mention two important notes here. The first is that careful selection of geometric parameters is critical to being able to differentiate with respect to them. In particular, given the geometric parameters we can construct a differentiable map to the B-spline control points representing the geometry of the model. The analogy in standard FEM would be that our “meshing” operation is fully differentiable. Part of the reason differentiability is always satisfied is that the cardinality and topology of the control points remain the same given any valid setting of geometric parameters. Another important note is that all of our geometric parameters, {r, c}, are only constrained by simple box constraints, so that first order constrained optimization with them is straightforward. This parameterization is robust in the sense that for any value of the geometric parameters within the box constraints, we have a valid geometry.
3 EXPERIMENTS
3.1 TRANSLATION AND ROTATION
The first task we tackle is how to perform translation given a limited degree of control. Consider the setup in Figure 3. We have a 5× 5 cellular solid fixed on the corners. The goal is to be able to move the green pointer in the middle of the solid to anywhere within the blue square given only horizontal displacements of the edges. The space of tasks is a small box B, and the task is defined by a single coordinate; the task descriptor t ∈ R2 is sampled uniformly from B. The task descriptor is mapped to two horizontal actuations by a simple fully-connected neural network (NN). The loss function is mean squared error of the green pointer after deformation from the goal.
In the absence of a domain-specific design, the allowed actuations limit the achievable motions to only left-right displacements. As demonstrated in Figure 3 (right), given a red goal point on the bottom right the best we can do with the unoptimized geometry is match the x-coordinate. If we do end-to-end NMA training, however, we converge on the shape in Figure 4. The learned geometry is able to achieve any translation within the blue square. The diagonal pore shapes enable translating compression of the material to downwards motion of the pointer, and tension to upwards motion. Here the joint learning of geometry and control offers a clear benefit: we converge to a nontrivial solution that discovers how to use its geometric nonlinearity with its NN controller.
To demonstrate transfer to the real world, we manufactured the resulting structure and qualitatively verified its behavior. As predicted by our simulation the neuromechanical autoencoder can facilitate displacement of a central point in the upwards via tension, downwards via compression, and diagonally via partial compression. See Figure 5 for details.
The next task is another mechanical task: rotation. We would like to see how the NN and metamaterial can work together to learn to translate linear actuation into rotation of part of the structure. The task description is now the angle of rotation: t ∈ [−π, π]. We have two setups for this problem. In the first, we use a small fully-connected NN to map angle into a single actuation, applied equally on both sides of the metamaterial (Figure 6). We apply actuations on both sides to avoid translating the middle square in addition to rotation. In the second, the NN maps into two actuations, one applied left-right and one applied top-down (Figure 7). In both cases we consider a 7× 7 metamaterial. The goal is rotation of the blue stick counter-clockwise by an angle t around the center; the loss function is mean square error from the goal of the two points on the ends of the stick.
In the first setup, we are able to achieve unidirectional rotation between [0, π/4] (Figure 6, right), while for the second setup, we can achieve bi-directional rotations between [−π/6, π/6] (Figure 7). Without geometry and control co-design, we would not be able to achieve rotation with linear actuation without an intuition-driven design, but the joint NMA training is able to make good progress.
3.2 SHAPE MATCHING
Next we consider a much higher-dimensional task space. Given a family of shapes parameterized by 2-dimensional coordinates, we would like to design a mechanical decoder and a neural encoder that can map coordinates to actuations deforming the structure to resemble a sampled shape from the family as closely as possible.
In particular, we consider a family generated by a log Gaussian process in polar coordinates, approximated via random Fourier features (Rahimi & Recht, 2007). Given a metamaterial with a large central pore, such as in Figure 8a, we would like to deform it to match any of the shapes in the
[Figure 9 (panels): target shapes overlaid with the deformed central pore, alternating “Target / NN + Geometry” and “Target / NN only”; all axes span −2.0 to 2.0.]
family. The task description t is an n × 2 dimensional array of coordinates defining the shape. A fully-connected neural network translates these to 12 actuations applied around the material (as shown in Figure 8a). The final loss function is an ℓ1 loss between the control points defining the middle pore after deformation and the points defining the shape. When comparing, we normalize the scale of both shapes, and perform Procrustes analysis for rotation invariance.
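To make the comparison step concrete, the snippet below is a minimal sketch of the scale-normalized, rotation-invariant shape loss described above. The names `deformed_points` and `target_points` are placeholders for the pore control points after deformation and the sampled target shape; the orthogonal-Procrustes step and the exact normalization are illustrative choices, not necessarily the implementation used in the paper.

```python
import jax.numpy as jnp

def shape_match_loss(deformed_points, target_points):
    # Both inputs are (n, 2) arrays: the deformed middle-pore control points
    # and the sampled target shape. Center and scale-normalize each set.
    X = deformed_points - deformed_points.mean(axis=0)
    Y = target_points - target_points.mean(axis=0)
    X = X / jnp.linalg.norm(X)
    Y = Y / jnp.linalg.norm(Y)
    # Orthogonal Procrustes: rotation R minimizing ||X R - Y||_F
    # (reflections are not explicitly excluded in this sketch).
    U, _, Vt = jnp.linalg.svd(X.T @ Y)
    R = U @ Vt
    # L1 loss between the aligned point sets.
    return jnp.mean(jnp.abs(X @ R - Y))
```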
We train one version where the geometry and neural network are optimized jointly, and one version where only the neural network is optimized for the starting geometry. Figure 8a shows the nonoptimized and optimized geometry. After learning, Figure 9 shows qualitative results of how well the jointly learned metamaterial compares with the control-only material. Jointly learning geometry for the shape family allows us to capture much finer features in the target shape. In Figure 8b
we visualize the (stochastic) loss during training. The jointly learned metamaterial converges to a significantly lower loss value, showing the benefit of harnessing the geometric nonlinearity.
3.3 DIGITAL MNIST
For our last task we attempt to create a mechanical seven-segment display for classifying MNIST digits. Towards this we add an additional design variable for the material: color. Our starting metamaterial is pictured in Figure 10a, a version of the metamaterial that is initially assigned a color value of 0 everywhere. We treat color, parameterized by a B-spline patch over the metamaterial, as an additional geometric design parameter that can be optimized with NMA training.
Our input to the neural network is an image sampled from the MNIST dataset. The neural network then produces actuations that deform the metamaterial to produce a seven-segment representation of the MNIST digit when viewed through small slits. Figure 10b visualizes the learned color map, and Figure 10c shows the structure with slits added. The loss function is manually specified for each digit, e.g., if an MNIST digit has a label of “1” then the right two slits should contain color value 1.0, while the rest should contain 0.0. The full setup is displayed in Figure 11, with samples after training displayed in Figure 12. Although this can be learned from scratch end-to-end, to speed up training we first learned colors and actuations to be able to reproduce all 10 digits, and then trained
a small feed-forward neural network to match the actuations for each digit. We then set up the entire pipeline and finetuned end-to-end for better performance. We note that the 7 segments are controlled by only 6 actuations, so by restricting the family of objects displayed to the 10 digits, we allow the neural encoder and mechanical decoder to learn underactuated control of all 7 segments. Additional samples are presented in the Appendix. We also note that the pore shapes did not have to change significantly to accomplish this task. The only “geometry” design was through the coloring, which as visualized in Figure 10b turns out to be highly nontrivial.
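As a concrete illustration of how the per-digit loss can be specified, the sketch below maps digit labels to standard seven-segment activation patterns and penalizes the squared deviation of the observed slit colors from those targets. The segment ordering (top, top-right, bottom-right, bottom, bottom-left, top-left, middle) and the squared-error form are illustrative assumptions; the text above only states that, e.g., a “1” should light the right two slits.

```python
import jax.numpy as jnp

# Standard seven-segment patterns, ordered [top, top-right, bottom-right,
# bottom, bottom-left, top-left, middle]. The actual slit ordering in the
# physical structure is an assumption here.
SEGMENT_TARGETS = jnp.array([
    [1, 1, 1, 1, 1, 1, 0],  # 0
    [0, 1, 1, 0, 0, 0, 0],  # 1
    [1, 1, 0, 1, 1, 0, 1],  # 2
    [1, 1, 1, 1, 0, 0, 1],  # 3
    [0, 1, 1, 0, 0, 1, 1],  # 4
    [1, 0, 1, 1, 0, 1, 1],  # 5
    [1, 0, 1, 1, 1, 1, 1],  # 6
    [1, 1, 1, 0, 0, 0, 0],  # 7
    [1, 1, 1, 1, 1, 1, 1],  # 8
    [1, 1, 1, 1, 0, 1, 1],  # 9
], dtype=jnp.float32)

def digit_display_loss(slit_colors, label):
    # slit_colors: length-7 array of color values seen through the slits after
    # deformation; label: integer label of the sampled MNIST image.
    return jnp.mean((slit_colors - SEGMENT_TARGETS[label]) ** 2)
```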
4 DISCUSSION AND RELATED WORK
Differentiable Simulation The abundance of differentiable simulators has demonstrated their usefulness in designing novel systems. Hu et al. (2019) developed a differentiable simulator with hand-written custom CUDA physics kernels that enabled material inference, control of a soft walker, and co-design of a swinging robot arm. Sanchez-Gonzalez et al. (2020) presents an ML framework to model a variety of physical domains to solve forward and inverse problems using a graph neural network approach. Mozaffar & Cao (2021) developed a differentiable finite element simulator to control and infer material parameters within the context of additive manufacturing processes. Liang et al. (2019) developed a differentiable cloth simulator, and Ham et al. (2019) automated the calculation of weak shape derivatives within the context of finite elements to solve PDE constrained shape optimization problems. In our work, our differentiable simulator is developed specifically to aid in neuromechanical autoencoder design.
Mechanical Metamaterials As the rational design of nonlinear mechanical materials is often unintuitive, modern machine learning approaches have enabled faster design. Deng et al. (2022) coupled a neural accelerated mass spring model that facilitated an evolutionary approach to design functional structures. Mao et al. (2020) applied generative adversarial networks to design unit cells for architected metamaterials, Kumar et al. (2020) introduced a novel class of anisotropic metamaterials and a machine learning method for the inverse design of their geometry given desired elasticity properties, Beatson et al. (2020) learned a reduced order model to speed up simulation of cellular metamaterials, Xue et al. (2020) introduced a homogenization approach for cellular metamaterials, and Xue & Mao (2022) introduced a mapped shape approach to design metamaterials to fit a prescribed strain energy curve. Our work uses classical gradient/adjoint methods to optimize the geometric parameters, but could be combined with machine learning methods to speed up simulation and hence enable faster NMA training.
4.1 LIMITATIONS AND FUTURE WORK
We introduce the framework of neuromechanical autoencoders, inspired by the biological coevolution of control and morphology. We present a method for automatic design of these systems, and show a number of results that produce nontrivial behavior through co-design, both in simulation and in the real world. We believe this is a small but significant step on the road to designing mechanically-intelligent systems. The two major bottlenecks in our approach are the runtime of PDE solving and the geometry parameterization. Fast PDE solving, especially for nonlinear PDEs such as the ones we use, is a very active area of research, and is crucial to scaling up NMA design. In terms of geometry parameterization, the key is to find a space of materials that have a complex range of mechanical deformation properties and yet are easy to simulate. For this paper, 2D cellular solids with nonuniform pore shapes were a great ansatz, but future work could understand and quantify how much “computation” these materials can do. Scaling up to 3-dimensional intelligent mechanical models, as well as including dynamics, would significantly improve the computational capabilities, but would require much faster solvers. This is a key focus of our future work.
4.2 ACKNOWLEDGEMENTS
We would like to thank Alex Beatson, Geoffrey Roeder, Jordan Ash, and Tianju Xue for early conversations around this work. We also thank PT Brun for assisting with fabrication. This work was partially supported by NSF grants IIS-2007278 and OAC-2118201, the NSF under grant number 2127309 to the Computing Research Association for the CIFellows 2021 Project, and a Siemens PhD fellowship.
A APPENDIX
A.1 DISCRETIZATION AND IGA
We discretize the geometric domain into $P$ isogeometric patches. All patches use the same B-spline basis functions $B_{ij}(\xi, \eta)$ (Piegl & Tiller, 1997) with $i, j \in [n_p]$ and $\xi, \eta \in [0, 1]$, where $n_p$ is the number of control points. The domains of $\xi$ and $\eta$ correspond to the parent domain of each patch (Figure 13). In the B-spline literature, the parent domain is often referred to as the knot span. The basis functions are piecewise polynomial with degree specified as a parameter. In all of our experiments, we use piecewise quadratic B-spline functions. Each B-spline basis function corresponds to a control point, and each control point represents two degrees of freedom in 2D space, i.e., each patch $p \in [P]$ has $n_p \times n_p \times 2$ degrees of freedom ($x_p^{ij}$ and $y_p^{ij}$). The mapping from the parent domain of each patch to the physical domain is given by a linear combination of B-spline basis functions, where the weights of the linear combination are given by the control point coordinates. Explicitly, the mapping function $\phi_p$ from parent space of patch $p$ to physical space is given by
$$\phi_p(\xi, \eta; \mathbf{x}, \mathbf{y}) = \sum_{i=1}^{n_p} \sum_{j=1}^{n_p} \begin{bmatrix} x_p^{ij} \\ y_p^{ij} \end{bmatrix} B_{ij}(\xi, \eta), \qquad (1)$$
where $[x_p^{ij}, y_p^{ij}]$ are the control points parameterizing the mapping. In our simulator, we represent the reference configuration with reference control points $[X_p^{ij}, Y_p^{ij}]$; these are determined by our geometry parameters $\{r, c\}$ through a differentiable map. We then parameterize the deformed geometry with the same basis, using deformed control points $[x_p^{ij}, y_p^{ij}]$. For a given deformation, the integral in Eq. 3 representing the potential energy can be computed by a pullback in the parent domain, using standard Gaussian quadrature as is standard in FEM (Hughes et al., 2005).
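For readers unfamiliar with IGA, the following standalone sketch evaluates Eq. 1 for a single patch using the Cox–de Boor recursion for a clamped quadratic B-spline with 5 control points per direction (the configuration used in our experiments). The specific knot vector and the pure-Python recursion are illustrative; our actual implementation is vectorized in JAX.

```python
import numpy as np

DEGREE = 2
N_CTRL = 5
# Clamped (open uniform) knot vector for 5 control points, quadratic degree.
KNOTS = np.array([0., 0., 0., 1/3, 2/3, 1., 1., 1.])

def bspline_basis(i, k, x, knots=KNOTS):
    """Cox-de Boor recursion: value of the i-th degree-k basis function at x."""
    if k == 0:
        if knots[i] <= x < knots[i + 1]:
            return 1.0
        # Include the right endpoint of the last non-empty span.
        if x == knots[-1] and knots[i] < x <= knots[i + 1]:
            return 1.0
        return 0.0
    val = 0.0
    if knots[i + k] > knots[i]:
        val += (x - knots[i]) / (knots[i + k] - knots[i]) * bspline_basis(i, k - 1, x, knots)
    if knots[i + k + 1] > knots[i + 1]:
        val += (knots[i + k + 1] - x) / (knots[i + k + 1] - knots[i + 1]) * bspline_basis(i + 1, k - 1, x, knots)
    return val

def patch_map(xi, eta, ctrl_xy):
    """Eq. 1: map parent coordinates (xi, eta) to physical space.
    ctrl_xy is an (N_CTRL, N_CTRL, 2) array of control point coordinates."""
    point = np.zeros(2)
    for i in range(N_CTRL):
        for j in range(N_CTRL):
            point += ctrl_xy[i, j] * bspline_basis(i, DEGREE, xi) * bspline_basis(j, DEGREE, eta)
    return point
```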
Dirichlet boundary conditions of the type we use in this paper can be represented as constraints on a subset of the control points. For each boundary condition, the corresponding reference and deformed control points are prescribed to have particular displacement values. Furthermore, since our geometry is decomposed into multiple neighboring patches, the control points must also have incidence constraints amongst them. These are kept track of using constraint groups, where each group has a representative element.
A.2 MECHANICAL MODEL
In the static equilibrium problems we consider, the solution is a displacement that minimizes some energy function. In particular, we use a nearly incompressible Neo-Hookean material model (Ogden,
1997), in which the elastic properties are captured by a hyperelastic strain energy density function. This function, W (F),W : R2×2 → R, is independent of the path of deformation and is a function of the deformation gradient tensor, Fij = ∂ui/∂Xj + I where X ∈ D ⊂ R2 represents the position of a particle in the undeformed reference configuration, and u(X) is the displacement field. Here,
$$W(\mathbf{F}) = \frac{\mu}{2}\left(I_1 - 2 - 2\log J\right) + \frac{\kappa}{2}(\log J)^2, \qquad (2)$$
where J = det(F), I1 = tr(FTF), and µ = E/2(1 + ν) and κ = E/3(1− 2ν) are shear and bulk moduli of a material with Young’s modulus E and Poisson’s ratio ν, respectively. This is a standard choice for hyperelastic material modeling that transfers well to the real-world. We can solve for the displacement by finding the stationary point of the potential energy functional Ψ(u),
$$\mathbf{u}^* = \operatorname*{argmin}_{\mathbf{u} \in \mathcal{H}} \Psi(\mathbf{u}), \qquad \Psi(\mathbf{u}) = \int_D W(\mathbf{F})\, d\mathbf{X} = \int_D W\!\left(\left.\frac{\partial \mathbf{u}}{\partial \mathbf{X}}\right|_{\mathbf{X}=\mathbf{X}'} + \mathbf{I}\right) d\mathbf{X}' \qquad (3)$$
where H is the set of all displacement fields that satisfy prescribed Dirichlet boundary conditions (expressed as equality constraints on the displacement field). To solve this in practice, we discretize u and define a standard representation of the geometry. Abstractly, the solution u∗ can be regarded as an implicit function of the design parameters and boundary conditions. The solution u∗ can be computed in a discretized form using standard second-order optimization algorithms, and gradients can be computed using implicit differentiation.
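A direct transcription of the strain energy density in Eq. 2 is shown below. The Young's modulus and Poisson's ratio values are placeholders, since the specific material constants are not restated here.

```python
import jax.numpy as jnp

def neo_hookean_energy_density(F, E=1.0, nu=0.48):
    """Nearly incompressible Neo-Hookean strain energy density W(F), Eq. 2.
    F is the 2x2 deformation gradient; E and nu are illustrative constants."""
    mu = E / (2.0 * (1.0 + nu))            # shear modulus
    kappa = E / (3.0 * (1.0 - 2.0 * nu))   # bulk modulus
    J = jnp.linalg.det(F)
    I1 = jnp.trace(F.T @ F)
    return 0.5 * mu * (I1 - 2.0 - 2.0 * jnp.log(J)) + 0.5 * kappa * jnp.log(J) ** 2
```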
A.3 END-TO-END DIFFERENTIABILITY
We would like to reiterate that our differentiability conditions are satisfied, so that we can train neuromechanical autoencoders end-to-end. We first define a differentiable map from geometry parameters and Dirichlet boundary condition values into the reference B-spline control points. This is then used to construct the Πl,Πg functions, both of which are differentiable. The global vector q is then passed into a black-box optimizer to produce the solution q∗. Gradients with respect to the solution of the optimizer are computed using adjoint optimization. The solution q∗ is then mapped back into local coordinates using Πl, and is used to compute the loss function L. This pipeline ensures that we have a differentiable map from geometry parameters and boundary conditions (actuations) to the NMA task loss function, so we can proceed to train the NMA objective using stochastic gradient descent. In the next section, we demonstrate specific applications.
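The sketch below illustrates one way such an implicit-function gradient can be wired up in JAX with `jax.custom_vjp`. The functions `energy` (total discretized potential energy as a function of the displacement vector and the design/boundary parameters) and `newton_minimize` (the inner equilibrium solver) are hypothetical stand-ins, and the conjugate-gradient solve assumes the Hessian at the solution is positive definite.

```python
import jax
import jax.numpy as jnp

@jax.custom_vjp
def solve_equilibrium(phi, q_init):
    # Inner solve: minimize energy(q, phi) over q, starting from q_init.
    return newton_minimize(lambda q: energy(q, phi), q_init)

def _solve_fwd(phi, q_init):
    q_star = solve_equilibrium(phi, q_init)
    return q_star, (phi, q_star)

def _solve_bwd(residuals, cotangent):
    phi, q_star = residuals
    # At equilibrium, grad_q energy(q*, phi) = 0. The implicit function theorem
    # gives a VJP of the form -lambda^T d(grad_q energy)/dphi, with H lambda = cotangent.
    grad_q = lambda q: jax.grad(energy, argnums=0)(q, phi)
    hess_vec = lambda v: jax.jvp(grad_q, (q_star,), (v,))[1]
    lam, _ = jax.scipy.sparse.linalg.cg(hess_vec, cotangent)
    mixed_vjp = jax.vjp(lambda p: jax.grad(energy, argnums=0)(q_star, p), phi)[1]
    return (mixed_vjp(-lam)[0], jnp.zeros_like(q_star))

solve_equilibrium.defvjp(_solve_fwd, _solve_bwd)
```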
A.4 SOLVER DETAILS
After discretization to q, we solve the energy minimization using Newton’s method with incremental loading. The Hessian of the energy is assembled in sparse form using the trick from Powell & Toint (1979). Using the discretization of the system we automatically derive the sparsity pattern of the Hessian, and then construct appropriate binary vectors to perform Hessian-vector products with. We then reshape these into a CSR matrix representation of the Hessian.
The sparse linear systems are then solved by GMRES (Saad & Schultz, 1986) preconditioned by an incomplete LU decomposition. Since the energy involves log detF , where F is the deformation gradient, taking a finite step can lead to numerical blowup. Therefore our incremental loading is adaptive, and a line search is performed to avoid inversion of elements in the geometry.
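For concreteness, a single damped Newton step in this style might look like the SciPy-based sketch below; `grad_fn`, `hess_fn`, and `energy_fn` are hypothetical callables returning the discretized gradient, sparse Hessian, and scalar energy, and the tolerances and damping schedule are illustrative rather than the exact values used in our solver.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def damped_newton_step(q, energy_fn, grad_fn, hess_fn):
    g = grad_fn(q)
    H = sp.csr_matrix(hess_fn(q))
    # Incomplete LU preconditioner for GMRES.
    ilu = spla.spilu(H.tocsc())
    M = spla.LinearOperator(H.shape, matvec=ilu.solve)
    dq, _ = spla.gmres(H, -g, M=M)
    # Backtracking line search: reject steps that invert elements,
    # which make log det F (and hence the energy) blow up.
    step, e0 = 1.0, energy_fn(q)
    while step > 1e-8:
        e_new = energy_fn(q + step * dq)
        if np.isfinite(e_new) and e_new < e0:
            break
        step *= 0.5
    return q + step * dq
```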
A.5 EXPERIMENTAL DETAILS
All B-spline patches used were quadratic and contained 5× 5 control points. Quadrature was done by degree 5 Gauss-Legendre. Most computation was done on NVIDIA RTX 2080 GPUs. Each solve instance was done on a single GPU, and mini-batching was done by parallelizing with MPI. Each MPI task used a single GPU. The radii parameters were clipped to [0.1, 0.9] to aid solver stability.
A.5.1 TRANSLATION TASK
The learning rate was 0.0001∗M where M is the number of MPI tasks. In this case, we used 8 MPI tasks. The neural network was a fully-connected network with activation sizes: 2−30−30−10−2 (including input/output). The final layer was clipped by a tanh and multiplied by a maximum displacement of 60% of cell width.
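A minimal JAX version of this encoder is sketched below. The ReLU hidden activations and the initialization scheme are assumptions (the hidden nonlinearity is not specified above); the layer sizes, the tanh clipping, and the 60%-of-cell-width scaling follow the description, with the cell width taken to be 1 for illustration.

```python
import jax
import jax.numpy as jnp

LAYER_SIZES = [2, 30, 30, 10, 2]  # translation-task encoder
MAX_DISP = 0.6                    # 60% of an assumed unit cell width

def init_encoder(key, sizes=LAYER_SIZES):
    params = []
    for n_in, n_out in zip(sizes[:-1], sizes[1:]):
        key, sub = jax.random.split(key)
        W = jax.random.normal(sub, (n_in, n_out)) / jnp.sqrt(n_in)
        params.append((W, jnp.zeros(n_out)))
    return params

def encoder_apply(params, t):
    h = t
    for W, b in params[:-1]:
        h = jax.nn.relu(h @ W + b)
    W, b = params[-1]
    return MAX_DISP * jnp.tanh(h @ W + b)  # two clipped horizontal actuations
```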
A.5.2 ROTATION TASK
The learning rate was 0.001 ∗M for single and 0.01 ∗M for double rotation. M is the number of MPI tasks. In this case, we used 8 MPI tasks. The neural network was a fully-connected network with activation sizes: 1 − 20 − 10 − 1 (including input/output) or 1 − 20 − 10 − 2 for the double actuation. The final layer was clipped by a tanh and multiplied by a maximum displacement of 60% of cell width.
A.5.3 SHAPE MATCHING TASK
The learning rate was 0.0001∗M where M is the number of MPI tasks. In this case, we used 16 MPI tasks. The neural network was a fully-connected network with activation sizes: 98−200−200−12 (including input/output). The final layer was clipped by a tanh and multiplied by a maximum displacement of 60% of cell width.
A.6 DIGITAL MNIST TASK
Initially we learned colors and actuations for a lookup table of 10 digits. This was trained with a learning rate of 0.01 ∗ M where M is the number of MPI tasks. We used 10 MPI tasks, one per digit. We clipped the maximum displacement to 60% of cell width using tanh. Afterwards, we trained a fully-connected neural network to map MNIST digits to the actuations of the corresponding digit. We then put the neural network to map directly to actuations, and finetuned end-to-end with a learning rate of 0.0001.
A.7 ADDITIONAL PORE MATCHING RESULTS
A.8 ADDITIONAL DIGITAL MNIST RESULTS | 1. What is the focus and contribution of the paper regarding meta material layouts and neural networks?
2. What are the strengths of the proposed model, particularly in its creativity and intersection with data science?
3. What are the weaknesses of the paper, such as the lack of technical details and potential optimization issues?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any questions or concerns regarding the paper's demonstrations and potential applications? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
I thank the authors for their rebuttal replies and for further improving the quality of this paper. I think my score is still accurate and I look forward to seeing this work published.
The paper proposes a differentiable model for learning meta material layouts and neural network encoder that can produce actuations to control these materials. They show through a number of simulations that this works with simple translations, rotations and complex shapes. They also build this material and show that it works. Finally, they show how this combined framework can be trained to translate MNIST digits into 7 slits representations.
Strengths And Weaknesses
Strength: The paper is very well written, easy to follow and fun to read. Even for a reader unfamiliar with meta materials and the associated physics, they manage to write an engaging scientific exploration. The proposed model is very creative and works at an exciting intersection of material and data science.
Weaknesses: It would have been great to include a tiny bit more detail about technical things, for instance, how is the adjoint method implemented, which PDE solver is used, etc. However, I do realise that this might come at the expense of making the paper less accessible.
Clarity, Quality, Novelty And Reproducibility
Clarity: Extremely high! Section 2.1 is a little difficult to digest at first read.
2.4 2nd paragraph, 2nd brackets ‘wel’?
I am a little surprised at the spiky shapes emerging in many of the shown optimised geometries. Is this an optimisation issue? Or is this some repeating, near optimal, solution (maybe one that also appears in nature after evolution)?
The MNIST experiments are really cool, although I am thinking that the encoder network probably just does the classification and then translates this into the actuation for the 7 slit representation. But even if this is a bit of a gimmick, it is still a very cool demonstration!
Quality: Very high, as far as I can tell. It might be nice to include a section with limitations and optimisation pitfalls, i.e., all the things you tried on the way that did not work – that would be helpful for other researchers to build on this work.
Novelty, I cannot really judge because I don’t work in this area of meta materials. However, for what it is worth, I have never seen a demonstration like this and think that it is really creative and novel. |
ICLR | Title
Neuromechanical Autoencoders: Learning to Couple Elastic and Neural Network Nonlinearity
Abstract
Intelligent biological systems are characterized by their embodiment in a complex environment and the intimate interplay between their nervous systems and the nonlinear mechanical properties of their bodies. This coordination, in which the dynamics of the motor system co-evolved to reduce the computational burden on the brain, is referred to as “mechanical intelligence” or “morphological computation”. In this work, we seek to develop machine learning analogs of this process, in which we jointly learn the morphology of complex nonlinear elastic solids along with a deep neural network to control it. By using a specialized differentiable simulator of elastic mechanics coupled to conventional deep learning architectures— which we refer to as neuromechanical autoencoders—we are able to learn to perform morphological computation via gradient descent. Key to our approach is the use of mechanical metamaterials—cellular solids, in particular—as the morphological substrate. Just as deep neural networks provide flexible and massivelyparametric function approximators for perceptual and control tasks, cellular solid metamaterials are promising as a rich and learnable space for approximating a variety of actuation tasks. In this work we take advantage of these complementary computational concepts to co-design materials and neural network controls to achieve nonintuitive mechanical behavior. We demonstrate in simulation how it is possible to achieve translation, rotation, and shape matching, as well as a “digital MNIST” task. We additionally manufacture and evaluate one of the designs to verify its real-world behavior.
1 INTRODUCTION
Mechanical intelligence, or morphological computation (Paul, 2006; Hauser et al., 2011), is the idea that the physical dynamics of an actuator may interact with a control system to effectively reduce the computational burden of solving the control task. Biological systems perform morphological computation in a variety of ways, from the compliance of digits in primate grasping (Jeannerod, 2009; Heinemann et al., 2015), to the natural frequencies of legged locomotion (Collins et al., 2005; Holmes et al., 2006; Ting & McKay, 2007), to dead fish being able to “swim” in vortices (Beal et al., 2006; Lauder et al., 2007; Eldredge & Pisani, 2008). Both early (Sims, 1994) and modern (Gupta et al., 2021) work have used artificial evolutionary methods to design mechanical intelligence, but it has remained difficult to design systems de novo that are comparable to biological systems that have evolved over millions of years. We ask:
Can we instead learn morphological computation using gradient descent?
Morphological computation requires that a physical system be capable of performing complex tasks using, e.g., elastic deformation. The mechanical system’s nonlinear properties work in tandem with neural information processing so that challenging motor tasks require less computation. To learn an artificial mechanically-intelligent system, we must therefore be able to parameterize a rich space of mechanisms with the capability of implementing nonlinear physical “functions” that connect input forces or displacements to the desired output behaviors. There are various desiderata for such a
mechanical design space: 1) it must contain a wide variety of structures with complex nonlinear elastic deformation patterns; 2) its parameters should be differentiable and of fixed cardinality; and 3) the designs should be easily realizable with standard manufacturing techniques and materials. These characteristics are achieved by mechanical metamaterials.
Metamaterials are structured materials that have properties unavailable from natural materials. Although metamaterials are often discussed in the context of electromagnetic phenomena, there is substantial interest in the development of mechanical metamaterials in which geometric heterogeneity achieves unusual macroscopic behavior such as a negative Poisson’s ratio (Bertoldi et al., 2010). In biological systems, morphological computation often takes the form of sophisticated nonlinear compliance and deformation, resulting in a physical system that is more robust and easier to control for a variety of tasks (Paul, 2006; Hauser et al., 2011). This type of behavior is typically not present in off-the-shelf robotic systems and is difficult to design a priori. Mechanical metamaterials, on the other hand, offer a platform for mechanically-intelligent systems using relatively accessible manufacturing techniques, such as 3-D printing.
The mechanical metamaterials we explore in this paper are cellular solids: porous structures where different patterns of macroscopic pores can lead to different nonlinear deformation behaviors. By constructing a solid with a large number of such pores, and then parameterizing the pore shapes nonuniformly across the solid, it is possible to achieve a large design space of nonlinear mechanical structures while nevertheless having a differentiable representation of fixed cardinality. The key to modern machine learning has been the development of massively-parametric composable function approximators in the form of deep neural networks; cellular solids provide a natural physical analog and—as we show in this work—can also be learned with automatic differentiation.
To make progress towards the goal of learnable morphological computation, in this paper we combine metamaterials with deep neural networks into a framework we refer to as a neuromechanical autoencoder (NMA). While traditional mechanical metamaterials are designed for single tasks and actuations, here we propose designs that can solve problems drawn from a distribution over tasks, using a neural network to determine the appropriate actuations. The neural network “encoder” consumes a representation of the task—in this case, achieving a particular deformation—and nonlinearly transforms this into a set of linear actuations which play the role of the latent encoding. These actuations then displace the boundaries of the mechanical metamaterial inducing another nonlinear transformation due to the complex learned geometry of the pores; the resulting deformation corresponds to the “decoder”. By using a differentiable simulator of cellular solids we are able to learn in an end-to-end way both the neural network parameters and the pore shapes so that they can work in tandem. The resulting system exhibits morphological computation in that it learns to split the processing task across the neural network and the physical mechanism.
The paper is structured as follows. We first introduce the abstract setup for the neuromechanical autoencoder, followed by a brief description of our mechanics model, geometry representation, and differentiable simulation. Although important for the success of our method, the details of our dis-
cretization and solver for computational nonlinear elasticity problems are in the appendix. We then describe and detail results of our experiments, which include mechanical tasks, a shape matching experiment, and a new mechanical twist on MNIST classification. We end with related work and a discussion on future steps.
2 METHODS
2.1 NEUROMECHANICAL AUTOENCODER SETUP
We describe the overall setup as pictured in Figure 1. We begin by considering a bounded domain D ⊂ R2 (often square) on which our material exists (in this work we only consider the actuation of 2D geometries). When actuations are applied on the material, its deformation can be described by a displacement field u : D → R2, where u(X) represents the displacement of the particle originally at coordinate X ∈ D. For simplicity, assume u can be discretized and identified by a finite-dimensional vector q ∈ RN . The exact form of the discretization is based on a finite element method variant and is described in the appendix in Section A.1.
Next we specify a distribution of tasks T and an associated loss function L(q; ti) : RN × Rm → R, which depends on a task descriptor ti ∼ T , ti ∈ Rm, and a displacement field specified by q. The loss function often only looks at the deformation of a subset of the material, such as the displacement of a single point, but we are not restricted to this. The task descriptor is meant to be generic: it can be a coordinate, an image, a scalar parameter, etc.
To map task descriptors to displacements, we use a neural encoder Eθ : Rm → Rk and a mechanical decoder Dϕ : Rk → RN . The output of the encoder at ti is understood to be the latent dimension of the autoencoder, and represents the actuations to the mechanical structure. The goal is to choose parameters {θ, ϕ} to minimize the loss over the distribution of tasks:
$$\theta^*, \phi^* = \operatorname*{argmin}_{\theta, \phi}\; \mathbb{E}_{t \sim \mathcal{T}}\left[\mathcal{L}\big(D_\phi(E_\theta(t));\, t\big)\right].$$
Given ∇ϕL(Dϕ(Eθ(t)); t) and ∇θL(Dϕ(Eθ(t)); t) for t ∼ T , we can optimize the objective with standard first-order stochastic gradient methods. One difficulty is that Dϕ(·) is an implicit function of its inputs, computed by solving a partial differential equation (PDE). Furthermore, ϕ represents geometric parameters defining the domain on which the PDE is solved. To effectively compute derivatives of Dϕ, we developed a JAX-based (Bradbury et al., 2018) differentiable elasticity simulator, as described in the next section.
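Because the whole pipeline lives in JAX, the outer optimization reduces to a standard stochastic gradient step over both parameter groups, as in the sketch below. Here `encoder_apply`, `decoder_solve`, and `task_loss` stand in for Eθ, the differentiable equilibrium solve Dϕ, and the task loss L; the plain SGD update and the learning rate are placeholder choices, and mini-batching over tasks is written with `vmap` purely for illustration.

```python
import jax
import jax.numpy as jnp

def nma_objective(theta, phi, task_batch):
    def per_task(t):
        actuations = encoder_apply(theta, t)   # E_theta(t): latent actuations
        q = decoder_solve(phi, actuations)     # D_phi: implicit PDE solve
        return task_loss(q, t)                 # L(q; t)
    return jnp.mean(jax.vmap(per_task)(task_batch))

def sgd_step(theta, phi, task_batch, lr=1e-4):
    loss, (g_theta, g_phi) = jax.value_and_grad(nma_objective, argnums=(0, 1))(
        theta, phi, task_batch)
    theta = jax.tree_util.tree_map(lambda p, g: p - lr * g, theta, g_theta)
    phi = jax.tree_util.tree_map(lambda p, g: p - lr * g, phi, g_phi)
    return theta, phi, loss
```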
2.2 DIFFERENTIABLE SIMULATION
We developed a custom solver for static nonlinear elasticity problems which model the equilibrium of elastic materials under load. The goal is to have a robust and end-to-end differentiable simulator for 2D neuromechanical autoencoders based on mechanical metamaterials. Given geometric design parameters, our solver simulates the structure described by the parameters and computes the gradient (adjoint) with respect to both geometric design parameters and boundary conditions (actuations).
In order to make the solver differentiable with respect to geometric parameters, we implement a version of isogeometric analysis (IGA) (Hughes et al., 2005), a finite element method (FEM) (Hughes, 2012) variant where both the underlying solution and geometry basis are based on B-splines. Using B-spline patches allows us to parameterize our geometry in a flexible and yet robust way while maintaining a differentiable map from geometry parameters to PDE solution.
As our simulator is implemented entirely in JAX, we backpropagate gradients directly through both the simulator and a neural network using automatic differentiation and adjoint methods in tandem. In the next sections, we describe the relevant physics and the geometric representation we used.
2.3 MECHANICAL MODEL
We give a high level description of the mechanical model here, and detail it further in the appendix. In the static equilibrium problems we consider, the solution is a displacement that minimizes some energy. The elastic properties are captured by a hyperelastic strain energy density function, which depends on the local deformation of the material and is independent of the path of deformation. For a given deformation, the potential energy functional Ψ(u) is the integral of the strain energy density over the material domain D. Given boundary conditions, the resulting physical deformation u : D → R2 is one that minimizes Ψ(u) subject to boundary conditions:
$$\mathbf{u}^* = \operatorname*{argmin}_{\mathbf{u} \in \mathcal{H}} \Psi(\mathbf{u})$$
where H is the set of all displacement fields that satisfy prescribed Dirichlet boundary conditions (expressed as equality constraints on the displacement field). To solve this in practice, we discretize u and define a standard representation of the geometry. NMA training is bi-level, where in the inner loop we perform the energy minimization using second-order methods. In the outer loop, the solution u∗ can be regarded as an implicit function of the design parameters and boundary conditions, and gradients with respect to these can be computed using implicit differentiation.
2.4 GEOMETRY REPRESENTATION
The central unit of the metamaterials we design is the cell, a porous shape with a quadrilateral boundary. We initialize the geometry to a regular grid of square cells with simple square pore shapes, similar to that in Figure 2b. During training of the neuromechanical autoencoder, we modify this geometry to minimize the expected loss over a distribution of tasks.
To represent the geometry, we decompose the domain into B-spline patches, each with its own B-spline control points. Each cell is generally composed of four patches; we visualize the decomposition of a representative cell in Figure 2a. The shape of the cell pore is defined by radii (illustrated by ri), whose values specify the relative distance of the pore edge from the centroid of the cell (e.g., a cell having radii all 0.0 corresponds to a completely closed cell). We combine all the radii in all cells into a radii array r ∈ [0, 1]R, which becomes one of our geometric parameters. For further flexibility, we also allow the shapes of the cells to change within a grid of cells. The corners of the cells, labeled ci in Figure 2b, are allowed to deviate within a specific box around their values in the initial square lattice-like geometry. Figure 2a shows a cell whose corners have been perturbed during training. The deviation bound ensures that the shapes do not degenerate during NMA training. The array of corner locations c and radii r comprise our geometric parameters. The outer boundary of the structure is constrained not to change during NMA optimization, as this would otherwise create inconsistent boundary conditions between designs.
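In code, keeping the geometric parameters feasible can be as simple as a box projection applied after each gradient update, as sketched below; the corner deviation bound is an assumed value, while the radii bounds match the clipping range reported in the appendix.

```python
import jax.numpy as jnp

def project_geometry(r, c, c_init, corner_dev=0.1, r_min=0.1, r_max=0.9):
    """Project radii and cell-corner parameters back onto their box constraints.
    r: pore radii clipped to [r_min, r_max]; c: corner locations allowed to
    deviate at most corner_dev (an illustrative bound) from the initial c_init."""
    r = jnp.clip(r, r_min, r_max)
    c = jnp.clip(c, c_init - corner_dev, c_init + corner_dev)
    return r, c
```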
2.5 DISCRETIZATION AND END-TO-END DIFFERENTIABILITY
The details of our discretization are in the Appendix. We mention two important notes here. The first is that careful selection of geometric parameters is critical to being able to differentiate with respect to them. In particular, given the geometric parameters we can construct a differentiable map to the B-spline control points representing the geometry of the model. The analogy in standard FEM would be that our “meshing” operation is fully differentiable. Part of the reason differentiability is always satisfied is that the cardinality and topology of the control points remain the same given any valid setting of geometric parameters. Another important note is that all of our geometric parameters, {r, c}, are only constrained by simple box constraints, so that first order constrained optimization with them is straightforward. This parameterization is robust in the sense that for any value of the geometric parameters within the box constraints, we have a valid geometry.
3 EXPERIMENTS
3.1 TRANSLATION AND ROTATION
The first task we tackle is how to perform translation given a limited degree of control. Consider the setup in Figure 3. We have a 5× 5 cellular solid fixed on the corners. The goal is to be able to move the green pointer in the middle of the solid to anywhere within the blue square given only horizontal displacements of the edges. The space of tasks is a small box B, and the task is defined by a single coordinate; the task descriptor t ∈ R2 is sampled uniformly from B. The task descriptor is mapped to two horizontal actuations by a simple fully-connected neural network (NN). The loss function is mean squared error of the green pointer after deformation from the goal.
In the absence of a domain-specific design, the allowed actuations limit the achievable motions to only left-right displacements. As demonstrated in Figure 3 (right), given a red goal point on the bottom right the best we can do with the unoptimized geometry is match the x-coordinate. If we do end-to-end NMA training, however, we converge on the shape in Figure 4. The learned geometry is able to achieve any translation within the blue square. The diagonal pore shapes enable translating compression of the material to downwards motion of the pointer, and tension to upwards motion. Here the joint learning of geometry and control offers a clear benefit: we converge to a nontrivial solution that discovers how to use its geometric nonlinearity with its NN controller.
To demonstrate transfer to the real world, we manufactured the resulting structure and qualitatively verified its behavior. As predicted by our simulation, the neuromechanical autoencoder can facilitate displacement of a central point upwards via tension, downwards via compression, and diagonally via partial compression. See Figure 5 for details.
The next task is another mechanical task: rotation. We would like to see how the NN and metamaterial can work together to learn to translate linear actuation into rotation of part of the structure. The task description is now the angle of rotation: t ∈ [−π, π]. We have two setups for this problem. In the first, we use a small fully-connected NN to map angle into a single actuation, applied equally on both sides of the metamaterial (Figure 6). We apply actuations on both sides to avoid translating the middle square in addition to rotation. In the second, the NN maps into two actuations, one applied left-right and one applied top-down (Figure 7). In both cases we consider a 7× 7 metamaterial. The goal is rotation of the blue stick counter-clockwise by an angle t around the center; the loss function is mean square error from the goal of the two points on the ends of the stick.
In the first setup, we are able to achieve unidirectional rotation between [0, π/4] (Figure 6, right), while for the second setup, we can achieve bi-directional rotations between [−π/6, π/6] (Figure 7). Without geometry and control co-design, achieving rotation from linear actuation would require an intuition-driven design; the joint NMA training, however, is able to make good progress automatically.
3.2 SHAPE MATCHING
Next we consider a much higher-dimensional task space. Given a family of shapes parameterized by 2-dimensional coordinates, we would like to design a mechanical decoder and a neural encoder that can map coordinates to actuations deforming the structure to resemble a sampled shape from the family as closely as possible.
In particular, we consider a family generated by a log Gaussian process in polar coordinates, approximated via random Fourier features (Rahimi & Recht, 2007). Given a metamaterial with a large central pore, such as in Figure 8a, we would like to deform it to match any of the shapes in the
[Figure 9 (panels): target shapes overlaid with the deformed central pore, alternating “Target / NN + Geometry” and “Target / NN only”; all axes span −2.0 to 2.0.]
family. The task description t is an n × 2 dimensional array of coordinates defining the shape. A fully-connected neural network translates these to 12 actuations applied around the material (as shown in Figure 8a). The final loss function is an ℓ1 loss between the control points defining the middle pore after deformation and the points defining the shape. When comparing, we normalize the scale of both shapes, and perform Procrustes analysis for rotation invariance.
We train one version where the geometry and neural network are optimized jointly, and one version where only the neural network is optimized for the starting geometry. Figure 8a shows the nonoptimized and optimized geometry. After learning, Figure 9 shows qualitative results of how well the jointly learned metamaterial compares with the control-only material. Jointly learning geometry for the shape family allows us to capture much finer features in the target shape. In Figure 8b
we visualize the (stochastic) loss during training. The jointly learned metamaterial converges to a significantly lower loss value, showing the benefit of harnessing the geometric nonlinearity.
3.3 DIGITAL MNIST
For our last task we attempt to create a mechanical seven-segment display for classifying MNIST digits. Towards this we add an additional design variable for the material: color. Our starting metamaterial is pictured in Figure 10a, a version of the metamaterial that is initially assigned a color value of 0 everywhere. We treat color, parameterized by a B-spline patch over the metamaterial, as an additional geometric design parameter that can be optimized with NMA training.
Our input to the neural network is an image sampled from the MNIST dataset. The neural network then produces actuations that deform the metamaterial to produce a seven-segment representation of the MNIST digit when viewed through small slits. Figure 10b visualizes the learned color map, and Figure 10c shows the structure with slits added. The loss function is manually specified for each digit, e.g., if an MNIST digit has a label of “1” then the right two slits should contain color value 1.0, while the rest should contain 0.0. The full setup is displayed in Figure 11, with samples after training displayed in Figure 12. Although this can be learned from scratch end-to-end, to speed up training we first learned colors and actuations to be able to reproduce all 10 digits, and then trained
a small feed-forward neural network to match the actuations for each digit. We then set up the entire pipeline and finetuned end-to-end for better performance. We note that the 7 segments are controlled by only 6 actuations, so by restricting the family of objects displayed to the 10 digits, we allow the neural encoder and mechanical decoder to learn underactuated control of all 7 segments. Additional samples are presented in the Appendix. We also note that the pore shapes did not have to change significantly to accomplish this task. The only “geometry” design was through the coloring, which as visualized in Figure 10b turns out to be highly nontrivial.
4 DISCUSSION AND RELATED WORK
Differentiable Simulation The abundance of differentiable simulators has demonstrated their usefulness in designing novel systems. Hu et al. (2019) developed a differentiable simulator with hand-written custom CUDA physics kernels that enabled material inference, control of a soft walker, and co-design of a swinging robot arm. Sanchez-Gonzalez et al. (2020) presents an ML framework to model a variety of physical domains to solve forward and inverse problems using a graph neural network approach. Mozaffar & Cao (2021) developed a differentiable finite element simulator to control and infer material parameters within the context of additive manufacturing processes. Liang et al. (2019) developed a differentiable cloth simulator, and Ham et al. (2019) automated the calculation of weak shape derivatives within the context of finite elements to solve PDE constrained shape optimization problems. In our work, our differentiable simulator is developed specifically to aid in neuromechanical autoencoder design.
Mechanical Metamaterials As the rational design of nonlinear mechanical materials is often unintuitive, modern machine learning approaches have enabled faster design. Deng et al. (2022) coupled a neural accelerated mass spring model that facilitated an evolutionary approach to design functional structures. Mao et al. (2020) applied generative adversarial networks to design unit cells for architected metamaterials, Kumar et al. (2020) introduced a novel class of anisotropic metamaterials and a machine learning method for the inverse design of their geometry given desired elasticity properties, Beatson et al. (2020) learned a reduced order model to speed up simulation of cellular metamaterials, Xue et al. (2020) introduced a homogenization approach for cellular metamaterials, and Xue & Mao (2022) introduced a mapped shape approach to design metamaterials to fit a prescribed strain energy curve. Our work uses classical gradient/adjoint methods to optimize the geometric parameters, but could be combined with machine learning methods to speed up simulation and hence enable faster NMA training.
4.1 LIMITATIONS AND FUTURE WORK
We introduce the framework of neuromechanical autoencoders, inspired by the biological coevolution of control and morphology. We present a method for automatic design of these systems, and show a number of results that produce nontrivial behavior through co-design, both in simulation and in the real world. We believe this is a small but significant step on the road to designing mechanically-intelligent systems. The two major bottlenecks in our approach are the runtime of PDE solving and the geometry parameterization. Fast PDE solving, especially for nonlinear PDEs such as the ones we use, is a very active area of research, and is crucial to scaling up NMA design. In terms of geometry parameterization, the key is to find a space of materials that have a complex range of mechanical deformation properties and yet are easy to simulate. For this paper, 2D cellular solids with nonuniform pore shapes were a great ansatz, but future work could understand and quantify how much “computation” these materials can do. Scaling up to 3-dimensional intelligent mechanical models, as well as including dynamics, would significantly improve the computational capabilities, but would require much faster solvers. This is a key focus of our future work.
4.2 ACKNOWLEDGEMENTS
We would like to thank Alex Beatson, Geoffrey Roeder, Jordan Ash, and Tianju Xue for early conversations around this work. We also thank PT Brun for assisting with fabrication. This work was partially supported by NSF grants IIS-2007278 and OAC-2118201, the NSF under grant number 2127309 to the Computing Research Association for the CIFellows 2021 Project, and a Siemens PhD fellowship.
A APPENDIX
A.1 DISCRETIZATION AND IGA
We discretize the geometric domain into $P$ isogeometric patches. All patches use the same B-spline basis functions $B_{ij}(\xi, \eta)$ (Piegl & Tiller, 1997) with $i, j \in [n_p]$ and $\xi, \eta \in [0, 1]$, where $n_p$ is the number of control points. The domains of $\xi$ and $\eta$ correspond to the parent domain of each patch (Figure 13). In the B-spline literature, the parent domain is often referred to as the knot span. The basis functions are piecewise polynomial with degree specified as a parameter. In all of our experiments, we use piecewise quadratic B-spline functions. Each B-spline basis function corresponds to a control point, and each control point represents two degrees of freedom in 2D space, i.e., each patch $p \in [P]$ has $n_p \times n_p \times 2$ degrees of freedom ($x_p^{ij}$ and $y_p^{ij}$). The mapping from the parent domain of each patch to the physical domain is given by a linear combination of B-spline basis functions, where the weights of the linear combination are given by the control point coordinates. Explicitly, the mapping function $\phi_p$ from parent space of patch $p$ to physical space is given by
$$\phi_p(\xi, \eta; \mathbf{x}, \mathbf{y}) = \sum_{i=1}^{n_p} \sum_{j=1}^{n_p} \begin{bmatrix} x_p^{ij} \\ y_p^{ij} \end{bmatrix} B_{ij}(\xi, \eta), \qquad (1)$$
where $[x_p^{ij}, y_p^{ij}]$ are the control points parameterizing the mapping. In our simulator, we represent the reference configuration with reference control points $[X_p^{ij}, Y_p^{ij}]$; these are determined by our geometry parameters $\{r, c\}$ through a differentiable map. We then parameterize the deformed geometry with the same basis, using deformed control points $[x_p^{ij}, y_p^{ij}]$. For a given deformation, the integral in Eq. 3 representing the potential energy can be computed by a pullback in the parent domain, using standard Gaussian quadrature as is standard in FEM (Hughes et al., 2005).
Dirichlet boundary conditions of the type we use in this paper can be represented as constraints on a subset of the control points. For each boundary condition, the corresponding reference and deformed control points are prescribed to have particular displacement values. Furthermore, since our geometry is decomposed into multiple neighboring patches, the control points must also have incidence constraints amongst them. These are kept track of using constraint groups, where each group has a representative element.
A.2 MECHANICAL MODEL
In the static equilibrium problems we consider, the solution is a displacement that minimizes some energy function. In particular, we use a nearly incompressible Neo-Hookean material model (Ogden,
1997), in which the elastic properties are captured by a hyperelastic strain energy density function. This function, W (F),W : R2×2 → R, is independent of the path of deformation and is a function of the deformation gradient tensor, Fij = ∂ui/∂Xj + I where X ∈ D ⊂ R2 represents the position of a particle in the undeformed reference configuration, and u(X) is the displacement field. Here,
$$W(\mathbf{F}) = \frac{\mu}{2}\left(I_1 - 2 - 2\log J\right) + \frac{\kappa}{2}(\log J)^2, \qquad (2)$$
where J = det(F), I1 = tr(FTF), and µ = E/2(1 + ν) and κ = E/3(1− 2ν) are shear and bulk moduli of a material with Young’s modulus E and Poisson’s ratio ν, respectively. This is a standard choice for hyperelastic material modeling that transfers well to the real-world. We can solve for the displacement by finding the stationary point of the potential energy functional Ψ(u),
$$\mathbf{u}^* = \operatorname*{argmin}_{\mathbf{u} \in \mathcal{H}} \Psi(\mathbf{u}), \qquad \Psi(\mathbf{u}) = \int_D W(\mathbf{F})\, d\mathbf{X} = \int_D W\!\left(\left.\frac{\partial \mathbf{u}}{\partial \mathbf{X}}\right|_{\mathbf{X}=\mathbf{X}'} + \mathbf{I}\right) d\mathbf{X}' \qquad (3)$$
where H is the set of all displacement fields that satisfy prescribed Dirichlet boundary conditions (expressed as equality constraints on the displacement field). To solve this in practice, we discretize u and define a standard representation of the geometry. Abstractly, the solution u∗ can be regarded as an implicit function of the design parameters and boundary conditions. The solution u∗ can be computed in a discretized form using standard second-order optimization algorithms, and gradients can be computed using implicit differentiation.
A.3 END-TO-END DIFFERENTIABILITY
We would like to reiterate that our differentiability conditions are satisfied, so that we can train neuromechanical autoencoders end-to-end. We first define a differentiable map from geometry parameters and Dirichlet boundary condition values into the reference B-spline control points. This is then used to construct the Πl,Πg functions, both of which are differentiable. The global vector q is then passed into a black-box optimizer to produce the solution q∗. Gradients with respect to the solution of the optimizer are computed using adjoint optimization. The solution q∗ is then mapped back into local coordinates using Πl, and is used to compute the loss function L. This pipeline ensures that we have a differentiable map from geometry parameters and boundary conditions (actuations) to the NMA task loss function, so we can proceed to train the NMA objective using stochastic gradient descent. In the next section, we demonstrate specific applications.
A.4 SOLVER DETAILS
After discretization to q, we solve the energy minimization using Newton’s method with incremental loading. The Hessian of the energy is assembled in sparse form using the trick from Powell & Toint (1979). Using the discretization of the system we automatically derive the sparsity pattern of the Hessian, and then construct appropriate binary vectors to perform Hessian-vector products with. We then reshape these into a CSR matrix representation of the Hessian.
The sparse linear systems are then solved by GMRES (Saad & Schultz, 1986) preconditioned by an incomplete LU decomposition. Since the energy involves log detF , where F is the deformation gradient, taking a finite step can lead to numerical blowup. Therefore our incremental loading is adaptive, and a line search is performed to avoid inversion of elements in the geometry.
A.5 EXPERIMENTAL DETAILS
All B-spline patches used were quadratic and contained 5× 5 control points. Quadrature was done by degree 5 Gauss-Legendre. Most computation was done on NVIDIA RTX 2080 GPUs. Each solve instance was done on a single GPU, and mini-batching was done by parallelizing with MPI. Each MPI task used a single GPU. The radii parameters were clipped to [0.1, 0.9] to aid solver stability.
A.5.1 TRANSLATION TASK
The learning rate was 0.0001∗M where M is the number of MPI tasks. In this case, we used 8 MPI tasks. The neural network was a fully-connected network with activation sizes: 2−30−30−10−2 (including input/output). The final layer was clipped by a tanh and multiplied by a maximum displacement of 60% of cell width.
A.5.2 ROTATION TASK
The learning rate was 0.001 ∗M for single and 0.01 ∗M for double rotation. M is the number of MPI tasks. In this case, we used 8 MPI tasks. The neural network was a fully-connected network with activation sizes: 1 − 20 − 10 − 1 (including input/output) or 1 − 20 − 10 − 2 for the double actuation. The final layer was clipped by a tanh and multiplied by a maximum displacement of 60% of cell width.
A.5.3 SHAPE MATCHING TASK
The learning rate was 0.0001∗M where M is the number of MPI tasks. In this case, we used 16 MPI tasks. The neural network was a fully-connected network with activation sizes: 98−200−200−12 (including input/output). The final layer was clipped by a tanh and multiplied by a maximum displacement of 60% of cell width.
A.6 DIGITAL MNIST TASK
Initially we learned colors and actuations for a lookup table of 10 digits. This was trained with a learning rate of 0.01 ∗ M where M is the number of MPI tasks. We used 10 MPI tasks, one per digit. We clipped the maximum displacement to 60% of cell width using tanh. Afterwards, we trained a fully-connected neural network to map MNIST digits to the actuations of the corresponding digit. We then put the neural network to map directly to actuations, and finetuned end-to-end with a learning rate of 0.0001.
A.7 ADDITIONAL PORE MATCHING RESULTS
A.8 ADDITIONAL DIGITAL MNIST RESULTS | 1. What is the focus and contribution of the paper on metamaterials design?
2. What are the strengths of the proposed approach, particularly in terms of neural network-based control and simulation-based decoder?
3. What are the weaknesses of the paper regarding its reliance on existing methods and lack of discussion on time complexity and scalability?
4. Do you have any questions or concerns regarding the choice of loss functions, training data generation, and material models?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This work presents a co-design of metamaterials-based mechanical/morphological actuation and neural network-based control. The control tasks are allowed to be an image, coordinate, or scalar parameter based on the end goal, which are mapped onto a latent dimension (using an encoder) matching the number of linear actuations. The decoder is a differentiable simulation that solves for equilibrium of non-linear elastic materials under mechanical load and takes the geometric design parameters and actuations (output from the encoder). The solver is made differentiable wrt the geometry parameters (represented as a porous shape with quadrilateral boundary) by adopting isogeometric/FEM analysis and a B-spline basis to solve the PDE. The results show that the geometry and control co-design is able to achieve translation, rotation and shape matching efficiently and better than the geometry-only scenario. It is interesting to see the digital MNIST benchmark and the application of this approach to that case.
Strengths And Weaknesses
Strengths:
The use of neural networks to converts tasks to actuations and that used along with the simulation-based decoder in a single differentiable framework shows great promise in achieving nontrivial displacements
The adoption of cellular solids and B-spline patches into the mechanical simulation/decoder seems to work well to produce stable solids.
Weaknesses:
No new algorithmic/machine learning approaches have been proposed
The work is just coupling a simple multi-layer perceptron to a well-designed simulator (PDE solver) to achieve the joint optimization of geometry and boundary conditions
Time complexity of training/learning in this approach is not discussed
What is the motivation for using L2 loss in one case and L1 in the other?
How expensive is the generation of training data; is this approach scalable to train with larger datasets for more complex deformations?
What is the reason to choose the Neo-Hooken material model?
Clarity, Quality, Novelty And Reproducibility
The paper is well written and a good demonstration of a machine learning application. Novelty in the machine learning algorithms is not present. Code is shared for reproducibility.
ICLR | Title
Neuromechanical Autoencoders: Learning to Couple Elastic and Neural Network Nonlinearity
Abstract
Intelligent biological systems are characterized by their embodiment in a complex environment and the intimate interplay between their nervous systems and the nonlinear mechanical properties of their bodies. This coordination, in which the dynamics of the motor system co-evolved to reduce the computational burden on the brain, is referred to as “mechanical intelligence” or “morphological computation”. In this work, we seek to develop machine learning analogs of this process, in which we jointly learn the morphology of complex nonlinear elastic solids along with a deep neural network to control it. By using a specialized differentiable simulator of elastic mechanics coupled to conventional deep learning architectures— which we refer to as neuromechanical autoencoders—we are able to learn to perform morphological computation via gradient descent. Key to our approach is the use of mechanical metamaterials—cellular solids, in particular—as the morphological substrate. Just as deep neural networks provide flexible and massivelyparametric function approximators for perceptual and control tasks, cellular solid metamaterials are promising as a rich and learnable space for approximating a variety of actuation tasks. In this work we take advantage of these complementary computational concepts to co-design materials and neural network controls to achieve nonintuitive mechanical behavior. We demonstrate in simulation how it is possible to achieve translation, rotation, and shape matching, as well as a “digital MNIST” task. We additionally manufacture and evaluate one of the designs to verify its real-world behavior.
1 INTRODUCTION
Mechanical intelligence, or morphological computation (Paul, 2006; Hauser et al., 2011), is the idea that the physical dynamics of an actuator may interact with a control system to effectively reduce the computational burden of solving the control task. Biological systems perform morphological computation in a variety of ways, from the compliance of digits in primate grasping (Jeannerod, 2009; Heinemann et al., 2015), to the natural frequencies of legged locomotion (Collins et al., 2005; Holmes et al., 2006; Ting & McKay, 2007), to dead fish being able to “swim” in vortices (Beal et al., 2006; Lauder et al., 2007; Eldredge & Pisani, 2008). Both early (Sims, 1994) and modern (Gupta et al., 2021) work have used artificial evolutionary methods to design mechanical intelligence, but it has remained difficult to design systems de novo that are comparable to biological systems that have evolved over millions of years. We ask:
Can we instead learn morphological computation using gradient descent?
Morphological computation requires that a physical system be capable of performing complex tasks using, e.g., elastic deformation. The mechanical system’s nonlinear properties work in tandem with neural information processing so that challenging motor tasks require less computation. To learn an artificial mechanically-intelligent system, we must therefore be able to parameterize a rich space of mechanisms with the capability of implementing nonlinear physical “functions” that connect input forces or displacements to the desired output behaviors. There are various desiderata for such a
mechanical design space: 1) it must contain a wide variety of structures with complex nonlinear elastic deformation patterns; 2) its parameters should be differentiable and of fixed cardinality; and 3) the designs should be easily realizable with standard manufacturing techniques and materials. These characteristics are achieved by mechanical metamaterials.
Metamaterials are structured materials that have properties unavailable from natural materials. Although metamaterials are often discussed in the context of electromagnetic phenomena, there is substantial interest in the development of mechanical metamaterials in which geometric heterogeneity achieves unusual macroscopic behavior such as a negative Poisson’s ratio (Bertoldi et al., 2010). In biological systems, morphological computation often takes the form of sophisticated nonlinear compliance and deformation, resulting in a physical system that is more robust and easier to control for a variety of tasks (Paul, 2006; Hauser et al., 2011). This type of behavior is typically not present in off-the-shelf robotic systems and is difficult to design a priori. Mechanical metamaterials, on the other hand, offer a platform for mechanically-intelligent systems using relatively accessible manufacturing techniques, such as 3-D printing.
The mechanical metamaterials we explore in this paper are cellular solids: porous structures where different patterns of macroscopic pores can lead to different nonlinear deformation behaviors. By constructing a solid with a large number of such pores, and then parameterizing the pore shapes nonuniformly across the solid, it is possible to achieve a large design space of nonlinear mechanical structures while nevertheless having a differentiable representation of fixed cardinality. The key to modern machine learning has been the development of massively-parametric composable function approximators in the form of deep neural networks; cellular solids provide a natural physical analog and—as we show in this work—can also be learned with automatic differentiation.
To make progress towards the goal of learnable morphological computation, in this paper we combine metamaterials with deep neural networks into a framework we refer to as a neuromechanical autoencoder (NMA). While traditional mechanical metamaterials are designed for single tasks and actuations, here we propose designs that can solve problems drawn from a distribution over tasks, using a neural network to determine the appropriate actuations. The neural network “encoder” consumes a representation of the task—in this case, achieving a particular deformation—and nonlinearly transforms this into a set of linear actuations which play the role of the latent encoding. These actuations then displace the boundaries of the mechanical metamaterial inducing another nonlinear transformation due to the complex learned geometry of the pores; the resulting deformation corresponds to the “decoder”. By using a differentiable simulator of cellular solids we are able to learn in an end-to-end way both the neural network parameters and the pore shapes so that they can work in tandem. The resulting system exhibits morphological computation in that it learns to split the processing task across the neural network and the physical mechanism.
The paper is structured as follows. We first introduce the abstract setup for the neuromechanical autoencoder, followed by a brief description of our mechanics model, geometry representation, and differentiable simulation. Although important for the success of our method, the details of our discretization and solver for computational nonlinear elasticity problems are in the appendix. We then describe and detail results of our experiments, which include mechanical tasks, a shape matching experiment, and a new mechanical twist on MNIST classification. We end with related work and a discussion on future steps.
2 METHODS
2.1 NEUROMECHANICAL AUTOENCODER SETUP
We describe the overall setup as pictured in Figure 1. We begin by considering a bounded domain D ⊂ R2 (often square) on which our material exists (in this work we only consider the actuation of 2D geometries). When actuations are applied on the material, its deformation can be described by a displacement field u : D → R2, where u(X) represents the displacement of the particle originally at coordinate X ∈ D. For simplicity, assume u can be discretized and identified by a finite-dimensional vector q ∈ RN . The exact form of the discretization is based on a finite element method variant and is described in the appendix in Section A.1.
Next we specify a distribution of tasks T and an associated loss function L(q; ti) : RN × Rm → R, which depends on a task descriptor ti ∼ T , ti ∈ Rm, and a displacement field specified by q. The loss function often only looks at the deformation of a subset of the material, such as the displacement of a single point, but we are not restricted to this. The task descriptor is meant to be generic: it can be a coordinate, an image, a scalar parameter, etc.
To map task descriptors to displacements, we use a neural encoder Eθ : Rm → Rk and a mechanical decoder Dϕ : Rk → RN . The output of the encoder at ti is understood to be the latent dimension of the autoencoder, and represents the actuations to the mechanical structure. The goal is to choose parameters {θ, ϕ} to minimize the loss over the distribution of tasks:
$$\theta^*, \phi^* = \operatorname*{argmin}_{\theta,\,\phi}\; \mathbb{E}_{t \sim \mathcal{T}}\left[\, L\!\left(D_\phi(E_\theta(t));\, t\right) \right].$$
Given ∇ϕL(Dϕ(Eθ(t)); t) and ∇θL(Dϕ(Eθ(t)); t) for t ∼ T , we can optimize the objective with standard first-order stochastic gradient methods. One difficulty is that Dϕ(·) is an implicit function of its inputs, computed by solving a partial differential equation (PDE). Furthermore, ϕ represents geometric parameters defining the domain on which the PDE is solved. To effectively compute derivatives of Dϕ, we developed a JAX-based (Bradbury et al., 2018) differentiable elasticity simulator, as described in the next section.
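To make the setup concrete, here is a minimal sketch (not the paper's code) of the outer training loop implied by the objective above, written in JAX. The encoder, the toy `mechanical_decoder`, the task sampler, and all sizes are our own illustrative stand-ins; in the paper the decoder is the PDE-constrained simulator of Section 2.2.

```python
import jax
import jax.numpy as jnp

def encoder_apply(theta, t):
    # tiny fully-connected encoder: task descriptor t -> k latent actuations
    h = jnp.tanh(theta["W1"] @ t + theta["b1"])
    return jnp.tanh(theta["W2"] @ h + theta["b2"])

def mechanical_decoder(phi, a):
    # placeholder for the differentiable simulator D_phi (Section 2.2);
    # here just a smooth parametric map so the sketch runs end to end
    return jnp.tanh(phi @ a)

def task_loss(q, t):
    # e.g. squared error of a tracked material point against the target t
    return jnp.mean((q[:2] - t) ** 2)

def objective(params, t):
    theta, phi = params
    a = encoder_apply(theta, t)        # latent actuations
    q = mechanical_decoder(phi, a)     # equilibrium displacement field
    return task_loss(q, t)

key = jax.random.PRNGKey(0)
k1, k2, k3 = jax.random.split(key, 3)
theta = {"W1": 0.1 * jax.random.normal(k1, (16, 2)), "b1": jnp.zeros(16),
         "W2": 0.1 * jax.random.normal(k2, (4, 16)), "b2": jnp.zeros(4)}
phi = 0.1 * jax.random.normal(k3, (8, 4))
params = (theta, phi)

@jax.jit
def sgd_step(params, t):
    loss, grads = jax.value_and_grad(objective)(params, t)
    new_params = jax.tree_util.tree_map(lambda p, g: p - 1e-2 * g, params, grads)
    return new_params, loss

for step in range(200):
    key, sub = jax.random.split(key)
    t = jax.random.uniform(sub, (2,), minval=-1.0, maxval=1.0)  # t ~ T
    params, loss = sgd_step(params, t)
```

In the actual system, the gradient through `mechanical_decoder` is obtained by implicit differentiation of the PDE solve rather than by direct backpropagation through a closed-form map.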
2.2 DIFFERENTIABLE SIMULATION
We developed a custom solver for static nonlinear elasticity problems which model the equilibrium of elastic materials under load. The goal is to have a robust and end-to-end differentiable simulator for 2D neuromechanical autoencoders based on mechanical metamaterials. Given geometric design parameters, our solver simulates the structure described by the parameters and computes the gradient (adjoint) with respect to both geometric design parameters and boundary conditions (actuations).
In order to make the solver differentiable with respect to geometric parameters, we implement a version of isogeometric analysis (IGA) (Hughes et al., 2005), a finite element method (FEM) (Hughes, 2012) variant where both the underlying solution and geometry basis are based on B-splines. Using B-spline patches allows us to parameterize our geometry in a flexible and yet robust way while maintaining a differentiable map from geometry parameters to PDE solution.
As our simulator is implemented entirely in JAX, we backpropagate gradients directly through both the simulator and a neural network using automatic differentiation and adjoint methods in tandem. In the next sections, we describe the relevant physics and the geometric representation we used.
2.3 MECHANICAL MODEL
We give a high level description of the mechanical model here, and detail it further in the appendix. In the static equilibrium problems we consider, the solution is a displacement that minimizes some energy. The elastic properties are captured by a hyperelastic strain energy density function, which depends on the local deformation of the material and is independent of the path of deformation. For a given deformation, the potential energy functional Ψ(u) is the integral of the strain energy density over the material domain D. Given boundary conditions, the resulting physical deformation u : D → R2 is one that minimizes Ψ(u) subject to boundary conditions:
$$u^* = \operatorname*{argmin}_{u \in H}\; \Psi(u)$$
where H is the set of all displacement fields that satisfy prescribed Dirichlet boundary conditions (expressed as equality constraints on the displacement field). To solve this in practice, we discretize u and define a standard representation of the geometry. NMA training is bi-level, where in the inner loop we perform the energy minimization using second-order methods. In the outer loop, the solution u∗ can be regarded as an implicit function of the design parameters and boundary conditions, and gradients with respect to these can be computed using implicit differentiation.
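As an illustration of this bi-level structure, the following is a small, self-contained sketch of implicit differentiation through an energy minimizer in JAX. The quadratic-plus-quartic `energy` and the plain gradient-descent `inner_solve` are toy stand-ins (the paper's inner loop is a Newton solver on the discretized elastic energy); the backward rule applies the implicit function theorem at the stationary point.

```python
import jax
import jax.numpy as jnp

def energy(q, phi):
    # toy stand-in for the discretized potential Psi(q; phi)
    return 0.5 * jnp.sum((q - phi) ** 2) + 0.1 * jnp.sum(q ** 4)

def inner_solve(q0, phi, steps=200, lr=0.1):
    # placeholder inner loop; the paper uses Newton with incremental loading
    g = jax.grad(energy, argnums=0)
    def body(q, _):
        return q - lr * g(q, phi), None
    q_star, _ = jax.lax.scan(body, q0, None, length=steps)
    return q_star

@jax.custom_vjp
def mechanical_decoder(phi, q0):
    return inner_solve(q0, phi)

def _fwd(phi, q0):
    q_star = inner_solve(q0, phi)
    return q_star, (q_star, phi)

def _bwd(res, g_out):
    q_star, phi = res
    # At the minimum, grad_q Psi(q*, phi) = 0, so by the implicit function theorem
    # the vector-Jacobian product w.r.t. phi is -(d/dphi grad_q Psi)^T H^{-1} g_out.
    H = jax.hessian(energy, argnums=0)(q_star, phi)
    lam = jnp.linalg.solve(H, g_out)                        # adjoint solve
    _, vjp_phi = jax.vjp(lambda p: jax.grad(energy, argnums=0)(q_star, p), phi)
    (g_phi,) = vjp_phi(-lam)
    return g_phi, jnp.zeros_like(q_star)                    # no gradient w.r.t. q0

mechanical_decoder.defvjp(_fwd, _bwd)

phi = jnp.array([0.3, -0.2, 0.5])
q0 = jnp.zeros(3)
outer_loss = lambda p: jnp.sum(mechanical_decoder(p, q0) ** 2)
print(jax.grad(outer_loss)(phi))
```

JAX also provides `jax.lax.custom_root` for this pattern; the rule is written out here to make the adjoint solve explicit.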
2.4 GEOMETRY REPRESENTATION
The central unit of the metamaterials we design is the cell, a porous shape with a quadrilateral boundary. We initialize the geometry to a regular grid of square cells with simple square pore shapes, similar to that in Figure 2b. During training of the neuromechanical autoencoder, we modify this geometry to minimize the expected loss over a distribution of tasks.
To represent the geometry, we decompose the domain into B-spline patches, each with its own B-spline control points. Each cell is generally composed of four patches; we visualize the decomposition of a representative cell in Figure 2a. The shape of the cell pore is defined by radii (illustrated by ri), whose values specify the relative distance of the pore edge from the centroid of the cell (e.g., a cell having radii all 0.0 corresponds to a completely closed cell). We combine all the radii in all cells into a radii array r ∈ [0, 1]^R, which becomes one of our geometric parameters. For further flexibility, we also allow the shapes of the cells to change within a grid of cells. The corners of the cells, labeled ci in Figure 2b, are allowed to deviate within a specific box around their values in the initial square lattice-like geometry. Figure 2a shows a cell whose corners were perturbed during training. The deviation bound ensures that the shapes do not degenerate during NMA training. The array of corner locations c and radii r comprise our geometric parameters. The outer boundary of the structure is constrained not to change during NMA optimization, as this would otherwise create inconsistent boundary conditions between designs.
2.5 DISCRETIZATION AND END-TO-END DIFFERENTIABILITY
The details of our discretization are in the Appendix. We mention two important notes here. The first is that careful selection of geometric parameters is critical to being able to differentiate with respect to them. In particular, given the geometric parameters we can construct a differentiable map to the B-spline control points representing the geometry of the model. The analogy in standard FEM would be that our “meshing” operation is fully differentiable. Part of the reason differentiability is always satisfied is that the cardinality and topology of the control points remain the same given any valid setting of geometric parameters. Another important note is that all of our geometric parameters, {r, c}, are only constrained by simple box constraints, so that first order constrained optimization with them is straightforward. This parameterization is robust in the sense that for any value of the geometric parameters within the box constraints, we have a valid geometry.
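A minimal sketch of how the box-constrained geometric parameters might be handled in a projected first-order update follows; the dictionary layout, the corner-deviation half-width, and the array shapes are illustrative assumptions, while the radii clipping range matches the values reported in Section A.5.

```python
import jax.numpy as jnp

def project_geometry(geom, corner_halfwidth=0.2):
    # clip pore radii to the valid range used for solver stability (Section A.5)
    radii = jnp.clip(geom["radii"], 0.1, 0.9)
    # keep cell corners inside an assumed deviation box around their initial positions
    corners = jnp.clip(geom["corners"],
                       geom["corners_init"] - corner_halfwidth,
                       geom["corners_init"] + corner_halfwidth)
    return {**geom, "radii": radii, "corners": corners}

geom = {"radii": jnp.full((25, 4), 0.5),       # e.g. 5x5 cells, 4 radii each (assumed)
        "corners": jnp.zeros((36, 2)),         # corner positions relative to the lattice
        "corners_init": jnp.zeros((36, 2))}
geom = project_geometry(geom)
```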
3 EXPERIMENTS
3.1 TRANSLATION AND ROTATION
The first task we tackle is how to perform translation given a limited degree of control. Consider the setup in Figure 3. We have a 5× 5 cellular solid fixed on the corners. The goal is to be able to move the green pointer in the middle of the solid to anywhere within the blue square given only horizontal displacements of the edges. The space of tasks is a small box B, and the task is defined by a single coordinate; the task descriptor t ∈ R2 is sampled uniformly from B. The task descriptor is mapped to two horizontal actuations by a simple fully-connected neural network (NN). The loss function is mean squared error of the green pointer after deformation from the goal.
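For concreteness, the following is a sketch of the encoder used here, following the layer sizes reported in Section A.5.1 (2−30−30−10−2) and the tanh clipping of the output to a maximum displacement; the tanh hidden activations, the initialization scheme, and a cell width of 1.0 are our assumptions.

```python
import jax
import jax.numpy as jnp

def init_mlp(key, sizes=(2, 30, 30, 10, 2)):
    keys = jax.random.split(key, len(sizes) - 1)
    return [(jax.random.normal(k, (n_out, n_in)) / jnp.sqrt(n_in), jnp.zeros(n_out))
            for k, n_in, n_out in zip(keys, sizes[:-1], sizes[1:])]

def translation_encoder(params, t, max_disp=0.6):
    # t: 2D target coordinate; output: two horizontal actuations,
    # tanh-clipped to 60% of cell width (cell width assumed to be 1.0)
    h = t
    for W, b in params[:-1]:
        h = jnp.tanh(W @ h + b)
    W, b = params[-1]
    return max_disp * jnp.tanh(W @ h + b)

params = init_mlp(jax.random.PRNGKey(0))
print(translation_encoder(params, jnp.array([0.1, -0.2])))
```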
In the absence of a domain-specific design, the allowed actuations limit the achievable motions to only left-right displacements. As demonstrated in Figure 3 (right), given a red goal point on the bottom right the best we can do with the unoptimized geometry is match the x-coordinate. If we do end-to-end NMA training, however, we converge on the shape in Figure 4. The learned geometry is able to achieve any translation within the blue square. The diagonal pore shapes enable translating compression of the material to downwards motion of the pointer, and tension to upwards motion. Here the joint learning of geometry and control offers a clear benefit: we converge to a nontrivial solution that discovers how to use its geometric nonlinearity with its NN controller.
To demonstrate transfer to the real world, we manufactured the resulting structure and qualitatively verified its behavior. As predicted by our simulation, the neuromechanical autoencoder can facilitate displacement of a central point upwards via tension, downwards via compression, and diagonally via partial compression. See Figure 5 for details.
The next task is another mechanical task: rotation. We would like to see how the NN and metamaterial can work together to learn to translate linear actuation into rotation of part of the structure. The task description is now the angle of rotation: t ∈ [−π, π]. We have two setups for this problem. In the first, we use a small fully-connected NN to map angle into a single actuation, applied equally on both sides of the metamaterial (Figure 6). We apply actuations on both sides to avoid translating the middle square in addition to rotation. In the second, the NN maps into two actuations, one applied left-right and one applied top-down (Figure 7). In both cases we consider a 7× 7 metamaterial. The goal is rotation of the blue stick counter-clockwise by an angle t around the center; the loss function is mean square error from the goal of the two points on the ends of the stick.
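A sketch of the rotation loss as described: the two stick endpoints are compared against their rest positions rotated counter-clockwise by the target angle t about the center. The endpoint coordinates in the usage line are illustrative.

```python
import jax.numpy as jnp

def rotation_loss(endpoints_deformed, endpoints_rest, t, center):
    c, s = jnp.cos(t), jnp.sin(t)
    R = jnp.array([[c, -s], [s, c]])                     # CCW rotation by angle t
    target = (endpoints_rest - center) @ R.T + center    # rotate rest endpoints about center
    return jnp.mean((endpoints_deformed - target) ** 2)

endpoints_rest = jnp.array([[-0.5, 0.0], [0.5, 0.0]])    # illustrative stick endpoints
center = jnp.zeros(2)
print(rotation_loss(endpoints_rest, endpoints_rest, jnp.pi / 6, center))
```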
In the first setup, we are able to achieve unidirectional rotation between [0, π/4] (Figure 6, right), while for the second setup, we can achieve bi-directional rotations between [−π/6, π/6] (Figure 7). Without geometry and control co-design, we would not be able to achieve rotation with linear actuation without an intuition-driven design, but the joint NMA training is able to make good progress.
3.2 SHAPE MATCHING
Next we consider a much higher-dimensional task space. Given a family of shapes parameterized by 2-dimensional coordinates, we would like to design a mechanical decoder and a neural encoder that can map coordinates to actuations deforming the structure to resemble a sampled shape from the family as closely as possible.
In particular, we consider a family generated by a log Gaussian process in polar coordinates, approximated via random Fourier features (Rahimi & Recht, 2007). Given a metamaterial with a large central pore, such as in Figure 8a, we would like to deform it to match any of the shapes in the
family. The task description t is an n × 2 dimensional array of coordinates defining the shape. A fully-connected neural network translates these to 12 actuations applied around the material (as shown in Figure 8a). The final loss function is an ℓ1 loss between the control points defining the middle pore after deformation and the points defining the shape. When comparing, we normalize the scale of both shapes, and perform Procrustes analysis for rotation invariance.

[Figure 9: panels comparing target shapes against the deformations produced by the jointly trained “NN + Geometry” model and the “NN only” baseline; axes span −2.0 to 2.0.]
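A compact sketch of the shape-matching loss described above: both point sets are scale-normalized, aligned with an orthogonal Procrustes rotation, and compared with an ℓ1 penalty. The exact normalization and whether reflections are excluded are our assumptions.

```python
import jax.numpy as jnp

def shape_match_loss(pred_pts, target_pts):
    # pred_pts, target_pts: (n, 2) arrays of boundary points
    def normalize(p):
        p = p - p.mean(axis=0)
        return p / (jnp.linalg.norm(p) + 1e-8)   # remove translation and scale
    P, T = normalize(pred_pts), normalize(target_pts)
    U, _, Vt = jnp.linalg.svd(P.T @ T)           # orthogonal Procrustes problem
    R = U @ Vt                                   # best rotation aligning P to T
    return jnp.mean(jnp.abs(P @ R - T))

pred = jnp.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
target = jnp.array([[0.7, 0.7], [-0.7, 0.7], [-0.7, -0.7], [0.7, -0.7]])
print(shape_match_loss(pred, target))
```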
We train one version where the geometry and neural network are optimized jointly, and one version where only the neural network is optimized for the starting geometry. Figure 8a shows the nonoptimized and optimized geometry. After learning, Figure 9 shows qualitative results of how well the jointly learned metamaterial compares with the control-only material. Jointly learning geometry for the shape family allows us to capture much finer features in the target shape. In Figure 8b
we visualize the (stochastic) loss during training. The jointly learned metamaterial converges to a significantly lower loss value, showing the benefit of harnessing the geometric nonlinearity.
3.3 DIGITAL MNIST
For our last task we attempt to create a mechanical seven-segment display for classifying MNIST digits. Towards this we add an additional design variable for the material: color. Our starting metamaterial is pictured in Figure 10a; it is initially assigned a color value of 0 everywhere. We treat color, parameterized by a B-spline patch over the metamaterial, as an additional geometric design parameter that can be optimized with NMA training.
Our input to the neural network is an image sampled from the MNIST dataset. The neural network then produces actuations that deform the metamaterial to produce a seven-segment representation of the MNIST digit when viewed through small slits. Figure 10b visualizes the learned color map, and Figure 10c shows the structure with slits added. The loss function is manually specified for each digit, e.g., if an MNIST digit has a label of “1” then the right two slits should contain color value 1.0, while the rest should contain 0.0. The full setup is displayed in Figure 11, with samples after training displayed in Figure 12. Although this can be learned from scratch end-to-end, to speed up training we first learned colors and actuations to be able to reproduce all 10 digits, and then trained
a small feed-forward neural network to match the actuations for each digit. We then set up the entire pipeline and finetuned end-to-end for better performance. We note that the 7 segments are controlled by only 6 actuations, so by restricting the family of objects displayed to the 10 digits, we allow the neural encoder and mechanical decoder to learn underactuated control of all 7 segments. Additional samples are presented in the Appendix. We also note that the pore shapes did not have to change significantly to accomplish this task. The only “geometry” design was through the coloring, which as visualized in Figure 10b turns out to be highly nontrivial.
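To illustrate the manually specified per-digit targets, here is one way they could be encoded; the segment ordering, the exact bit patterns, and the mean-squared penalty are our assumptions, since the paper only states that the targets are specified per digit.

```python
import jax.numpy as jnp

# segments ordered as [top, top-left, top-right, middle, bottom-left, bottom-right, bottom]
SEGMENTS = jnp.array([
    [1, 1, 1, 0, 1, 1, 1],  # 0
    [0, 0, 1, 0, 0, 1, 0],  # 1
    [1, 0, 1, 1, 1, 0, 1],  # 2
    [1, 0, 1, 1, 0, 1, 1],  # 3
    [0, 1, 1, 1, 0, 1, 0],  # 4
    [1, 1, 0, 1, 0, 1, 1],  # 5
    [1, 1, 0, 1, 1, 1, 1],  # 6
    [1, 0, 1, 0, 0, 1, 0],  # 7
    [1, 1, 1, 1, 1, 1, 1],  # 8
    [1, 1, 1, 1, 0, 1, 1],  # 9
], dtype=jnp.float32)

def digit_loss(slit_colors, label):
    # slit_colors: mean color value observed through each of the 7 slits
    target = SEGMENTS[label]
    return jnp.mean((slit_colors - target) ** 2)

print(digit_loss(jnp.zeros(7), 1))
```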
4 DISCUSSION AND RELATED WORK
Differentiable Simulation The abundance of differentiable simulators has demonstrated their usefulness in designing novel systems. Hu et al. (2019) developed a differentiable simulator with hand-written custom CUDA physics kernels that enabled material inference, control of a soft walker, and co-design of a swinging robot arm. Sanchez-Gonzalez et al. (2020) presents an ML framework to model a variety of physical domains to solve forward and inverse problems using a graph neural network approach. Mozaffar & Cao (2021) developed a differentiable finite element simulator to control and infer material parameters within the context of additive manufacturing processes. Liang et al. (2019) developed a differentiable cloth simulator, and Ham et al. (2019) automated the calculation of weak shape derivatives within the context of finite elements to solve PDE constrained shape optimization problems. In our work, our differentiable simulator is developed specifically to aid in neuromechanical autoencoder design.
Mechanical Metamaterials As the rational design of nonlinear mechanical materials is often unintuitive, modern machine learning approaches have enabled faster design. Deng et al. (2022) coupled a neural accelerated mass spring model that facilitated an evolutionary approach to design functional structures. Mao et al. (2020) applied generative adversarial networks to design unit cells for architected metamaterials; Kumar et al. (2020) introduce a novel class of anisotropic metamaterials and a machine learning method for the inverse design of their geometry given desired elasticity properties; Beatson et al. (2020) learned a reduced order model to speed up simulation of cellular metamaterials; Xue et al. (2020) introduced a homogenization approach for cellular metamaterials; and Xue & Mao (2022) introduced a mapped shape approach to design metamaterials to fit a prescribed strain energy curve. Our work uses classical gradient/adjoint methods to optimize the geometric parameters, but could be combined with machine learning methods to speed up simulation and hence enable faster NMA training.
4.1 LIMITATIONS AND FUTURE WORK
We introduce the framework of neuromechanical autoencoders, inspired by the biological coevolution of control and morphology. We present a method for automatic design of these systems, and show a number of results that produce nontrivial behavior through co-design, both in simulation and in the real world. We believe this is a small but significant step on the road to designing mechanically-intelligent systems. The two major bottlenecks in our approach are the runtime of PDE solving and geometry parameterization. Fast PDE solving, especially for nonlinear PDEs such as the ones we use, is a very active area of research, and is crucial to scaling up NMA design. In terms of geometry parameterization, the key is to find a space of materials that have a complex range of mechanical deformation properties and yet are easy to simulate. For this paper, 2D cellular solids with nonuniform pore shapes were a great ansatz, but future work could understand and quantify how much “computation” these materials can do. Scaling up to 3-dimensional intelligent mechanical models, as well as including dynamics, would significantly improve the computation capabilities, but would require much faster solvers. This is a key focus in our further work.
4.2 ACKNOWLEDGEMENTS
We would like to thank Alex Beatson, Geoffrey Roeder, Jordan Ash, and Tianju Xue for early conversations around this work. We also thank PT Brun for assisting with fabrication. This work was partially supported by NSF grants IIS-2007278 and OAC-2118201, the NSF under grant number 2127309 to the Computing Research Association for the CIFellows 2021 Project, and a Siemens PhD fellowship.
A APPENDIX
A.1 DISCRETIZATION AND IGA
We discretize the geometric domain into P isogeometric patches. All patches use the same B-spline basis functions B_ij(ξ, η) (Piegl & Tiller, 1997) with i, j ∈ [n_p] and ξ, η ∈ [0, 1], where n_p is the number of control points. The domain of ξ and η corresponds to the parent domain of each patch (Figure 13). In the B-spline literature, the parent domain is often referred to as the knot span. The basis functions are piecewise polynomial with degree specified as a parameter. In all of our experiments, we use piecewise quadratic B-spline functions. Each B-spline basis function corresponds to a control point, and each control point represents two degrees of freedom in 2D space, i.e., each patch p ∈ [P] has n_p × n_p × 2 degrees of freedom ($x^{ij}_p$ and $y^{ij}_p$). The mapping from the parent domain of each patch to the physical domain is given by a linear combination of B-spline basis functions, where the weights of the linear combination are given by the control point coordinates. Explicitly, the mapping function ϕ_p from parent space of patch p to physical space is given by
$$\phi_p(\xi, \eta;\, \mathbf{x}, \mathbf{y}) = \sum_{i=1}^{n_p} \sum_{j=1}^{n_p} \begin{bmatrix} x^{ij}_p \\ y^{ij}_p \end{bmatrix} B_{ij}(\xi, \eta), \tag{1}$$
where $[x^{ij}_p, y^{ij}_p]$ are the control points parameterizing the mapping. In our simulator, we represent the reference configuration with reference control points $[X^{ij}_p, Y^{ij}_p]$; these are determined by our geometry parameters {r, c} through a differentiable map. We then parameterize the deformed geometry with the same basis, using deformed control points $[x^{ij}_p, y^{ij}_p]$. For a given deformation, the integral in Eq 3 representing the potential energy can be computed by a pullback in the parent domain, using standard Gaussian quadrature as is standard in FEM (Hughes et al., 2005).
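The following is a small, self-contained sketch of evaluating the patch mapping in Eq. (1) with a Cox–de Boor evaluation of the tensor-product quadratic B-spline basis; the open uniform knot vector and the toy square control net are illustrative (the paper's simulator evaluates the basis at fixed quadrature points).

```python
import jax.numpy as jnp

def bspline_basis(t, knots, degree, i):
    # Cox-de Boor recursion for the i-th B-spline basis function of the given degree
    if degree == 0:
        return jnp.where((knots[i] <= t) & (t < knots[i + 1]), 1.0, 0.0)
    def safe_div(num, den):
        return jnp.where(den > 0, num / jnp.where(den > 0, den, 1.0), 0.0)
    left = safe_div(t - knots[i], knots[i + degree] - knots[i])
    right = safe_div(knots[i + degree + 1] - t, knots[i + degree + 1] - knots[i + 1])
    return (left * bspline_basis(t, knots, degree - 1, i)
            + right * bspline_basis(t, knots, degree - 1, i + 1))

def patch_map(xi, eta, ctrl_x, ctrl_y, knots, degree=2):
    # Eq. (1): map parent coordinates (xi, eta) to physical coordinates
    n = ctrl_x.shape[0]
    Bx = jnp.stack([bspline_basis(xi, knots, degree, i) for i in range(n)])
    By = jnp.stack([bspline_basis(eta, knots, degree, j) for j in range(n)])
    W = jnp.outer(Bx, By)                       # B_ij(xi, eta) = B_i(xi) * B_j(eta)
    return jnp.array([jnp.sum(W * ctrl_x), jnp.sum(W * ctrl_y)])

# open uniform knot vector for 5 control points of degree 2 (5x5 control net, Section A.5)
knots = jnp.array([0.0, 0.0, 0.0, 1/3, 2/3, 1.0, 1.0, 1.0])
grid = jnp.linspace(0.0, 1.0, 5)
ctrl_x, ctrl_y = jnp.meshgrid(grid, grid, indexing="ij")   # toy square control net
print(patch_map(0.4, 0.7, ctrl_x, ctrl_y, knots))
```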
Dirichlet boundary conditions of the type we use in this paper can be represented as constraints on a subset of the control points. For each boundary condition, the corresponding reference and deformed control points are prescribed to have particular displacement values. Furthermore, since our geometry is decomposed into multiple neighboring patches, the control points must also have incidence constraints amongst them. These are kept track of using constraint groups, where each group has a representative element.
A.2 MECHANICAL MODEL
In the static equilibrium problems we consider, the solution is a displacement that minimizes some energy function. In particular, we use a nearly incompressible Neo-Hookean material model (Ogden,
1997), in which the elastic properties are captured by a hyperelastic strain energy density function. This function, $W(\mathbf{F})$, $W : \mathbb{R}^{2\times 2} \to \mathbb{R}$, is independent of the path of deformation and is a function of the deformation gradient tensor, $F_{ij} = \partial u_i / \partial X_j + I_{ij}$, where $X \in D \subset \mathbb{R}^2$ represents the position of a particle in the undeformed reference configuration, and $u(X)$ is the displacement field. Here,
$$W(\mathbf{F}) = \frac{\mu}{2}\left(I_1 - 2 - 2\log J\right) + \frac{\kappa}{2}\left(\log J\right)^2, \tag{2}$$
where $J = \det(\mathbf{F})$, $I_1 = \mathrm{tr}(\mathbf{F}^{T}\mathbf{F})$, and $\mu = E/(2(1 + \nu))$ and $\kappa = E/(3(1 - 2\nu))$ are the shear and bulk moduli of a material with Young's modulus E and Poisson's ratio ν, respectively. This is a standard choice for hyperelastic material modeling that transfers well to the real world. We can solve for the displacement by finding the stationary point of the potential energy functional Ψ(u),
$$u^* = \operatorname*{argmin}_{u \in H} \Psi(u), \qquad \Psi(u) = \int_D W(\mathbf{F})\, dX = \int_D W\!\left(\left.\frac{\partial u}{\partial X}\right|_{X = X'} + \mathbf{I}\right) dX', \tag{3}$$
where H is the set of all displacement fields that satisfy prescribed Dirichlet boundary conditions (expressed as equality constraints on the displacement field). To solve this in practice, we discretize u and define a standard representation of the geometry. Abstractly, the solution u∗ can be regarded as an implicit function of the design parameters and boundary conditions. The solution u∗ can be computed in a discretized form using standard second-order optimization algorithms, and gradients can be computed using implicit differentiation.
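As a concrete rendering of Eq. (2), here is a short sketch of the strain energy density; the material constants E and ν are illustrative values, not the ones used in the paper.

```python
import jax.numpy as jnp

def neo_hookean_energy_density(F, E=1.0, nu=0.48):
    # Eq. (2): nearly incompressible Neo-Hookean strain energy density in 2D
    mu = E / (2.0 * (1.0 + nu))           # shear modulus
    kappa = E / (3.0 * (1.0 - 2.0 * nu))  # bulk modulus
    J = jnp.linalg.det(F)
    I1 = jnp.trace(F.T @ F)
    return 0.5 * mu * (I1 - 2.0 - 2.0 * jnp.log(J)) + 0.5 * kappa * jnp.log(J) ** 2

F = jnp.eye(2) + 0.05 * jnp.array([[0.0, 1.0], [0.0, 0.0]])  # small shear deformation
print(neo_hookean_energy_density(F))
```

In the full model, Ψ(u) is obtained by integrating this density over the reference domain with Gaussian quadrature (Section A.1) and minimizing over the discretized displacement.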
A.3 END-TO-END DIFFERENTIABILITY
We would like to reiterate that our differentiability conditions are satisfied, so that we can train neuromechanical autoencoders end-to-end. We first define a differentiable map from geometry parameters and Dirichlet boundary condition values into the reference B-spline control points. This is then used to construct the Πl,Πg functions, both of which are differentiable. The global vector q is then passed into a black-box optimizer to produce the solution q∗. Gradients with respect to the solution of the optimizer are computed using adjoint optimization. The solution q∗ is then mapped back into local coordinates using Πl, and is used to compute the loss function L. This pipeline ensures that we have a differentiable map from geometry parameters and boundary conditions (actuations) to the NMA task loss function, so we can proceed to train the NMA objective using stochastic gradient descent. In the next section, we demonstrate specific applications.
A.4 SOLVER DETAILS
After discretization to q, we solve the energy minimization using Newton’s method with incremental loading. The Hessian of the energy is assembled in sparse form using the trick from Powell & Toint (1979). Using the discretization of the system we automatically derive the sparsity pattern of the Hessian, and then construct appropriate binary vectors to perform Hessian-vector products with. We then reshape these into a CSR matrix representation of the Hessian.
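A simplified, dense illustration of assembling the Hessian from Hessian-vector products with probe vectors follows; the actual implementation uses the known sparsity pattern to group probes (the coloring trick referenced from Powell & Toint (1979)) and stores the result in CSR form, which this sketch omits.

```python
import jax
import jax.numpy as jnp

def hvp(f, q, v):
    # Hessian-vector product of the scalar energy f at q with direction v
    return jax.jvp(jax.grad(f), (q,), (v,))[1]

def dense_hessian_by_probes(f, q):
    # one probe per coordinate; a coloring scheme would merge non-interacting columns
    probes = jnp.eye(q.shape[0])
    return jax.vmap(lambda v: hvp(f, q, v))(probes).T

energy = lambda q: 0.5 * jnp.sum(q ** 2) + jnp.sum(q[:-1] * q[1:])  # toy coupled energy
q = jnp.linspace(0.0, 1.0, 6)
print(dense_hessian_by_probes(energy, q))
```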
The sparse linear systems are then solved by GMRES (Saad & Schultz, 1986) preconditioned by an incomplete LU decomposition. Since the energy involves log detF , where F is the deformation gradient, taking a finite step can lead to numerical blowup. Therefore our incremental loading is adaptive, and a line search is performed to avoid inversion of elements in the geometry.
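A sketch of the preconditioned linear solve inside a single Newton step, written against SciPy's sparse interfaces for readability (the paper's solver is implemented in JAX); `grad_energy` and `hessian_energy` are placeholders for the assembled residual and sparse Hessian, and the toy quadratic usage at the bottom is only to make the snippet runnable.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def newton_step(q, grad_energy, hessian_energy):
    r = grad_energy(q)                       # residual: gradient of the energy
    H = sp.csr_matrix(hessian_energy(q))     # sparse Hessian in CSR form
    ilu = spla.spilu(H.tocsc())              # incomplete LU preconditioner
    M = spla.LinearOperator(H.shape, ilu.solve)
    dq, info = spla.gmres(H, -r, M=M)
    if info != 0:
        raise RuntimeError("GMRES did not converge")
    return q + dq                            # incremental loading / line search omitted

# toy quadratic energy 0.5 q^T A q - b^T q, so one Newton step solves it exactly
A = sp.diags([2.0, -1.0, -1.0], [0, -1, 1], shape=(50, 50), format="csr")
b = np.ones(50)
q = newton_step(np.zeros(50), lambda q: A @ q - b, lambda q: A)
```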
A.5 EXPERIMENTAL DETAILS
All B-spline patches used were quadratic and contained 5× 5 control points. Quadrature was done by degree 5 Gauss-Legendre. Most computation was done on NVIDIA RTX 2080 GPUs. Each solve instance was done on a single GPU, and mini-batching was done by parallelizing with MPI. Each MPI task used a single GPU. The radii parameters were clipped to [0.1, 0.9] to aid solver stability.
A.5.1 TRANSLATION TASK
The learning rate was 0.0001∗M where M is the number of MPI tasks. In this case, we used 8 MPI tasks. The neural network was a fully-connected network with activation sizes: 2−30−30−10−2 (including input/output). The final layer was clipped by a tanh and multiplied by a maximum displacement of 60% of cell width.
A.5.2 ROTATION TASK
The learning rate was 0.001 ∗M for single and 0.01 ∗M for double rotation. M is the number of MPI tasks. In this case, we used 8 MPI tasks. The neural network was a fully-connected network with activation sizes: 1 − 20 − 10 − 1 (including input/output) or 1 − 20 − 10 − 2 for the double actuation. The final layer was clipped by a tanh and multiplied by a maximum displacement of 60% of cell width.
A.5.3 SHAPE MATCHING TASK
The learning rate was 0.0001∗M where M is the number of MPI tasks. In this case, we used 16 MPI tasks. The neural network was a fully-connected network with activation sizes: 98−200−200−12 (including input/output). The final layer was clipped by a tanh and multiplied by a maximum displacement of 60% of cell width.
A.6 DIGITAL MNIST TASK
Initially we learned colors and actuations for a lookup table of 10 digits. This was trained with a learning rate of 0.01 ∗ M where M is the number of MPI tasks. We used 10 MPI tasks, one per digit. We clipped the maximum displacement to 60% of cell width using tanh. Afterwards, we trained a fully-connected neural network to map MNIST digits to the actuations of the corresponding digit. We then connected this neural network so that it maps directly to actuations, and fine-tuned end-to-end with a learning rate of 0.0001.
A.7 ADDITIONAL PORE MATCHING RESULTS
A.8 ADDITIONAL DIGITAL MNIST RESULTS | 1. What is the focus and contribution of the paper on neuromechanical autoencoders?
2. What are the strengths of the proposed approach, particularly in terms of its technical soundness and potential applications?
3. What are the weaknesses of the paper, especially regarding the readability of certain methodology sections?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
The authors introduce neuromechanical autoencoders (NMA) -- a neural encoder combined with a differentiable mechanical decoder -- to learn to perform morphological computation using gradient descent. The authors show how it is feasible for the decoder to perform translation, rotation, and matching of shapes specified to the encoder, via actuations output by the encoder. They also demonstrate the performance of NMA on a material they manufactured to verify how the model works in the real world.
Strengths And Weaknesses
Strengths:
The paper presents a highly novel and technically sound integration of neural network-based encoders with differentiable mechanical decoders. This combination potentially has several applications in the real-world such as robotics, material design, 3D printing, etc.
The digital MNIST task that the authors develop -- in which the neural encoder receives an RGB image of a digit as input and the mechanical decoder produces deformations to the metamaterial that classifies the digit -- is very intriguing and a great demonstration of learning (with gradient descent) a solution to the common and accessible MNIST task through morphological computation.
The authors share their code as part of the submission, which is helpful in more thoroughly understanding the proposed work.
Weaknesses:
I found it hard to grasp Methods subsections 2.3 and 2.5. I am not an expert in this domain and I'm not surprised that I find it hard to read these sections, but it would be great if the authors could please see if they can further improve readability of these sections for the broader audience.
Clarity, Quality, Novelty And Reproducibility
I believe that the presented work is technically sound and makes highly novel contributions with potential applications in interdisciplinary areas. The work has been written clearly, yet, I think it will be hard for the broader audience to grasp some important parts of the paper as mentioned in my review. In terms of reproducibility, the authors have shared their code used in the paper to reproduce results. |
ICLR | Title
Neuromechanical Autoencoders: Learning to Couple Elastic and Neural Network Nonlinearity
Abstract
Intelligent biological systems are characterized by their embodiment in a complex environment and the intimate interplay between their nervous systems and the nonlinear mechanical properties of their bodies. This coordination, in which the dynamics of the motor system co-evolved to reduce the computational burden on the brain, is referred to as “mechanical intelligence” or “morphological computation”. In this work, we seek to develop machine learning analogs of this process, in which we jointly learn the morphology of complex nonlinear elastic solids along with a deep neural network to control it. By using a specialized differentiable simulator of elastic mechanics coupled to conventional deep learning architectures— which we refer to as neuromechanical autoencoders—we are able to learn to perform morphological computation via gradient descent. Key to our approach is the use of mechanical metamaterials—cellular solids, in particular—as the morphological substrate. Just as deep neural networks provide flexible and massivelyparametric function approximators for perceptual and control tasks, cellular solid metamaterials are promising as a rich and learnable space for approximating a variety of actuation tasks. In this work we take advantage of these complementary computational concepts to co-design materials and neural network controls to achieve nonintuitive mechanical behavior. We demonstrate in simulation how it is possible to achieve translation, rotation, and shape matching, as well as a “digital MNIST” task. We additionally manufacture and evaluate one of the designs to verify its real-world behavior.
1 INTRODUCTION
Mechanical intelligence, or morphological computation (Paul, 2006; Hauser et al., 2011), is the idea that the physical dynamics of an actuator may interact with a control system to effectively reduce the computational burden of solving the control task. Biological systems perform morphological computation in a variety of ways, from the compliance of digits in primate grasping (Jeannerod, 2009; Heinemann et al., 2015), to the natural frequencies of legged locomotion (Collins et al., 2005; Holmes et al., 2006; Ting & McKay, 2007), to dead fish being able to “swim” in vortices (Beal et al., 2006; Lauder et al., 2007; Eldredge & Pisani, 2008). Both early (Sims, 1994) and modern (Gupta et al., 2021) work have used artificial evolutionary methods to design mechanical intelligence, but it has remained difficult to design systems de novo that are comparable to biological systems that have evolved over millions of years. We ask:
Can we instead learn morphological computation using gradient descent?
Morphological computation requires that a physical system be capable of performing complex tasks using, e.g., elastic deformation. The mechanical system’s nonlinear properties work in tandem with neural information processing so that challenging motor tasks require less computation. To learn an artificial mechanically-intelligent system, we must therefore be able to parameterize a rich space of mechanisms with the capability of implementing nonlinear physical “functions” that connect input forces or displacements to the desired output behaviors. There are various desiderata for such a
mechanical design space: 1) it must contain a wide variety of structures with complex nonlinear elastic deformation patterns; 2) its parameters should be differentiable and of fixed cardinality; and 3) the designs should be easily realizable with standard manufacturing techniques and materials. These characteristics are achieved by mechanical metamaterials.
Metamaterials are structured materials that have properties unavailable from natural materials. Although metamaterials are often discussed in the context of electromagnetic phenomena, there is substantial interest in the development of mechanical metamaterials in which geometric heterogeneity achieves unusual macroscopic behavior such as a negative Poisson’s ratio (Bertoldi et al., 2010). In biological systems, morphological computation often takes the form of sophisticated nonlinear compliance and deformation, resulting in a physical system that is more robust and easier to control for a variety of tasks (Paul, 2006; Hauser et al., 2011), This type of behavior is typically not present in off-the-shelf robotic systems and is difficult to design a priori. Mechanical metamaterials, on the other hand, offer a platform for mechanically-intelligent systems using relatively accessible manufacturing techniques, such as 3-D printing.
The mechanical metamaterials we explore in this paper are cellular solids: porous structures where different patterns of macroscopic pores can lead to different nonlinear deformation behaviors. By constructing a solid with a large number of such pores, and then parameterizing the pore shapes nonuniformly across the solid, it is possible to achieve a large design space of nonlinear mechanical structures while nevertheless having a differentiable representation of fixed cardinality. The key to modern machine learning has been the development of massively-parametric composable function approximators in the form of deep neural networks; cellular solids provide a natural physical analog and—as we show in this work—can also be learned with automatic differentiation.
To make progress towards the goal of learnable morphological computation, in this paper we combine metamaterials with deep neural networks into a framework we refer to as a neuromechanical autoencoder (NMA). While traditional mechanical metamaterials are designed for single tasks and actuations, here we propose designs that can solve problems drawn from a distribution over tasks, using a neural network to determine the appropriate actuations. The neural network “encoder” consumes a representation of the task—in this case, achieving a particular deformation—and nonlinearly transforms this into a set of linear actuations which play the role of the latent encoding. These actuations then displace the boundaries of the mechanical metamaterial inducing another nonlinear transformation due to the complex learned geometry of the pores; the resulting deformation corresponds to the “decoder”. By using a differentiable simulator of cellular solids we are able to learn in an end-to-end way both the neural network parameters and the pore shapes so that they can work in tandem. The resulting system exhibits morphological computation in that it learns to split the processing task across the neural network and the physical mechanism.
The paper is structured as follows. We first introduce the abstract setup for the neuromechanical autoencoder, followed by a brief description of our mechanics model, geometry representation, and differentiable simulation. Although important for the success of our method, the details of our dis-
cretization and solver for computational nonlinear elasticity problems are in the appendix. We then describe and detail results of our experiments, which include mechanical tasks, a shape matching experiment, and a new mechanical twist on MNIST classification. We end with related work and a discussion on future steps.
2 METHODS
2.1 NEUROMECHANICAL AUTOENCODER SETUP
We describe the overall setup as pictured in Figure 1. We begin by considering a bounded domain D ⊂ R2 (often square) on which our material exists (in this work we only consider the actuation of 2D geometries). When actuations are applied on the material, its deformation can be described by a displacement field u : D → R2, where u(X) represents the displacement of the particle originally at coordinate X ∈ D. For simplicity, assume u can be discretized and identified by a finite-dimensional vector q ∈ RN . The exact form of the discretization is based on a finite element method variant and is described in the appendix in Section A.1.
Next we specify a distribution of tasks T and an associated loss function L(q; ti) : RN × Rm → R, which depends on a task descriptor ti ∼ T , ti ∈ Rm, and a displacement field specified by q. The loss function often only looks at the deformation of a subset of the material, such as the displacement of a single point, but we are not restricted to this. The task descriptor is meant to be generic: it can be a coordinate, an image, a scalar parameter, etc.
To map task descriptors to displacements, we use a neural encoder Eθ : Rm → Rk and a mechanical decoder Dϕ : Rk → RN . The output of the encoder at ti is understood to be the latent dimension of the autoencoder, and represents the actuations to the mechanical structure. The goal is to choose parameters {θ, ϕ} to minimize the loss over the distribution of tasks:
θ∗, ϕ∗ = argmin θ,ϕ Et∼T [L(Dϕ(Eθ(t)); t)] .
Given ∇ϕL(Dϕ(Eθ(t)); t) and ∇θL(Dϕ(Eθ(t)); t) for t ∼ T , we can optimize the objective with standard first-order stochastic gradient methods. One difficulty is that Dϕ(·) is an implicit function of its inputs, computed by solving a partial differential equation (PDE). Furthermore, ϕ represents geometric parameters defining the domain on which the PDE is solved. To effectively compute derivatives of Dϕ, we developed a JAX-based (Bradbury et al., 2018) differentiable elasticity simulator, as described in the next section.
2.2 DIFFERENTIABLE SIMULATION
We developed a custom solver for static nonlinear elasticity problems which model the equilibrium of elastic materials under load. The goal is to have a robust and end-to-end differentiable simulator for 2D neuromechanical autoencoders based on mechanical metamaterials. Given geometric design parameters, our solver simulates the structure described by the parameters and computes the gradient (adjoint) with respect to both geometric design parameters and boundary conditions (actuations).
In order to make the solver differentiable with respect to geometric parameters, we implement a version of isogeometric analysis (IGA) (Hughes et al., 2005), a finite element method (FEM) (Hughes, 2012) variant where both the underlying solution and geometry basis are based on B-splines. Using B-spline patches allows us to parameterize our geometry in a flexible and yet robust way while maintaining a differentiable map from geometry parameters to PDE solution.
As our simulator is implemented entirely in JAX, we backpropagate gradients directly through both the simulator and a neural network using automatic differentiation and adjoint methods in tandem. In the next sections, we describe the relevant physics and the geometric representation we used.
2.3 MECHANICAL MODEL
We give a high level description of the mechanical model here, and detail it further in the appendix. In the static equilibrium problems we consider, the solution is a displacement that minimizes some energy. The elastic properties are captured by a hyperelastic strain energy density function, which depends on the local deformation of the material and is independent of the path of deformation. For a given deformation, the potential energy functional Ψ(u) is the integral of the strain energy density over the material domain D. Given boundary conditions, the resulting physical deformation u : D → R2 is one that minimizes Ψ(u) subject to boundary conditions:
u∗ = argmin u∈H Ψ(u)
where H is the set of all displacement fields that satisfy prescribed Dirichlet boundary conditions (expressed as equality constraints on the displacement field). To solve this in practice, we discretize u and define a standard representation of the geometry. NMA training is bi-level, where in the inner loop we perform the energy minimization using second-order methods. In the outer loop, the solution u∗ can be regarded as an implicit function of the design parameters and boundary conditions, and gradients with respect to these can be computed using implicit differentiation.
2.4 GEOMETRY REPRESENTATION
The central unit of the metamaterials we design is the cell, a porous shape with a quadrilateral boundary. We initialize the geometry to a regular grid of square cells with simple square pore shapes, similar to that in Figure 2b. During training of the neuromechanical autoencoder, we modify this geometry to minimize the expected loss over a distribution of tasks.
To represent the geometry, we decompose the domain into B-spline patches, each with its own B-spline control points. Each cell is generally composed of four patches; we visualize the decomposition of a representative cell in Figure 2a. The shape of the cell pore is defined by radii (illustrated by ri), whose values specify relative distance of the pore edge from the centroid of the cell (e.g., a cell having radii all 0.0 corresponds to a completely closed cell). We combine all the radii in all cells into a radii array r ∈ [0, 1]R, which becomes one of our geometric parameters. For further flexibility, we also allow the shapes of the cells to change within a grid of cells. The corners of the cells, labeled ci in Figure 2b, are allowed to deviate within a specific box around its values in the initial square lattice-like geometry. Figure 2a shows a cell that its corners perturbed during training. The deviation bound ensures that the shapes do not degenerate during NMA training. The array of corner locations c and radii r comprise our geometric parameters. The outer boundary of the structure is constrained not to change during NMA optimization, as this would otherwise create inconsistent boundary conditions between designs.
2.5 DISCRETIZATION AND END-TO-END DIFFERENTIABILITY
The details of our discretization are in the Appendix. We mention two important notes here. The first is that careful selection of geometric parameters is critical to being able to differentiate with respect to them. In particular, given the geometric parameters we can construct a differentiable map to the B-spline control points representing the geometry of the model. The analogy in standard FEM would be that our “meshing” operation is fully differentiable. Part of the reason differentiability is always satisfied is that the cardinality and topology of the control points remain the same given any valid setting of geometric parameters. Another important note is that all of our geometric parameters, {r, c}, are only constrained by simple box constraints, so that first order constrained optimization with them is straightforward. This parameterization is robust in the sense that for any value of the geometric parameters within the box constraints, we have a valid geometry.
3 EXPERIMENTS
3.1 TRANSLATION AND ROTATION
The first task we tackle is how to perform translation given a limited degree of control. Consider the setup in Figure 3. We have a 5× 5 cellular solid fixed on the corners. The goal is to be able to move the green pointer in the middle of the solid to anywhere within the blue square given only horizontal displacements of the edges. The space of tasks is a small box B, and the task is defined by a single coordinate; the task descriptor t ∈ R2 is sampled uniformly from B. The task descriptor is mapped to two horizontal actuations by a simple fully-connected neural network (NN). The loss function is mean squared error of the green pointer after deformation from the goal.
In the absence of a domain-specific design, the allowed actuations limit the achievable motions to only left-right displacements. As demonstrated in Figure 3 (right), given a red goal point on the bottom right the best we can do with the unoptimized geometry is match the x-coordinate. If we do end-to-end NMA training, however, we converge on the shape in Figure 4. The learned geometry is able to achieve any translation within the blue square. The diagonal pore shapes enable translating compression of the material to downwards motion of the pointer, and tension to upwards motion. Here the joint learning of geometry and control offers a clear benefit: we converge to a nontrivial solution that discovers how to use its geometric nonlinearity with its NN controller.
To demonstrate transfer to the real world, we manufactured the resulting structure and qualitatively verified its behavior. As predicted by our simulation the neuromechanical autoencoder can facilitate displacement of a central point in the upwards via tension, downwards via compression, and diagonally via partial compression. See Figure 5 for details.
The next task is another mechanical task: rotation. We would like to see how the NN and metamaterial can work together to learn to translate linear actuation into rotation of part of the structure. The task description is now the angle of rotation: t ∈ [−π, π]. We have two setups for this problem. In the first, we use a small fully-connected NN to map angle into a single actuation, applied equally on both sides of the metamaterial (Figure 6). We apply actuations on both sides to avoid translating the middle square in addition to rotation. In the second, the NN maps into two actuations, one applied left-right and one applied top-down (Figure 7). In both cases we consider a 7× 7 metamaterial. The goal is rotation of the blue stick counter-clockwise by an angle t around the center; the loss function is mean square error from the goal of the two points on the ends of the stick.
In the first setup, we are able to achieve unidirectional rotation between [0, π/4] (Figure 6, right), while for the second setup, we can achieve bi-directional rotations between [−π/6, π/6] (Figure 7). Without geometry and control co-design, we would not be able to achieve rotation with linear actuation without an intuition-driven design, but the joint NMA training is able to make good progress.
3.2 SHAPE MATCHING
Next we consider a much higher-dimensional task space. Given a family of shapes parameterized by 2-dimensional coordinates, we would like to design a mechanical decoder and a neural encoder that can map coordinates to actuations deforming the structure to resemble a sampled shape from the family as closely as possible.
In particular, we consider a family generated by a log Gaussian process in polar coordinates, approximated via random Fourier features (Rahimi & Recht, 2007). Given a metamaterial with a large central pore, such as in Figure 8a, we would like to deform it to match any of the shapes in the
2.0 1.5 1.0 0.5 0.0 0.5 1.0 1.5 2.0 2.0
1.5
1.0
0.5
0.0
0.5
1.0
1.5
2.0
Target NN + Geometry
2.0 1.5 1.0 0.5 0.0 0.5 1.0 1.5 2.0 2.0
1.5
1.0
0.5
0.0
0.5
1.0
1.5
2.0
Target NN only
2.0 1.5 1.0 0.5 0.0 0.5 1.0 1.5 2.0 2.0
1.5
1.0
0.5
0.0
0.5
1.0
1.5
2.0
Target NN + Geometry
2.0 1.5 1.0 0.5 0.0 0.5 1.0 1.5 2.0 2.0
1.5
1.0
0.5
0.0
0.5
1.0
1.5
2.0
Target NN only
2.0 1.5 1.0 0.5 0.0 0.5 1.0 1.5 2.0 2.0
1.5
1.0
0.5
0.0
0.5
1.0
1.5
2.0
Target NN + Geometry
2.0 1.5 1.0 0.5 0.0 0.5 1.0 1.5 2.0 2.0
1.5
1.0
0.5
0.0
0.5
1.0
1.5
2.0
Target NN only
2.0 1.5 1.0 0.5 0.0 0.5 1.0 1.5 2.0 2.0
1.5
1.0
1.0
1.5
2.0
Target NN + Geometry
2.0 1.5 1.0 0.5 0.0 0.5 1.0 1.5 2.0 2.0
1.5
1.0
1.0
1.5
2.0
Target NN only
2.0 1.5 1.0 0.5 0.0 0.5 1.0 1.5 2.0 2.0
1.5
1.0
0.5
0.0
0.5
1.0
1.5
2.0
Target NN + Geometry
2.0 1.5 1.0 0.5 0.0 0.5 1.0 1.5 2.0 2.0
1.5
1.0
0.5
0.0
0.5
1.0
1.5
2.0
Target NN only
family. The task description t is an n × 2 dimensional array of coordinates defining the shape. A fully-connected neural network translates these to 12 actuations applied around the material (as shown in Figure 8a). The final loss function is an ℓ1 loss between the control points defining the middle pore after deformation and the points defining the shape. When comparing, we normalize the scale of both shapes, and perform Procrustes analysis for rotation invariance.
We train one version where the geometry and neural network are optimized jointly, and one version where only the neural network is optimized for the starting geometry. Figure 8a shows the nonoptimized and optimized geometry. After learning, Figure 9 shows qualitative results of how well the jointly learned metamaterial compares with the control-only material. Jointly learning geometry for the shape family allows us to capture much finer features in the target shape. In Figure 8b
we visualize the (stochastic) loss during training. The jointly learned metamaterial converges to a significantly lower loss value, showing the benefit of harnessing the geometric nonlinearity.
3.3 DIGITAL MNIST
For our last task we attempt to create a mechanical seven-segment display for classifying MNIST digits. Towards this we add an additional design variable for the material: color. Our starting metamaterial is pictured in Figure 10a, a version of metamaterial that is originally assigned a color value 0 everywhere. We treat color, parameterized by a B-spline patch over the metamaterial, as an additional geometric design parameter that can be optimized with NMA training.
Our input to the neural network is an image sampled from the MNIST dataset. The neural network then produces actuations that deform the metamaterial to produce a seven-segment representation of the MNIST digit when viewed through small slits. Figure 10b visualizes the learned colop map, and Figure 10c shows the structure with slits added. The loss function is manually specified for each digit, e.g. if an MNIST digit has a label of “1” then the right two slits should contain color value 1.0, while the rest should contain 0.0. The full setup is displayed in Figure 11, with samples after training displayed in Figure 12. Although this can be learned from scratch end-to-end, to speed up training we first learned colors and actuations to be able to reproduce all 10 digits, and then trained
a small feed-forward neural network to match the actuations for each digit. We then set up the entire pipeline and finetuned end-to-end for better performance. We note that the 7 segments are controlled by only 6 actuations, so by restricting the family of objects displayed to the 10 digits, we allow the neural encoder and mechanical decoder to learn underactuated control of all 7 segments. Additional samples are presented in the Appendix. We also note that the pore shapes did not have to change significantly to accomplish this task. The only “geometry” design was through the coloring, which as visualized in Figure 10b turns out to be highly nontrivial.
4 DISCUSSION AND RELATED WORK
Differentiable Simulation The abundance of differentiable simulators has demonstrated their usefulness in designing novel systems. Hu et al. (2019) developed a differentiable simulator with hand-written custom CUDA physics kernels that enabled material inference, control of a soft walker, and co-design of a swinging robot arm. Sanchez-Gonzalez et al. (2020) presents an ML framework to model a variety of physical domains to solve forward and inverse problems using a graph neural network approach. Mozaffar & Cao (2021) developed a differentiable finite element simulator to control and infer material parameters within the context of additive manufacturing processes. Liang et al. (2019) developed a differentiable cloth simulator, and Ham et al. (2019) automated the calculation of weak shape derivatives within the context of finite elements to solve PDE constrained shape optimization problems. In our work, our differentiable simulator is developed specifically to aid in neuromechanical autoencoder design.
Mechanical Metamaterials As the rational design of nonlinear mechanical materials is often unintuitive, modern machine learning approaches have enabled faster design. Deng et al. (2022) coupled a neural-accelerated mass-spring model with an evolutionary approach to design functional structures. Mao et al. (2020) applied generative adversarial networks to design unit cells for architected metamaterials, Kumar et al. (2020) introduced a novel class of anisotropic metamaterials and a machine learning method for the inverse design of their geometry given desired elasticity properties, Beatson et al. (2020) learned a reduced order model to speed up simulation of cellular metamaterials, Xue et al. (2020) introduced a homogenization approach for cellular metamaterials, and Xue & Mao (2022) introduced a mapped shape approach to design metamaterials to fit a prescribed strain energy curve. Our work uses classical gradient/adjoint methods to optimize the geometric parameters, but could be combined with machine learning methods to speed up simulation and hence enable faster NMA training.
4.1 LIMITATIONS AND FUTURE WORK
We introduce the framework of neuromechanical autoencoders, inspired by the biological coevolution of control and morphology. We present a method for automatic design of these systems, and show a number of results that produce nontrivial behavior through co-design, both in simulation and in the real world. We believe this is a small but significant step on the road to designing mechanically-intelligent systems. The two major bottlenecks in our approach are the runtime of PDE solving and geometry parameterization. Fast PDE solving, especially for nonlinear PDEs such as the ones we use, is a very active area of research, and is crucial to scaling up NMA design. In terms of geometry parameterization, the key is to find a space of materials that have a complex range of mechanical deformation properties and yet are easy to simulate. For this paper, 2D cellular solids with nonuniform pore shapes were a great ansatz, but future work could understand and quantify how much “computation” these materials can do. Scaling up to 3-dimensional intelligent mechanical models, as well as including dynamics, would significantly improve the computational capabilities, but would require much faster solvers. This is a key focus of our future work.
4.2 ACKNOWLEDGEMENTS
We would like to thank Alex Beatson, Geoffrey Roeder, Jordan Ash, and Tianju Xue for early conversations around this work. We also thank PT Brun for assisting with fabrication. This work was partially supported by NSF grants IIS-2007278 and OAC-2118201, the NSF under grant number 2127309 to the Computing Research Association for the CIFellows 2021 Project, and a Siemens PhD fellowship.
A APPENDIX
A.1 DISCRETIZATION AND IGA
We discretize the geometric domain into P isogeometric patches. All patches use the same B-spline basis functions B_ij(ξ, η) (Piegl & Tiller, 1997) with i, j ∈ [n_p] and ξ, η ∈ [0, 1], where n_p is the number of control points. The domain of ξ and η corresponds to the parent domain of each patch (Figure 13). In the B-spline literature, the parent domain is often referred to as the knot span. The basis functions are piecewise polynomial with degree specified as a parameter. In all of our experiments, we use piecewise quadratic B-spline functions. Each B-spline basis function corresponds to a control point, and each control point represents two degrees of freedom in 2D space, i.e., each patch p ∈ [P] has n_p × n_p × 2 degrees of freedom (x_p^{ij} and y_p^{ij}). The mapping from the parent domain of each patch to the physical domain is given by a linear combination of B-spline basis functions, where the weights of the linear combination are given by the control point coordinates. Explicitly, the mapping function ϕ_p from the parent space of patch p to physical space is given by
\phi_p(\xi, \eta; \mathbf{x}, \mathbf{y}) = \sum_{i=1}^{n_p} \sum_{j=1}^{n_p} \begin{bmatrix} x_p^{ij} \\ y_p^{ij} \end{bmatrix} B_{ij}(\xi, \eta),   (1)
where [x_p^{ij}, y_p^{ij}] are the control points parameterizing the mapping. In our simulator, we represent the reference configuration with reference control points [X_p^{ij}, Y_p^{ij}]; these are determined by our geometry parameters {r, c} through a differentiable map. We then parameterize the deformed geometry with the same basis, using deformed control points [x_p^{ij}, y_p^{ij}]. For a given deformation, the integral in Eq 3 representing the potential energy can be computed by a pullback in the parent domain, using Gaussian quadrature, as is standard in FEM (Hughes et al., 2005).
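As an illustration, Eq. (1) can be evaluated as a tensor product of one-dimensional B-spline basis functions; the sketch below assumes a tensor-product basis B_ij(ξ, η) = b_i(ξ) b_j(η) and uses SciPy for the basis evaluation, which is an assumption about the implementation rather than the paper's code.

```python
import numpy as np
from scipy.interpolate import BSpline

def basis_1d(knots, degree, i, t):
    """Value of the i-th one-dimensional B-spline basis function at t."""
    coeffs = np.zeros(len(knots) - degree - 1)
    coeffs[i] = 1.0
    return BSpline(knots, coeffs, degree)(t)

def patch_map(xi, eta, ctrl_xy, knots, degree=2):
    """Evaluate Eq. (1): map parent coordinates (xi, eta) of one patch to
    physical space. ctrl_xy is an (n_p, n_p, 2) array of control points."""
    n = ctrl_xy.shape[0]
    bx = np.array([basis_1d(knots, degree, i, xi) for i in range(n)])
    by = np.array([basis_1d(knots, degree, j, eta) for j in range(n)])
    # Tensor-product basis: B_ij(xi, eta) = b_i(xi) * b_j(eta).
    return np.einsum('i,j,ijd->d', bx, by, ctrl_xy)
```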
Dirichlet boundary conditions of the type we use in this paper can be represented as constraints on a subset of the control points. For each boundary condition, the corresponding reference and deformed control points are prescribed to have particular displacement values. Furthermore, since our geometry is decomposed into multiple neighboring patches, the control points must also have incidence constraints amongst them. These are kept track of using constraint groups, where each group has a representative element.
A.2 MECHANICAL MODEL
In the static equilibrium problems we consider, the solution is a displacement that minimizes some energy function. In particular, we use a nearly incompressible Neo-Hookean material model (Ogden,
1997), in which the elastic properties are captured by a hyperelastic strain energy density function W(F), with W : R^{2×2} → R. This function is independent of the path of deformation and depends only on the deformation gradient tensor F = ∂u/∂X + I, where X ∈ D ⊂ R^2 represents the position of a particle in the undeformed reference configuration, and u(X) is the displacement field. Here,
W(F) = \frac{\mu}{2} \left( I_1 - 2 - 2 \log J \right) + \frac{\kappa}{2} (\log J)^2,   (2)
where J = det(F), I_1 = tr(F^T F), and µ = E/(2(1 + ν)) and κ = E/(3(1 − 2ν)) are the shear and bulk moduli of a material with Young’s modulus E and Poisson’s ratio ν, respectively. This is a standard choice for hyperelastic material modeling that transfers well to the real-world. We can solve for the displacement by finding the stationary point of the potential energy functional Ψ(u),
u^* = \operatorname*{argmin}_{u \in H} \Psi(u), \qquad \Psi(u) = \int_D W(F)\, dX = \int_D W\!\left( \left. \frac{\partial u}{\partial X} \right|_{X = X'} + I \right) dX',   (3)
where H is the set of all displacement fields that satisfy prescribed Dirichlet boundary conditions (expressed as equality constraints on the displacement field). To solve this in practice, we discretize u and define a standard representation of the geometry. Abstractly, the solution u∗ can be regarded as an implicit function of the design parameters and boundary conditions. The solution u∗ can be computed in a discretized form using standard second-order optimization algorithms, and gradients can be computed using implicit differentiation.
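For concreteness, the strain energy density of Eq. (2) could be evaluated as in the sketch below; the material constants shown are illustrative placeholders, not the values used in the paper.

```python
import numpy as np

def neo_hookean_density(F, E=1.0, nu=0.48):
    """Nearly incompressible Neo-Hookean density of Eq. (2).
    E and nu are illustrative placeholders, not the paper's material values."""
    mu = E / (2.0 * (1.0 + nu))            # shear modulus
    kappa = E / (3.0 * (1.0 - 2.0 * nu))   # bulk modulus
    J = np.linalg.det(F)
    I1 = np.trace(F.T @ F)
    return 0.5 * mu * (I1 - 2.0 - 2.0 * np.log(J)) + 0.5 * kappa * np.log(J) ** 2
```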
A.3 END-TO-END DIFFERENTIABILITY
We would like to reiterate that our differentiability conditions are satisfied, so that we can train neuromechanical autoencoders end-to-end. We first define a differentiable map from geometry parameters and Dirichlet boundary condition values into the reference B-spline control points. This is then used to construct the Πl,Πg functions, both of which are differentiable. The global vector q is then passed into a black-box optimizer to produce the solution q∗. Gradients with respect to the solution of the optimizer are computed using adjoint optimization. The solution q∗ is then mapped back into local coordinates using Πl, and is used to compute the loss function L. This pipeline ensures that we have a differentiable map from geometry parameters and boundary conditions (actuations) to the NMA task loss function, so we can proceed to train the NMA objective using stochastic gradient descent. In the next section, we demonstrate specific applications.
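A minimal sketch of the implicit-differentiation step is shown below, assuming the equilibrium condition grad_q Psi(q*, theta) = 0; the function names are placeholders and dense linear algebra is used only for clarity (the actual solver works with sparse systems).

```python
import numpy as np

def loss_grad_wrt_design(q_star, dL_dq, hess_qq, jac_q_theta):
    """At equilibrium grad_q Psi(q*, theta) = 0, so by implicit differentiation
    dq*/dtheta = -H^{-1} G with H = d^2 Psi/dq^2 and G = d(grad_q Psi)/dtheta.
    The chain rule then gives dL/dtheta = -lambda^T G, where H lambda = dL/dq."""
    H = hess_qq(q_star)              # (n, n) energy Hessian at the solution
    G = jac_q_theta(q_star)          # (n, m) mixed second derivative
    lam = np.linalg.solve(H, dL_dq)  # adjoint solve (H is symmetric)
    return -lam @ G                  # (m,) gradient w.r.t. design parameters
```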
A.4 SOLVER DETAILS
After discretization to q, we solve the energy minimization using Newton’s method with incremental loading. The Hessian of the energy is assembled in sparse form using the trick from Powell & Toint (1979). Using the discretization of the system we automatically derive the sparsity pattern of the Hessian, and then construct appropriate binary vectors to perform Hessian-vector products with. We then reshape these into a CSR matrix representation of the Hessian.
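The sketch below illustrates this kind of probing-based assembly under the assumption that columns are grouped so that no two columns in a group share a nonzero row; the names and data layout are our own and not the paper's API.

```python
import numpy as np
import scipy.sparse as sp

def assemble_sparse_hessian(hvp, column_groups, sparsity):
    """Recover a sparse Hessian from Hessian-vector products. `column_groups`
    partitions the columns so that no two columns in a group share a nonzero
    row; `sparsity` is the known pattern as a SciPy sparse matrix."""
    n = sparsity.shape[0]
    H = sp.lil_matrix((n, n))
    for group in column_groups:
        probe = np.zeros(n)
        probe[list(group)] = 1.0        # one binary probing vector per group
        hv = hvp(probe)                 # a single Hessian-vector product
        for j in group:
            for r in sparsity[:, j].nonzero()[0]:
                H[r, j] = hv[r]
    return H.tocsr()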
The sparse linear systems are then solved by GMRES (Saad & Schultz, 1986) preconditioned by an incomplete LU decomposition. Since the energy involves log detF , where F is the deformation gradient, taking a finite step can lead to numerical blowup. Therefore our incremental loading is adaptive, and a line search is performed to avoid inversion of elements in the geometry.
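A minimal SciPy sketch of one such preconditioned linear solve might look as follows; the drop tolerance and fill factor are illustrative values.

```python
import scipy.sparse.linalg as spla

def solve_newton_step(H_csr, rhs):
    """Solve H dq = rhs with GMRES preconditioned by an incomplete LU
    factorization of the sparse Hessian."""
    ilu = spla.spilu(H_csr.tocsc(), drop_tol=1e-5, fill_factor=10)
    precond = spla.LinearOperator(H_csr.shape, matvec=ilu.solve)
    dq, info = spla.gmres(H_csr, rhs, M=precond)
    if info != 0:
        raise RuntimeError("GMRES did not converge")
    return dq
```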
A.5 EXPERIMENTAL DETAILS
All B-spline patches used were quadratic and contained 5× 5 control points. Quadrature was done by degree 5 Gauss-Legendre. Most computation was done on NVIDIA RTX 2080 GPUs. Each solve instance was done on a single GPU, and mini-batching was done by parallelizing with MPI. Each MPI task used a single GPU. The radii parameters were clipped to [0.1, 0.9] to aid solver stability.
A.5.1 TRANSLATION TASK
The learning rate was 0.0001∗M where M is the number of MPI tasks. In this case, we used 8 MPI tasks. The neural network was a fully-connected network with activation sizes: 2−30−30−10−2 (including input/output). The final layer was clipped by a tanh and multiplied by a maximum displacement of 60% of cell width.
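A sketch of such a controller is shown below; the framework, hidden activations, and variable names are illustrative choices, with only the layer sizes and the tanh output scaling taken from the description above.

```python
import torch
import torch.nn as nn

class TranslationController(nn.Module):
    """2-30-30-10-2 fully-connected controller whose output is squashed by
    tanh and scaled to 60% of the cell width, as described in A.5.1."""
    def __init__(self, max_disp):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, 30), nn.ReLU(),
            nn.Linear(30, 30), nn.ReLU(),
            nn.Linear(30, 10), nn.ReLU(),
            nn.Linear(10, 2),
        )
        self.max_disp = max_disp  # 60% of cell width

    def forward(self, target_xy):
        return self.max_disp * torch.tanh(self.net(target_xy))
```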
A.5.2 ROTATION TASK
The learning rate was 0.001 ∗M for single and 0.01 ∗M for double rotation. M is the number of MPI tasks. In this case, we used 8 MPI tasks. The neural network was a fully-connected network with activation sizes: 1 − 20 − 10 − 1 (including input/output) or 1 − 20 − 10 − 2 for the double actuation. The final layer was clipped by a tanh and multiplied by a maximum displacement of 60% of cell width.
A.5.3 SHAPE MATCHING TASK
The learning rate was 0.0001∗M where M is the number of MPI tasks. In this case, we used 16 MPI tasks. The neural network was a fully-connected network with activation sizes: 98−200−200−12 (including input/output). The final layer was clipped by a tanh and multiplied by a maximum displacement of 60% of cell width.
A.6 DIGITAL MNIST TASK
Initially we learned colors and actuations for a lookup table of 10 digits. This was trained with a learning rate of 0.01 ∗ M where M is the number of MPI tasks. We used 10 MPI tasks, one per digit. We clipped the maximum displacement to 60% of cell width using tanh. Afterwards, we trained a fully-connected neural network to map MNIST digits to the actuations of the corresponding digit. We then plugged the neural network in to map directly to actuations, and finetuned end-to-end with a learning rate of 0.0001.
A.7 ADDITIONAL PORE MATCHING RESULTS
A.8 ADDITIONAL DIGITAL MNIST RESULTS | 1. What is the focus and contribution of the paper regarding optimization methods for elastic materials and neural networks?
2. What are the strengths of the proposed approach, particularly its novelty and practical applicability?
3. Do you have any concerns or questions about the explanations provided in the paper?
4. How do you assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
= Update after rebuttal =
I thank the authors for their response to my comments. I maintain my recommendation for acceptance.
= Original review =
The authors show how to optimize the shape of an elastic material, as well as neural networks to control its actuators, in order to make the material perform various mechanical tasks.
Both the material's shape and the controlling neural network are optimized by gradient descent for various task domains. Given a target action (which, depending on the domain, might be a location for a point, a shape, or even an MNIST digit image to be classified), the neural network translates a specification of the action into adequate actuations, through which the optimized shape performs the specified action.
The authors apply their method to translation/rotation of a point on the material, controlled deformation of a shape, and - remarkably - translating MNIST images into seven-segment digit displays through deformation of a colored shape seen through slits.
The authors also show a working real-life implementation of the system, demonstrating practical applicability.
Strengths And Weaknesses
Strength:
The method is novel (to my knowledge) and interesting. While there has been much work on optimizing morphologies and controllers jointly (e.g. Karl Sims and other "virtual creatures" models - these should be cited!), as well as optimizing elastic shapes (e.g. Sodarace), the system described here looks like nothing I'm aware of.
Weaknesses:
The only minor defect I can see is that some points in the explanation could be clarified - see below
Clarity, Quality, Novelty And Reproducibility
The paper is well written throughout. Just a few clarifications:
What exactly are the "actuations" mentioned in the figures? We just see blue vertical or horizontal brackets on each side of the square, but it's not clear what the actuation actually does - I initially thought it would be squeezing along the direction of the bracket but that doesn't seem to be the case.
The purpose of section 2.3 might be explained a little bit more - what exactly are we computing here and how is it used in the system?
Caption of Figure 8 a): should the first "optimized" be "unoptimized"?
Typo in the last paragraph of p. 4, "e.g.,wel" |
ICLR | Title
Language Models as Zero-Shot Planners: Extracting Actionable Knowledge for Embodied Agents
Abstract
Can world knowledge learned by large language models (LLMs) be used to act in interactive environments? In this paper, we investigate the possibility of grounding high-level tasks, expressed in natural language (i.e. “make breakfast”), to a chosen set of actionable steps (i.e. “open fridge”). While prior work focused on learning from explicit step-by-step examples of how to act, we surprisingly find that if pre-trained LMs are large enough and prompted appropriately, they can effectively decompose high-level tasks into low-level plans without any further training. However, the plans produced naively by LLMs often cannot map precisely to admissible actions. We propose a procedure that conditions on existing demonstrations and semantically translates the plans to admissible actions. Our evaluation in the recent VirtualHome environment shows that the resulting method substantially improves executability over the LLM baseline. The conducted human evaluation reveals a trade-off between executability and correctness but shows a promising sign towards extracting actionable knowledge from language models1.
1 INTRODUCTION
Large language models (LLMs) have made impressive advances in language generation and understanding in recent years (Devlin et al., 2018; Radford et al., 2019; Raffel et al., 2019; Brown et al., 2020). See Bommasani et al. (2021) for a recent summary of their capabilities and impacts. Being trained on large corpora of human-produced language, these models are thought to contain a lot of information about the world (Roberts et al., 2020; Li et al., 2021; BIG-bench collaboration, 2021) - albeit in linguistic form.
We ask whether we can use such knowledge contained in LLMs not just for linguistic tasks, but to make goal-driven decisions that can be enacted in interactive, embodied environments. But we are not simply interested in whether we can train models on a dataset of demonstrations collected for some specific environment – we are instead interested in whether LLMs already contain information necessary to accomplish goals without any additional training.
More specifically, we ask whether world knowledge about how to perform high-level tasks (such as “make breakfast”) can be expanded to a series of groundable actions (such as “open fridge”, “grab milk”, “close fridge”, etc) that can be executed in the environment. For our investigation, we use the recently proposed VirtualHome environment (Puig et al., 2018). It can simulate a large variety of realistic human activities in a household environment and supports the ability to perform them via embodied actions defined with a verb-object syntax. However, due to the open-ended nature of the tasks, it is difficult to autonomously evaluate their success. We rely on human evaluation (conducted on Mechanical Turk) to decide whether sequences of actions meaningfully accomplish posed tasks.
We find that large GPT-3 (Brown et al., 2020) and Codex (Chen et al., 2021) models, when prompted with a single fixed example of a task description and its associated sequence of actions, can produce very plausible action plans for the task we’re interested in. Such completions reflect the information already stored in the model – no model fine-tuning is involved. Additionally, we only observe this
1Results and videos at https://sites.google.com/view/language-model-as-planner
effect in the larger models. Unfortunately, despite their semantic correctness, the produced action plans are often not executable in the environment. Produced actions may not map precisely to admissible actions, or may contain various linguistic ambiguities.
We propose several tools to improve executability of the model’s outputs. First, we enumerate all admissible action phrases and map the model’s output action to the most semantically-similar admissible action (we use a similarity measure between sentence embeddings produced by a RoBERTa model Liu et al. (2019) in this work, but other choices are possible). Second, we use the model to autoregressively generate actions in a plan by conditioning on past actions that have been made admissible via the technique above. Such on-the-fly correction can keep generation anchored to admissible actions. Third, we provide weak supervision to the model by prompting the model with a known task example similar to the query task. This is somewhat reminiscent of prompt tuning approaches, but does not require access to gradients or internals of the model.
Using the above tools to bias model generation, we find that we improve executability of instructions from 18% to 79% (see Figure 1) without any invasive modifications to model parameters or any extra gradient or internal information beyond what is returned from the model’s forward pass. This is advantageous because it does not require any modifications to the model training procedure and can fit within existing model serving pipelines. However, we do find there to be a significant drop in correctness of the instruction sequences generated with the above tools (as judged by humans), indicating a promising step, but requiring more research on the topic.
To summarize, our paper’s contributions are as follows:
• We show that without any training, large language models can be prompted to generate plausible goal-driven action plans, but such plans are frequently not executable in interactive environments.
• We propose several tools to improve executability of the model generation without invasive probing or modifications to the model.
• We conduct a human evaluation of multiple techniques and models and report on the trade-offs between executability and semantic correctness.
2 EVALUATION FRAMEWORK
Simulating open-ended tasks that resemble naturalistic human activities requires an environment to support a rich set of diverse interactions, rendering most existing embodied environments unsuitable for our investigation. One exception is VirtualHome (Puig et al., 2018), which models human activities in a typical household. Therefore, we only provide evaluation in this environment. To further measure correctness given open-ended tasks, we conduct a human evaluation. We note that since no further training is involved throughout our investigations, the observations and findings presented in this paper should also translate to similar embodied environments.
2.1 EVALUATED ENVIRONMENT: VIRTUALHOME
Preliminaries In VirtualHome, activities are expressed as programs. Each program consists of a sequence of steps, where each step is written as: [action] 〈arg1〉(id1) ... 〈argn〉(idn). Each action refers to an atomic action such as “walk”, “open”, and “put”. A total of 45 atomic actions are supported by VirtualHome. Different actions take in different numbers of args necessary for specifying an interaction. Associated with each arg is a unique id specifying the corresponding node in the environment graph, in case multiple instances of the same object class are present in the graph. For the sake of simplicity, we omit the id in the remaining discussions of this paper and allow automatic assignment by the environment. An example program is shown in Appendix 8.1.
Evaluated Tasks We use the knowledge base collected by VirtualHome for evaluation. The knowledge base contains household activities crowd-sourced from Amazon Mechanical Turk (MTurk). The MTurk workers were asked to provide natural language descriptions of daily household activities and all actionable steps necessary for completing the activities. The descriptions are given both as high-level task descriptions and as step-by-step instructions. We omit the use of step-by-step instructions in this work as we desire direct extraction of executable programs from only task descriptions. For evaluations, we randomly sample a subset of 88 high-level tasks, each having one or more annotated ground-truth programs. The remaining 204 tasks are used as the demonstration set, from which we are allowed to select example(s) for prompting language models. Note that no training or fine-tuning is performed using these tasks and their annotations. More details of the evaluated tasks can be found in Appendix 8.6.
2.2 METRICS
A program that commands the agent to wander around in a household environment is highly executable but may not complete the desired task. On the other hand, a program composed of step instructions from knowledge bases can likely complete the task but cannot be executed. The reason is that free-form instructions can be ambiguous and may lack necessary common-sense actions. To this end, we consider two axes for evaluation: executability and correctness.
Executability Executability measures whether an action plan can be correctly parsed and satisfies the common-sense constraints of the environment. To be correctly parsed, an action plan must be syntactically correct and contain only allowed actions and recognizable objects. To satisfy the common-sense constraints, each action step must not violate the set of its pre-conditions (e.g. the agent cannot grab milk from the fridge before opening it) and post-conditions (e.g. the state of the fridge changes from “closed” to “open” after the agent opens it). We report the average executability across all 88 tasks and across all 7 VirtualHome scenes.
Correctness Unlike most embodied environments where the completion of a task can be easily judged, the ambiguous and multimodal nature of natural language task specification makes it impractical to obtain a gold-standard measurement of correctness. One approach could be measuring similarity of the final environment state produced by executing predicted and ground-truth programs, but VirtualHome initializes an environment differently based on the to-be-executed program, making comparisons difficult if measured in such a way. Therefore, we conduct human evaluation for the highlighted methods. More details of the human evaluations can be found in Appendix 8.5. For the remaining methods and ablation studies, we rely on a match-based metric that measures how similar a generated program is to human annotations. Specifically, we follow Puig et al. (2018) and calculate the longest common subsequence (LCS) between two programs, normalized by the maximum length of the two. In the presence of multiple ground-truth programs for a single task, we take the maximum LCS across the ground-truth programs. However, we note that the majority of the tasks only have one ground-truth annotation, but there are often many plausible ways to complete a certain task, making this metric imperfect at evaluating program correctness2. Although correlation between the two is shown by Puig et al. (2018), we consider it only as a proxy metric in place of unscalable human evaluation.
2Although LCS has a mathematical range of [0, 1], we measure the LCS between different ground-truth programs for the same task and find an empirical maximum of 0.489.
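For reference, the normalized LCS metric described above can be computed with standard dynamic programming; the sketch below treats a program as a list of step strings.

```python
def normalized_lcs(prog_a, prog_b):
    """Longest common subsequence between two programs (lists of step strings),
    normalized by the length of the longer program."""
    n, m = len(prog_a), len(prog_b)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if prog_a[i - 1] == prog_b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[n][m] / max(n, m)
```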
[Figure 2 panels: “Zero-Shot Planning via Causal LLM”, “Translation to Admissible Action”, and “Step-By-Step Autoregressive Generation”, illustrated with prompts for the tasks “Shave” and “Apply lotion”.]
Figure 2: We investigate the possibility of extracting actionable knowledge from pre-trained language models without any additional training. We first show surprising finding that large language models (LLMs) can decompose high-level tasks into sensible low-level action plans (left). To make the action plans executable, we propose to translate each step into admissible action via another LLM (middle). The translated action is appended to the original prompt used for generating the remaining steps (right).
3 METHOD
In this section, we investigate the possibility of extracting actionable knowledge from pre-trained language models without further training. We first give an overview of the common approach to query large language models (LLMs) and how it may be used for embodied agents. Then we describe an inference-time procedure that addresses several deficiencies of the LLM baseline and offers better executability in embodied environments. We break down the proposed procedure into three individual components, each discussed in Sec 3.2, 3.3, and 3.4.
Since LMs excel at dealing with natural language text instead of the specific format required by VirtualHome as described in Section 2.1, we only expose natural language text to LMs. To do this, we define a mapping for each atomic action that parses a natural language phrase to the required format. For instance, “Walk to living room” is converted to “[Walk] 〈living room〉(1)”. When an LM output cannot be parsed to any of the allowed actions, the entire program is considered syntactically incorrect and thus not executable.
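An illustrative version of this parsing step is sketched below; the template list and regular expressions are hypothetical examples, not the full mapping used in the paper.

```python
import re

# A few hypothetical templates mapping phrases to VirtualHome's
# "[action] <arg> (id)" syntax; the real mapping covers all 45 atomic actions.
TEMPLATES = [
    (r"walk to (.+)", "[Walk] <{}> (1)"),
    (r"switch on (.+)", "[SwitchOn] <{}> (1)"),
    (r"grab (.+)", "[Grab] <{}> (1)"),
]

def phrase_to_step(phrase):
    for pattern, step_format in TEMPLATES:
        m = re.fullmatch(pattern, phrase.strip().lower())
        if m:
            return step_format.format(m.group(1))
    return None  # unparsable step: the whole program counts as not executable
```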
3.1 PROMPTING
Previous works have shown that large language models pre-trained on a colossal amount of data contain useful world knowledge that can be probed to perform various down-stream tasks (Radford et al., 2019; Brown et al., 2020). Notably, autoregressive LLMs can even perform in-context learning, an approach to solve tasks using only contextual information without gradient updates (Brown et al., 2020). Contextual information is given as part of the input prompt and LMs are asked to complete the remaining text. It often consists of natural language instructions and/or a number of examples containing the desired input/output pairs.
We adopt the same approach to query LLMs to generate action plans for high-level tasks. Specifically, we prepend one example task description sentence and its annotated action plan from the demonstration set to the query task description, as shown in Fig 2. To obtain text completion results, we sample from the autoregressive LLM using temperature sampling and nucleus sampling (Holtzman et al., 2019). We refer to this LM as the Planning LM and the approach using this LM for plan generation as Vanilla [LM], where [LM] is replaced by a specific language model such as GPT-3 or Codex.
To further improve the quality of the generated output, we follow Chen et al. (2021), which uses LMs for program synthesis, and sample multiple outputs for each task. However, unlike prior works in program synthesis that choose the sample with the highest unit-test pass rate, we only consider the setting where one sample is allowed to be evaluated for each task. This is because repetitive trial and error can be dangerous in the real world, and executing many action plans is equivalent to probing the environment for privileged information, which is often considered not viable.
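A minimal sketch of this prompting-and-sampling procedure with an open-source causal LM is given below; the model name, sampling values, and prompt format are illustrative stand-ins for the Planning LM setup described above.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2-xl")   # stand-in for the Planning LM
lm = AutoModelForCausalLM.from_pretrained("gpt2-xl")

def sample_plans(example_task, example_plan, query_task, k=10):
    """Prepend one demonstration (task + plan) to the query task and draw k
    completions with temperature and nucleus sampling."""
    prompt = f"Task: {example_task}\n{example_plan}\n\nTask: {query_task}\nStep 1:"
    ids = tok(prompt, return_tensors="pt").input_ids
    out = lm.generate(ids, do_sample=True, temperature=0.6, top_p=0.9,
                      num_return_sequences=k, max_new_tokens=128,
                      pad_token_id=tok.eos_token_id)
    return [tok.decode(o[ids.shape[1]:], skip_special_tokens=True) for o in out]
```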
3.2 ROBUST PARSING BY SEMANTIC TRANSLATION
One issue arises when naively following the above approach to generate action plans for high-level tasks: the action plan is often not executable because LMs are allowed to generate free-form text.
Therefore, most of the time the output cannot be mapped to one unambiguous actionable step. Many reasons can cause such failures: 1) the output does not follow pre-defined mappings of any atomic action (i.e. “I first walk to the bedroom” does not follow “Walk to 〈PLACE〉”), 2) the output may refer to atomic actions and objects using words unrecognizable by the environment (i.e. “Clean the dirty dishes in the sink” where “clean” and “dirty dishes in the sink” cannot be mapped to a precise action and object), 3) the output contains lexically ambiguous words (i.e. “Open TV” should instead be “Switch on TV”), or 4) the output may use a disallowed action (i.e. “Microwave the cup”).
Instead of developing a set of rules to transform the free-form text into admissible action steps, we propose to again leverage world knowledge learned by large language models to semantically translate the action. For each step â in the action plan, we aim to find the most similar admissible environment action a_e, as measured by cosine similarity:
\operatorname*{argmax}_{a_e} \; \frac{f(\hat{a}) \cdot f(a_e)}{\| f(\hat{a}) \| \, \| f(a_e) \|},
where f is an embedding function.
To embed the output text and environment actions, we use a BERT-style LM (Devlin et al., 2018; Liu et al., 2019) trained with the Sentence-BERT (Reimers & Gurevych, 2019) objective because of its suitability for sentence modeling. The sentence embedding is obtained by mean-pooling the last layer hidden states across all tokens. We refer to this LM as the Translation LM. Note that this is a different LM than the GPT-style Planning LM discussed in the text so far. Using a single LM for both purposes could well be possible and likely more efficient, but we leave such investigation to future work. While the set of actions in our environment is discrete and possible to exhaustively enumerate, sampling or projection can be employed in larger discrete or continuous action spaces.
Since the Translation LM can guarantee the parsed action is allowed by the environment, we can trade off the semantic soundness of an LM step against how likely it is to map to an admissible action in the environment. This can be achieved by a simple modification to the scheme that we use to choose the best sample from the LM output. Instead of only using mean token log probability as a ranking metric, we choose the sample with the highest score calculated as s = C + β · logprob, where C is the cosine similarity to the closest allowed action and β is a weighting parameter.
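A small sketch of the translation and ranking step is given below; the Sentence-Transformers model name, the toy action set, and the value of β are illustrative assumptions.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

translator = SentenceTransformer("stsb-roberta-large")                # Translation LM
ADMISSIBLE = ["walk to kitchen", "switch on computer", "grab milk"]   # toy action bank
ACT_EMB = translator.encode(ADMISSIBLE, normalize_embeddings=True)

def translate_and_score(step_text, mean_logprob, beta=0.3):
    """Map a free-form step to the closest admissible action and compute the
    ranking score s = C + beta * logprob from Section 3.2."""
    q = translator.encode([step_text], normalize_embeddings=True)[0]
    sims = ACT_EMB @ q                  # cosine similarities (unit vectors)
    best = int(np.argmax(sims))
    return ADMISSIBLE[best], float(sims[best]), float(sims[best] + beta * mean_logprob)
```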
3.3 AUTOREGRESSIVE TRAJECTORY CORRECTION
Translating each step of the program after the entire program has been synthesized is analogous to open-loop planning and is subject to compounding errors. In practice, LLMs might output compounded instructions for a single step, even though it cannot be completed using one admissible action in the environment. To this end, we can instead interleave plan generation and action translation to allow for automatic trajectory correction. At each step, we first query the Planning LM to generate k samples for a single action. Then we calculate the score s for each sample using the Translation LM and append the translated action to the unfinished text completion. This way all subsequent steps will be conditioned on admissible actions instead of free-form text output generated by the Planning LM. Furthermore, we can use the Translation LM to detect out-of-distribution actions, those outside the capabilities of a robot, and terminate a program early instead of mapping to a faulty action. This can be easily implemented by setting a threshold ε such that the program is terminated early if C_max^t < ε at step t. We empirically show this leads to better executability while maintaining similar correctness of the generated action plans.
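Putting the pieces together, the interleaved generation-and-translation loop with early termination can be sketched as follows; the two helper functions stand in for the Planning LM sampler and the translation step above and are not the paper's exact interface.

```python
def generate_plan(task, sample_next_steps, translate_and_score,
                  epsilon=0.4, k=10, max_steps=20):
    """Interleave plan generation and action translation: query k candidate
    continuations per step, translate and score them, append the best
    admissible action to the prompt, and stop early when the best cosine
    similarity falls below epsilon."""
    prompt = f"Task: {task}\n"
    plan = []
    for t in range(1, max_steps + 1):
        candidates = sample_next_steps(prompt + f"Step {t}:", k)  # [(text, logprob)]
        scored = [translate_and_score(text, lp) for text, lp in candidates]
        action, cos_sim, _ = max(scored, key=lambda s: s[2])
        if cos_sim < epsilon:               # out-of-distribution step: terminate
            break
        plan.append(action)
        prompt += f"Step {t}: {action}\n"   # condition on the admissible action
    return plan
```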
3.4 DYNAMIC EXAMPLE SELECTION FOR IMPROVED KNOWLEDGE EXTRACTION
So far in the text, we always give the same example in the prompt for all evaluated high-level tasks. However, consider the task of “ordering pizza”. Prompting LLMs with this task may give the assumption that the agent is initialized in front of a computer, and the LLMs may guide the agent to search for a pizza store and click “checkout my cart”. Although these are reasonable and feasible in the real world, such an assumption cannot always be made, as these interactions may not be supported in simulated environments like VirtualHome. In fact, the closest series of actions that human experts give may be “walking to a computer”, “switching on the computer”, and “typing the keyboard”. Without being finetuned on these data, LLMs would often fail at these tasks. To provide weak supervision at inference time, we propose to use the Translation LM to select the most similar task from the demonstration set to be used as the example in the prompt. Specifically, we choose the task
whose high-level description matches closely the query task as measured by cosine similarity. This allows Planning LM to reason how to perform a similar task given a human-annotated example. In our evaluations, we make sure the demonstration set is not overlapping with our test queries.
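A corresponding sketch of the example-selection step is given below, reusing a sentence-embedding model as the Translation LM; the names are illustrative.

```python
def select_prompt_example(query_task, demo_tasks, translator):
    """Pick the demonstration task whose description is most similar to the
    query task under cosine similarity of sentence embeddings."""
    embs = translator.encode([query_task] + demo_tasks, normalize_embeddings=True)
    sims = embs[1:] @ embs[0]
    return demo_tasks[int(sims.argmax())]
```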
Combining the various improvements discussed above, we refer to the final approach as Translated [LM], where [LM] is replaced by the specific language model used, such as GPT-3 and Codex.
4 RESULTS
In this section, we first show that language models can generate sensible action plans for many high-level tasks, even without any additional training. Then we highlight their inadequacy when naively applied to embodied environments and demonstrate how this can be improved by again leveraging world knowledge learned by LLMs. Visualizations of generated programs are shown in Fig 3.
Sampling from LMs Pre-trained LMs are sensitive to sampling parameters and the specific example given in the prompt. For all evaluated methods, we perform hyper-parameter search over various sampling parameters, and for methods using a fixed prompt example, we report metrics averaged across three randomly chosen examples. To select the best run for each method, we rank the runs by LCS + executability, each normalized by human-expert scores3. Further details can be found in Appendix 8.3.
Model Choices We find empirically that the combination of Codex-12B and Sentence-RoBERTa-355M works well in our setting. Codex and GPT-3 are accessed using the OpenAI API. The remaining models are accessed through open-source packages, Hugging Face Transformers (Wolf et al., 2019) and SentenceTransformers (Reimers & Gurevych, 2019), without additional modifications.
4.1 DO LLMS CONTAIN ACTIONABLE KNOWLEDGE FOR HIGH-LEVEL TASKS?
We first investigate whether LLMs can generate sensible action plans expressed in free-form language. We use the approach described in Section 3.1 to query pre-trained LLMs. To evaluate the correctness of generated action plans, we conduct human evaluations. For each model, we ask 10 human annotators to determine – by answering “Yes” or “No” – whether each task can be completed using provided action steps. To provide a reference of how humans might rate the action plans provided by other humans, we also ask annotators to rate the ground-truth action plans provided in the VirtualHome dataset for the same set of tasks. In contrast to the free-form text output by LLMs, the ground-truth action plans from VirtualHome are generated via a graphical programming interface that enforces strict syntax, although annotators were allowed to compose necessary actions.
We show the human evaluation results in Fig 1, where the y-axis shows correctness averaged across all tasks and all annotators. Surprisingly, when LLMs are large enough and without imposed syntactic constraints, they can generate highly realistic action plans whose correctness – as deemed by human annotators – even surpasses that of the human-labeled ground truth. Yet another interesting finding is
3See footnote 2.
that Codex outperforms GPT-3 significantly under the same number of model parameters. One hypothesis could be that by fine-tuning on structured data (docstrings and code), Codex specializes at decomposing a high-level objective into a number of basic operations, even with those operations described in natural language. We also observe some level of correctness for smaller models such as GPT-2. However, inspection of its produced output indicates that it often generates significantly shorter plans by ignoring common-sense actions or by simply rephrasing the given task (e.g. the task “Go to sleep” produces only a single step “Go to bed”). These failure modes sometimes mislead human annotators to mark them correct as the annotators may ignore common-sense actions in their judgment as well, resulting in a higher correctness rate than the quality of the output shows.
4.2 HOW EXECUTABLE ARE THE LLM ACTION PLANS?
We analyze the executability of LLM plans by evaluating them in all 7 household scenes in VirtualHome. As shown in Table 1, we find action plans generated naively by LLMs are generally not very executable. Although smaller models seem to have higher executability, we find that the majority of these executable plans are produced by ignoring the queried task and repeating the given GT example of a different task. This is validated by the fact that smaller models have lower LCS than larger models despite having higher executability, showing that this failure mode is prevalent among smaller models. In contrast, larger models do not suffer severely from this failure mode. Yet as a result of being more expressive, their generated programs are substantially less executable.
4.3 CAN LLM ACTION PLANS BE MADE EXECUTABLE BY ACTION TRANSLATION?
In this section, we evaluate the effectiveness of our proposed procedure of action translation. We first create a bank of all 47522 allowed action steps in the environment, including all possible combinations of atomic actions and allowed arguments/objects. Then we use an off-the-shelf Sentence-RoBERTa (Liu et al., 2019; Reimers & Gurevych, 2019) as the Translation LM to create embeddings for actions and output text. For better computational efficiency, we pre-compute the embeddings for all allowed actions, leaving only minor computational overhead for our procedure over the baseline methods at inference time. As shown in Table 1, executability of generated programs is significantly improved. Furthermore, we also observe improved LCS because the translated action steps precisely follow the program syntax and thus are more similar to ground-truth annotations. One sample output is shown in Fig 1 and a larger random subset of generated samples can be found in Appendix 8.7.
To validate their correctness, we again perform human studies using the same procedure in Sec 4.1. Results are shown in Table 1. We find that despite being more similar to GT, the programs are deemed less correct by humans. By examining the generated output, we observe two main sources of errors. First, we find Translation LM is poor at mapping compounded instructions to a succinct
admissible action. This is partly because the Translation LM is trained on a much smaller dataset and contains a much smaller number of parameters, so we expect further improvement by using a larger pre-trained model for translation. The second source of error comes from the imperfect expressivity of the environment; we find that for many tasks we evaluate, certain necessary actions or objects are not implemented. This is also reflected by our human evaluation results of the GT programs, as only half of the programs are considered complete by the human annotators.
5 ANALYSIS AND DISCUSSIONS
5.1 ABLATION OF DESIGN DECISIONS

Methods | Executability | LCS
Translated Codex 12B | 78.57% | 24.72%
- w/o Action Translation | 31.49% | 22.53%
- w/o Dynamic Example | 50.86% | 22.84%
- w/o Iterative | 55.19% | 24.43%
Table 2: Ablation of three proposed techniques.
We perform ablation studies to show the effectiveness and necessity of the three components of our proposed procedure, each described in Sec 3.2, 3.3, and 3.4. As shown in Table 2, leaving out any of the three components would
lead to decreased performance in both executability and LCS. Notably, not doing action translation leads to the most significant executability drop, showing the importance of action translation in extracting executable action plans from LLMs.
5.2 CAN LLMS GENERATE ACTIONABLE PROGRAMS BY FOLLOWING DETAILED INSTRUCTIONS?
Prior works often focus on translating step-by-step instructions into executable programs. We evaluate LLMs under this setting using a prompt format shown in Appendix 8.2. Although this setting is easier as it does not require rich actionable knowledge, detailed instructions can help resolve much ambiguity of exactly how to perform a high-level task when multiple solutions are possible. Therefore, Translated Codex 12B achieves executability of 78.57% and LCS of 32.87%, where LCS sees a considerable bump from the setting without detailed instructions. Surprisingly, the LCS result is very close to that of a supervised LSTM (Hochreiter & Schmidhuber, 1997) baseline from VirtualHome trained on human-annotated data, which is at 34.00%. Note that since code to train the baseline and the specific train/test split is not publicly released, we only show results reported in Puig et al. (2018) as a reference. We also cannot compare executability as it is not reported.
5.3 IS ACTION TRANSLATION NECESSARY FOR ACTIONABLE KNOWLEDGE GROUNDING?
The investigations of this paper are two-fold: 1) Is actionable knowledge present in LLMs? 2) Can we ground this actionable knowledge in an interactive environment? In this section, we focus our attention on the second question by conditioning on the assumption that the first question is true. To do this, since successful execution of correct action plans directly measures grounding, we select only the correct plans generated by LLMs and measure how executable they are. We deem an action plan to be correct if 70% or more of the human annotators decide it is correct.
As shown by Table 3, when an LM is not large enough (e.g. GPT-2), not only does it contain little actionable knowledge, but this knowledge also cannot be grounded at all. GPT-3 and Codex, on the other hand, can generate highly correct action plans in free-form language. However, they do not have the capability to ground their actionable knowledge in interactive environments. More interestingly, comparing GPT-3 at both 12B parameters and 175B parameters, the ratio of executable plans does not improve with the parameter count. This shows that simply training larger models does not necessarily lead to better knowledge grounding. In the meantime, action translation offers a promising way towards grounding actionable knowledge by producing highly executable plans. However, we again note that it comes at a trade-off of producing less correct plans as compared to its vanilla counterpart, and we hope to see future endeavors for bridging the gap.
6 RELATED WORKS
Large-scale natural language modeling has witnessed rapid advances since the inception of the Transformer architecture (Vaswani et al., 2017). It has been shown by recent works that large language models (LLMs) pre-trained on large unstructured text corpora not only can perform strongly on various down-stream NLP tasks (Devlin et al., 2018; Radford et al., 2019; Raffel et al., 2019; Brown et al., 2020) but also can internalize an implicit knowledge base containing rich information about the world (Petroni et al., 2019; Jiang et al., 2020; Davison et al., 2019; Talmor et al., 2020; Roberts et al., 2020). Furthermore, the learned representations can be used to model relations of entities (Li et al., 2021), retrieve matching visual features (Ilharco et al., 2020), and even serve as valuable priors when applied to diverse tasks from different modalities (Lu et al., 2021; Tsimpoukelli et al., 2021). Compared to prior works in knowledge extraction that extract single-step factual answers memorized by the models (e.g. “Dante was born in [PLACE]”), we aim to extract sequential action plans to complete an open-ended human activity (e.g. “make breakfast”). We further require these plans to only contain allowed actions and satisfy the pre/post-conditions of actions in order to be executed by an embodied agent.
At the same time, there has also been growing interest and development in grounding language in embodied environments. A series of prior works have investigated the possibility of parsing language instructions into formal logic to resolve various linguistic ambiguities for embodied agents (Artzi & Zettlemoyer, 2013; Misra et al., 2015; Tenorth et al., 2010). However, they often scale poorly to complex tasks and environments. Recently, more research effort has been put into creating better and more realistic environments with the goal of furthering advances in this area (Puig et al., 2018; Shridhar et al., 2020a;b; Kolve et al., 2017; Savva et al., 2019). At the same time, by leveraging the better representation power of neural architectures, a number of works have looked into creating instruction-following agents that can perform manipulation (Lynch & Sermanet, 2020), navigation (Majumdar et al., 2020), or both (Suglia et al., 2021; Hill et al., 2020).
Notably, most of these prior works do not leverage full-blown pre-trained LLMs (Suglia et al., 2021) or do not scale to complex human activities (Hill et al., 2020; Lynch & Sermanet, 2020). Perhaps more importantly, few works have evaluated LLMs in an embodiment setting that realizes the full potential of the world knowledge these models contain: the tasks evaluated are often “pick”, “grab”, “open”, and etc, which do not resemble the highly diverse activities that humans perform in daily lives. The development of VirtualHome environment Puig et al. (2018) enables such possibility. However, relevant works (Puig et al., 2020; Liao et al., 2019) rely on human-annotated data and perform supervised training from scratch. Due to the lack of rich world knowledge, these models can only generate action plans given step-by-step instructions of how to act or video demonstrations. In this work, we take a step further by conditioning only on the high-level descriptions and by extracting executable action plans from LLMs without any additional training.
7 LIMITATIONS AND CONCLUSION
There are several notable limitations of this work. First, although our approach presents a viable way to ground world knowledge in embodied environments, it is still a trade-off rather than one best solution since we observe a considerable drop in correctness. Second, we focus on high-level to mid-level grounding, assuming there is a controller that can execute mid-level tasks (such as “grab cup”). Our work does not investigate the usefulness of LLMs for low-level sensorimotor behavior grounding. The third limitation is that we do not incorporate observation context or feedback into our models. To some extent, we approach LLMs in the same way that VirtualHome asks human annotators to give action plans for a given human activity by imagination, in which case the human-generated action plans also do not incorporate observation context. However, we do see incorporating observation context for complex activities as an exciting future direction.
8 APPENDIX
8.1 EXAMPLE PROGRAM IN VIRTUALHOME
8.2 EXAMPLE PROMPT CONTAINING STEP-BY-STEP INSTRUCTIONS
8.3 HYPERPARAMETER SEARCH
For each evaluated method, we perform grid search over the following hyperparameters:
Name | Description | Search Values
epsilon (ε) | OOD cutoff threshold used in iterative action translation | {0, 0.4, 0.8}
temperature | sampling parameter adjusting relative probabilities across tokens | {0.1, 0.3, 0.6}
k | number of samples generated when querying LMs each time | {1, 10}
frequency penalty | OpenAI API specific; penalizes new tokens based on their existing frequency in the text so far | {0.1, 0.3, 0.6, 0.9}
presence penalty | OpenAI API specific; penalizes new tokens based on whether they appear in the text so far | {0.3, 0.5, 0.8}
repetition penalty | Hugging Face Transformers specific; penalizes new tokens based on whether they are repeating existing text | {1.0, 1.2, 1.5, 1.8}
For methods that use a fixed example across evaluated tasks, we search over the following three randomly chosen examples:

Example 1 (Task: Use computer): Step 1: Walk to home office; Step 2: Walk to chair; Step 3: Find chair; Step 4: Sit on chair; Step 5: Find computer; Step 6: Switch on computer; Step 7: Turn to computer; Step 8: Look at computer; Step 9: Find keyboard; Step 10: Type on keyboard
Example 2 (Task: Relax on sofa): Step 1: Walk to home office; Step 2: Walk to couch; Step 3: Find couch; Step 4: Sit on couch; Step 5: Find pillow; Step 6: Lie on couch
Example 3 (Task: Read book): Step 1: Walk to home office; Step 2: Walk to novel; Step 3: Find novel; Step 4: Grab novel; Step 5: Find chair; Step 6: Sit on chair; Step 7: Read novel
8.4 ANALYSIS OF PROGRAM LENGTH
Shorter programs have a natural advantage of being more executable. Consider a task “wash hands” and a corresponding program that only commands an agent to “go to bathroom” without additional steps. The program is obviously incorrect yet trivially executable. To validate that our approach does not simply generate very short programs, we calculate the average program length across the 88 evaluated tasks. Results are shown in Table 6. In addition to the failure mode discussed in Section 4.2 that leads to incorrect yet executable programs, smaller LMs such as GPT-2 also generate programs significantly shorter than larger models, making them more executable. In contrast, larger models like Codex-12B generate more expressive programs of high correctness, but they often suffer from executability. We show action translation can lead to the best of both worlds, generating programs that are highly executable while maintaining similar expressiveness in terms of program length.
8.5 DETAILS OF HUMAN EVALUATIONS
Human evaluations are conducted on Amazon Mechanical Turk. For each method, we generate action plans for all 88 high-level tasks. To account for expressivity of the VirtualHome environment (Puig et al., 2018), we further include ground-truth action plans from the VirtualHome dataset in our human evaluations. The human evaluations are conducted in the form of questionnaires containing all action plans with unknown corresponding methods. The questionnaire contains the following instructions at the top:
For every question below, determine whether the task can be completed in any reasonable scenario using the provided steps. In other words, can the task be decomposed into these steps? Note that simply re-stating the task does not mean completing it.
Human annotators are required to answer all the questions in the questionnaire, where each question is an action plan generated by a method unknown to the annotator. The order of the questions is randomly permuted before being presented to each annotator. For each question, the annotators need to answer either “Yes” or “No”, indicating whether they believe the action plan completes the task. For each method, we report the correctness percentage averaged across 10 participating human annotators.
8.6 DETAILS OF VIRTUALHOME TASKS
VirtualHome ActivityPrograms is a knowledge base collected on Mechanical Turk by Puig et al. (2018). The knowledge base features real household activities of humans and corresponding sequences of basic actions for robots to perform these activities. Each entry contains a high-level task name (i.e. “Watch TV”), a mid-level detailed description (i.e. “Sit on my couch directly opposite my TV, switch on my TV with the remote control and watch”), and a sequence of allowed actions (a.k.a. a program) to perform the task in VirtualHome. There are 292 distinct high-level tasks and a total of 2821 different entries in the knowledge base, where each high-level task may correspond to one or several programs. The knowledge base covers a diverse set of real human activities and a wide range of different actions and objects. Puig et al. (2018) also conduct additional human evaluation on 100 randomly sampled programs and show that 64% of them are complete (i.e. contain all necessary steps). Readers are encouraged to refer to Puig et al. (2018) for more comprehensive analysis of the dataset.
Train/Test Split From all 292 distinct high-level tasks from the ActivityPrograms knowledge base, we randomly sample 88 high-level tasks for evaluation. The remaining 204 tasks are used as demonstration set from which we are allowed to select as example(s) for prompting language models, or in the case of supervised fine-tuning, they are used to fine-tune pre-trained language models.
8.7 RANDOM SAMPLES OF GT, VANILLA CODEX, AND TRANSLATED CODEX
Task: Go to sleep
Annotated Ground-Truth: Walk to bedroom; Walk to bed; Find bed; Turn to bed; Look at bed; Sit on bed; Sleep
Codex: Walk to bedroom; Walk to bed; Find bed; Sit on bed; Find pillow; Sleep on pillow
Translated Codex (Ours): Walk to bedroom; Walk to bed; Lie on bed; Sleep

Task: Browse internet
Annotated Ground-Truth: Walk to home office; Walk to chair; Find chair; Sit on chair; Find computer; Switch on computer; Turn to computer; Look at computer; Find keyboard; Type on keyboard
Codex: Walk to home office; Walk to chair; Find chair; Sit on chair; Find computer; Switch on computer; Turn to computer; Look at computer; Find keyboard; Type on keyboard
Translated Codex (Ours): Walk to home office; Walk to desk; Find chair; Sit on chair; Find computer; Switch on computer; Find mouse; Grab mouse; Turn to computer; Point at computer

Task: Listen to music
Annotated Ground-Truth: Walk to bedroom; Walk to computer; Find computer; Turn to computer; Look at computer; Walk to computer; Find chair; Pull chair; Sit on chair; Find headset; Grab headset; Put on headset; Switch on computer; Find keyboard; Type on keyboard
Codex: Walk to home office; Walk to chair; Find chair; Sit on chair; Find computer; Switch on computer; Turn to computer; Look at computer; Find keyboard; Type on keyboard
Translated Codex (Ours): Walk to home office; Walk to computer; Sit on chair; Switch on computer; Find music stand

Task: Brush teeth
Annotated Ground-Truth: Walk to bathroom; Walk to tooth paste; Find tooth paste; Grab tooth paste; Find toothbrush; Grab toothbrush
Codex: Walk to bathroom; Walk to sink; Find toothbrush; Pick up toothbrush; Put toothbrush in mouth; Move brush around mouth for two minutes; Spit out toothpaste and brush into sink; Turn on water in sink and rinse brush for one minute; Turn off water in sink and return brush to cupboard
Translated Codex (Ours): Walk to bathroom; Open door; Walk to sink; Put pot on sink; Put brush on toothbrush; Turn to toothpaste; Put toothpaste on toothbrush; Put teeth on toothbrush

Task: Dry soap bottles
Annotated Ground-Truth: Find towel; Find soap; Grab soap; Wipe soap; Put back soap; Find shampoo; Grab shampoo; Wipe shampoo; Put back shampoo; Find shampoo; Grab shampoo; Wipe shampoo; Put back shampoo
Codex: Walk to kitchen; Walk to sink; Find sink; Turn on water tap; Wait for water to be hot enough; Fill container with hot water; Find soap bottles; Pour soap into container with hot water; Wait for soap to dissolve in water; Turn off tap and empty container into sink
Translated Codex (Ours): Walk to bathroom; Walk to shower; Find soap

Task: Vacuum
Annotated Ground-Truth: Walk to home office; Walk to computer; Find vacuum cleaner; Switch on vacuum cleaner; Switch off vacuum cleaner
Codex: Walk to hallway; Walk to closet; Find closet; Open closet; Find vacuum cleaner; Turn on vacuum cleaner; Walk to living room; Vacuum floor in living room
Translated Codex (Ours): Walk to home office; Walk to dresser; Open dresser; Find vacuum cleaner; Grab vacuum cleaner; Pull vacuum cleaner; Plug in vacuum cleaner; Switch on vacuum cleaner; Pull vacuum cleaner; Push vacuum cleaner

Task: Find dictionary
Annotated Ground-Truth: Walk to home office; Walk to bookshelf; Turn to bookshelf; Look at bookshelf; Turn to bookshelf; Look at bookshelf; Turn to bookshelf; Look at bookshelf; Turn to bookshelf; Look at bookshelf; Turn to bookshelf; Look at bookshelf; Turn to bookshelf; Look at bookshelf; Find novel; Grab novel
Codex: Walk to home office; Walk to chair; Find chair; Sit on chair; Find dictionary; Open dictionary; Look at dictionary
Translated Codex (Ours): Walk to home office; Walk to book; Find textbook

Task: Wash monitor
Annotated Ground-Truth: Walk to home office; Walk to computer; Find computer; Turn to computer; Look at computer; Switch off computer; Find brush; Turn to brush; Look at brush; Grab brush; Wipe computer; Drop brush; Find napkin; Grab napkin; Wash napkin; Wipe computer
Codex: Walk to home office; Walk to chair; Find chair; Sit on chair; Find computer; Switch on computer; Turn to computer; Look at computer; Find keyboard; Type on keyboard
Translated Codex (Ours): Walk to check; Walk to desk

Task: Read newspaper
Annotated Ground-Truth: Walk to home office; Walk to chair; Find chair; Sit on chair; Find newspaper; Grab newspaper; Open newspaper; Read newspaper
Codex: Walk to home office; Walk to chair; Find chair; Sit on chair; Find newspaper; Open newspaper; Read newspaper
Translated Codex (Ours): Walk to home office; Walk to chair; Find chair; Sit on chair; Find newspaper; Grab newspaper; Read newspaper
Task: Apply lotion Step 1: Find bed Step 2: Turn to bed Step 3: Point at bed Step 4: Wash bed Step 5: Turn to bed Step 6: Look at bed Step 7: Push bed Step 8: Sit on bed
Task: Apply lotion Step 1: Walk to bathroom Step 2: Walk to cabinet Step 3: Open cabinet Step 4: Find lotion Step 5: Apply lotion on hands Step 6: Rub hands together for 10 seconds Step 7: Apply lotion on face Task: Apply lotion Step 1: Walk to bathroom Step 2: Turn to lotionbottle Step 3: Grab lotionbottle
Task: Draw picture Step 1: Walk to home office Step 2: Walk to chair Step 3: Find chair Step 4: Sit on chair Step 5: Find desk Step 6: Open desk Step 7: Find check Step 8: Grab check Step 9: Put check on desk Step 10: Find pencil Step 11: Grab pencil Step 12: Close desk
Task: Draw picture Step 1: Walk to home office Step 2: Walk to chair Step 3: Find chair Step 4: Sit on chair Step 5: Find computer Step 6: Switch on computer Step 7: Turn to computer Step 8: Look at computer Step 9: Find keyboard Step 10: Type on keyboard
Task: Draw picture Step 1: Walk to home office Step 2: Walk to wall Step 3: Walk to drawing Step 4: Find drawing Step 5: Turn to drawing Step 6: Look at drawing Step 7: Touch drawing Step 8: Pull drawing Step 9: Push drawing
Task: Prepare pot of boiling water Step 1: Walk to dining room Step 2: Walk to pot Step 3: Find pot Step 4: Grab pot Step 5: Walk to sink Step 6: Find faucet Step 7: Switch on faucet Step 8: Put pot on sink Step 9: Grab pot Step 10: Switch off faucet Step 11: Grab pot Step 12: Walk to oven Step 13: Put pot on oven Step 14: Switch on oven Task: Prepare pot of boiling water Step 1: Walk to kitchen Step 2: Walk to sink Step 3: Find sink Step 4: Turn on tap Step 5: Fill kettle with water Step 6: Put kettle on stove Step 7: Turn on stove Step 8: Wait for water to boil Task: Prepare pot of boiling water Step 1: Walk to kitchen Step 2: Walk to cabinet Step 3: Find pot Step 4: Grab pot Step 5: Walk to stove Step 6: Find stove Step 7: Switch on stove
Task: Look at mirror Step 1: Walk to bathroom Step 2: Walk to mirror Step 3: Find mirror Step 4: Turn to mirror Step 5: Look at mirror
Task: Look at mirror Step 1: Walk to bathroom Step 2: Walk to mirror Step 3: Look at mirror
Task: Look at mirror Step 1: Walk to bedroom Step 2: Walk to mirror Step 3: Find mirror Step 4: Turn to mirror Step 5: Look at mirror
Task: Wipe down sink Step 1: Walk to dining room Step 2: Walk to rag Step 3: Find rag Step 4: Grab rag Step 5: Walk to sink Step 6: Wipe sink Step 7: Put back rag
Task: Wipe down sink Step 1: Walk to kitchen Step 2: Walk to sink Step 3: Find sink Step 4: Wipe down sink Step 5: Return to kitchen
Task: Wipe down sink Step 1: Walk to bathroom Step 2: Walk to cleaning solution Step 3: Find cleaning solution Step 4: Grab cleaning solution Step 5: Find sink Step 6: Pour cleaning solution into glasses Step 7: Find faucet Step 8: Switch on faucet Step 9: Scrub sink Step 10: Wash sink
Task: Complete surveys on amazon turk Step 1: Walk to chair Step 2: Sit on chair Step 3: Find computer Step 4: Switch on computer Step 5: Turn to computer Step 6: Look at computer Step 7: Find mouse Step 8: Grab mouse Step 9: Turn to computer Step 10: Point at computer Step 11: Find check Step 12: Grab check Step 13: Read check Step 14: Put back mouse Step 15: Find keyboard Step 16: Type on keyboard Task: Complete surveys on amazon turk Step 1: Walk to home office Step 2: Walk to chair Step 3: Find chair Step 4: Sit on chair Step 5: Find computer Step 6: Switch on computer Step 7: Turn to computer Step 8: Look at computer Step 9: Find keyboard Step 10: Type on keyboard Task: Complete surveys on amazon turk Step 1: Walk to home office Step 2: Walk to desk Step 3: Find chair Step 4: Sit on chair Step 5: Find computer Step 6: Switch on computer
Task: Write book Step 1: Walk to home office Step 2: Walk to laptop Step 3: Find laptop Step 4: Grab laptop Step 5: Find electrical outlet Step 6: Plug in laptop Step 7: Walk to love seat Step 8: Sit on love seat Step 9: Switch on laptop Step 10: Find keyboard Step 11: Type on keyboard
Task: Write book Step 1: Walk to home office Step 2: Walk to chair Step 3: Find chair Step 4: Sit on chair Step 5: Find computer Step 6: Switch on computer Step 7: Turn to computer Step 8: Look at computer Step 9: Find keyboard Step 10: Type on keyboard
Task: Write book Step 1: Walk to home office Step 2: Walk to light Step 3: Find light Step 4: Switch on light Step 5: Find pen Step 6: Grab pen Step 7: Find chair Step 8: Sit on chair Step 9: Turn to novel
Task: Paint ceiling Step 1: Find drawing Step 2: Turn to drawing Step 3: Point at drawing Step 4: Watch drawing Step 5: Touch drawing Step 6: Walk to bedroom
Task: Paint ceiling Step 1: Walk to home office Step 2: Walk to chair Step 3: Find chair Step 4: Sit on chair Step 5: Find paint can Step 6: Open paint can Step 7: Pour paint into paint can Step 8: Close paint can Step 9: Find roller Step 10: Roll paint onto ceiling Task: Paint ceiling Step 1: Walk to home office Step 2: Walk to ceiling Step 3: Walk to painting Step 4: Find painting Step 5: Turn to painting Step 6: Look at painting Step 7: Touch painting Step 8: Pull painting Step 9: Push painting
Task: Change clothes Step 1: Walk to bedroom Step 2: Walk to dresser Step 3: Open dresser Step 4: Find dress Step 5: Grab dress Step 6: Put on dress Step 7: Close dresser
Task: Change clothes Step 1: Walk to closet Step 2: Open closet Step 3: Find clothes Step 4: Put on clothes Step 5: Close closet
Task: Change clothes Step 1: Walk to bedroom Step 2: Walk to dresser Step 3: Find dresser Step 4: Open dresser Step 5: Find pants Step 6: Grab pants Step 7: Put on pants Step 8: Find shirt Step 9: Grab shirt Step 10: Put on shirt Step 11: Close dresser
Task: Make bed Step 1: Walk to bedroom Step 2: Walk to bed Step 3: Find sheets Step 4: Grab sheets Step 5: Pull sheets
Task: Make bed Step 1: Walk to bedroom Step 2: Walk to bed Step 3: Find bed Step 4: Sit on bed Step 5: Find sheet Step 6: Pick up sheet Step 7: Find pillow Step 8: Pick up pillow Step 9: Find blanket Step 10: Pick up blanket
Task: Make bed Step 1: Stand up Step 2: Walk to bedroom Step 3: Walk to bed Step 4: Turn to bed Step 5: Put sheets on bed
Task: Read yourself to sleep Step 1: Walk to bedroom Step 2: Walk to novel Step 3: Find novel Step 4: Grab novel Step 5: Find bed Step 6: Sit on bed Step 7: Find bookmark Step 8: Grab bookmark Step 9: Read novel Step 10: Put bookmark on novel Step 11: Sleep Task: Read yourself to sleep Step 1: Walk to home office Step 2: Walk to chair Step 3: Find chair Step 4: Sit on chair Step 5: Find computer Step 6: Switch on computer Step 7: Turn to computer Step 8: Look at computer Step 9: Find keyboard Step 10: Type on keyboard
Task: Read yourself to sleep Step 1: Walk to bedroom Step 2: Walk to book Step 3: Sit on chair Step 4: Read check Step 5: Sleep
Task: Hang keys Step 1: Walk to bedroom Step 2: Walk to dresser Step 3: Find purse Step 4: Grab purse Step 5: Open purse Step 6: Find keys Step 7: Grab keys Step 8: Find bowl Step 9: Put keys on bowl Step 10: Close purse Step 11: Put back purse
Task: Hang keys Step 1: Walk to home office Step 2: Walk to chair Step 3: Find chair Step 4: Sit on chair Step 5: Find computer Step 6: Switch on computer Step 7: Turn to computer Step 8: Look at computer Step 9: Find keyboard Step 10: Type on keyboard
Task: Hang keys Step 1: Walk to home office Step 2: Walk to table Step 3: Find keys Step 4: Grab keys Step 5: Walk to wall Step 6: Find hanger Step 7: Put keys on hanger
Task: Eat cheese Step 1: Walk to dining room Step 2: Walk to freezer Step 3: Open freezer Step 4: Find cheese Step 5: Grab cheese Step 6: Eat cheese
Task: Eat cheese Step 1: Walk to fridge Step 2: Open fridge Step 3: Find cheese Step 4: Pick up cheese Step 5: Walk to table Step 6: Place cheese on table Step 7: Close fridge Step 8: Walk to chair Step 9: Sit on chair Step 10: Eat cheese
Task: Eat cheese Step 1: Walk to dining room Step 2: Walk to table Step 3: Find table Step 4: Turn to table Step 5: Find chair Step 6: Sit on chair Step 7: Find food Step 8: Grab food Step 9: Find plate Step 10: Put food on plate | 1. What is the main contribution of the paper regarding utilizing pre-trained large language models for embodied agents?
2. What are the strengths and weaknesses of the proposed approach, particularly in terms of its application to different environments and the gap between text instructions and actual agent actions?
3. How does the reviewer assess the correctness and executability of the generated instructions, and what are the challenges in improving their executability while preserving their correctness?
4. What are some of the technical details that the reviewer feels are missing from the paper, such as the calculation of executability and the sampling process for LMs?
5. Are there any formatting issues or incorrect references in the paper that need to be addressed? | Summary Of The Paper
Review | Summary Of The Paper
This paper takes advantage of pre-trained large language models (LLMs) and assesses whether they can be helpful for embodied agents. The authors choose the VirtualHome environment as the testbed and prompt the LLMs to decompose a high-level instruction into detailed, actionable instructions. Human evaluation is conducted to evaluate the correctness and executability of the generated instructions.
Review
Strengths:
LLMs are powerful tools and learn knowledge that we may not even know. So prompting them to utilize the learned knowledge is a promising research direction. Prompting LLMs for actionable knowledge extraction is new and interesting.
Different LLMs including GPT-3 and Codex are used and evaluated here.
The prompt and translation method is reasonable on the VirtualHome environment.
Weaknesses:
The paper's claim is to extract actionable knowledge for embodied agents, but in fact, it seems the translation process is particularly tailored for VirtualHome. I couldn't see a clear way to generalize to other embodied environments. It would be great if the authors can show its generalization to other environments such as Room2Room and ALFRED. I am not sure if the translation part is really necessary in other environments.
Moreover, it seems there is no embodied agent to execute those instructions. All experiments are conducted on the text level only, but we all know that there is a huge gap between text instructions and actual actions taken by the agent. So it would be much more convincing if the authors can show an embodied agent can actually follow the generated instructions and complete tasks in the environment.
There seems to be a tradeoff between executability and correctness for the current approach. The correctness drops significantly from 56% to 34% for Translated Codex 12B. Why are some correctness numbers missing in Table 1? What is the correctness of Translated GPT-3 175B? The Correctness metric is also missing in Table 2 and Table 3. Why?
It is actually pretty easy to improve the executability by finding the most similar actions from the action set of the environment. The real challenge is to make the actions more executable while preserving their correctness, which, however, isn't achieved in this work and is quite a pity. Without considering correctness, executability does not really matter much; as an extreme example, a random agent may achieve high executability but low correctness.
Many technical details are missing in the paper, for example,
It is unclear how Executability is calculated. There is only a high-level language description in the paper. Can you elaborate on it?
It is unclear how the sampling for LMs is done. Do you sample multiple ones and then rely on human evaluation to select the best one? This doesn't sound right or scalable.
Section 2.1: Appendix ?? points to nowhere. Similar issues exist in many places of the paper.
"" in the paper is in the wrong format. |
ICLR | Title
Language Models as Zero-Shot Planners: Extracting Actionable Knowledge for Embodied Agents
Abstract
Can world knowledge learned by large language models (LLMs) be used to act in interactive environments? In this paper, we investigate the possibility of grounding high-level tasks, expressed in natural language (i.e. “make breakfast”), to a chosen set of actionable steps (i.e. “open fridge”). While prior work focused on learning from explicit step-by-step examples of how to act, we surprisingly find that if pre-trained LMs are large enough and prompted appropriately, they can effectively decompose high-level tasks into low-level plans without any further training. However, the plans produced naively by LLMs often cannot map precisely to admissible actions. We propose a procedure that conditions on existing demonstrations and semantically translates the plans to admissible actions. Our evaluation in the recent VirtualHome environment shows that the resulting method substantially improves executability over the LLM baseline. The conducted human evaluation reveals a trade-off between executability and correctness but shows a promising sign towards extracting actionable knowledge from language models1.
1 INTRODUCTION
Large language models (LLMs) have made impressive advances in language generation and understanding in recent years (Devlin et al., 2018; Radford et al., 2019; Raffel et al., 2019; Brown et al., 2020). See Bommasani et al. (2021) for a recent summary of their capabilities and impacts. Being trained on large corpora of human-produced language, these models are thought to contain a lot of information about the world (Roberts et al., 2020; Li et al., 2021; BIG-bench collaboration, 2021) - albeit in linguistic form.
We ask whether we can use such knowledge contained in LLMs not just for linguistic tasks, but to make goal-driven decisions that can be enacted in interactive, embodied environments. But we are not simply interested in whether we can train models on a dataset of demonstrations collected for some specific environment – we are instead interested in whether LLMs already contain information necessary to accomplish goals without any additional training.
More specifically, we ask whether world knowledge about how to perform high-level tasks (such as “make breakfast”) can be expanded to a series of groundable actions (such as “open fridge”, “grab milk”, “close fridge”, etc) that can be executed in the environment. For our investigation, we use the recently proposed VirtualHome environment (Puig et al., 2018). It can simulate a large variety of realistic human activities in a household environment and supports the ability to perform them via embodied actions defined with a verb-object syntax. However, due to the open-ended nature of the tasks, it is difficult to autonomously evaluate their success. We rely on human evaluation (conducted on Mechanical Turk) to decide whether sequences of actions meaningfully accomplish posed tasks.
We find that large GPT-3 (Brown et al., 2020) and Codex (Chen et al., 2021) models, when prompted with a single fixed example of a task description and its associated sequence of actions, can produce very plausible action plans for the task we’re interested in. Such completions reflect the information already stored in the model – no model fine-tuning is involved. Additionally, we only observe this effect in the larger models. Unfortunately, despite their semantic correctness, the produced action plans are often not executable in the environment. Produced actions may not map precisely to admissible actions, or may contain various linguistic ambiguities.
1Results and videos at https://sites.google.com/view/language-model-as-planner
We propose several tools to improve executability of the model’s outputs. First, we enumerate all admissible action phrases and map the model’s output action to the most semantically-similar admissible action (we use a similarity measure between sentence embeddings produced by a RoBERTa model (Liu et al., 2019) in this work, but other choices are possible). Second, we use the model to autoregressively generate actions in a plan by conditioning on past actions that have been made admissible via the technique above. Such on-the-fly correction can keep generation anchored to admissible actions. Third, we provide weak supervision to the model by prompting the model with a known task example similar to the query task. This is somewhat reminiscent of prompt tuning approaches, but does not require access to gradients or internals of the model.
Using the above tools to bias model generation, we find that we improve executability of instructions from 18% to 79% (see Figure 1) without any invasive modifications to model parameters or any extra gradient or internal information beyond what is returned from the model’s forward pass. This is advantageous because it does not require any modifications to the model training procedure and can fit within existing model serving pipelines. However, we do find there to be a significant drop in correctness of the instruction sequences generated with the above tools (as judged by humans), indicating a promising step, but requiring more research on the topic.
To summarize, our paper’s contributions are as follows:
• We show that without any training, large language models can be prompted to generate plausible goal-driven action plans, but such plans are frequently not executable in interactive environments.
• We propose several tools to improve executability of the model generation without invasive probing or modifications to the model.
• We conduct a human evaluation of multiple techniques and models and report on the tradeoffs between executability and semantic correctness.
2 EVALUATION FRAMEWORK
Simulating open-ended tasks that resemble naturalistic human activities requires an environment to support a rich set of diverse interactions, rendering most existing embodied environments unsuitable for our investigation. One exception is VirtualHome (Puig et al., 2018), which models human activities in a typical household. Therefore, we only provide evaluation in this environment. To further measure correctness given open-ended tasks, we conduct a human evaluation. We note that since no further training is involved throughout our investigations, the observations and findings presented in this paper should also translate to similar embodied environments.
2.1 EVALUATED ENVIRONMENT: VIRTUALHOME
Preliminaries In VirtualHome, activities are expressed as programs. Each program consists of a sequence of steps, where each step is written as: [action] 〈arg1〉(id1) ... 〈argn〉(idn). Each action refers to atomic actions such as "walk", "open", and "put". A total of 45 atomic actions are supported by VirtualHome. Different actions take in different numbers of args necessary for specifying an interaction. Associated with each arg is a unique id specifying the corresponding node in the environment graph, in case multiple instances of the same object class are present in the graph. For the sake of simplicity, we omit the id in the remaining discussions of this paper and allow automatic assignment by the environment. An example program is shown in Appendix 8.1.
Evaluated Tasks We use the knowledge base collected by VirtualHome for evaluation. The knowledge base contains household activities crowd-sourced from Amazon Mechanical Turk (MTurk). The MTurk workers were asked to provide natural language descriptions of daily household activities and all actionable steps necessary for completing the activities. The descriptions are both given as high-level task descriptions and step-by-step instructions. We omit the use of step-by-step instructions in this work as we desire direct extraction of executable programs from only task descriptions. For evaluations, we randomly sample a subset of 88 high-level tasks, each having one or more annotated ground-truth programs. The remaining 204 tasks are used as the demonstration set, from which we are allowed to select example(s) for prompting language models. Note that no training or fine-tuning is performed using these tasks and their annotations. More details of the evaluated tasks can be found in Appendix 8.6.
2.2 METRICS
A program that commands the agent to wander around in a household environment is highly executable but may not complete the desired task. On the other hand, a program composed of step instructions from knowledge bases can likely complete the task but cannot be executed. The reason is that free-form instructions can be ambiguous and may lack necessary common-sense actions. To this end, we consider two axes for evaluation: executability and correctness.
Executability Executability measures whether an action plan can be correctly parsed and satisfies the common-sense constraints of the environment. To be correctly parsed, an action plan must be syntactically correct and contain only allowed actions and recognizable objects. To satisfy the common-sense constraints, each action step must not violate the set of its pre-conditions (e.g. the agent cannot grab milk from the fridge before opening it) and post-conditions (e.g. the state of the fridge changes from "closed" to "open" after the agent opens it). We report the average executability across all 88 tasks and across all 7 VirtualHome scenes.
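For concreteness, the following is a minimal illustrative sketch of such a check; the action set, the pre/post-condition table, and the state representation here are hypothetical stand-ins rather than the actual VirtualHome checker.

```python
# Illustrative executability check (hypothetical stand-in for the VirtualHome checker).
# A program is executable only if every step parses to an allowed action and
# respects simple pre/post-conditions tracked as per-object states.

ALLOWED_ACTIONS = {"walk", "open", "close", "grab", "switchon"}

# Hypothetical table: action -> (required object state, resulting object state)
CONDITIONS = {
    "open": ("closed", "open"),
    "close": ("open", "closed"),
    "switchon": ("off", "on"),
}

def is_executable(program, initial_states):
    """program: list of (action, object) pairs; initial_states: dict object -> state."""
    states = dict(initial_states)
    for action, obj in program:
        if action not in ALLOWED_ACTIONS:               # unparsable or disallowed action
            return False
        pre, post = CONDITIONS.get(action, (None, None))
        if pre is not None and states.get(obj) != pre:   # pre-condition violated
            return False
        if post is not None:
            states[obj] = post                           # apply post-condition
    return True

print(is_executable([("walk", "fridge"), ("open", "fridge")], {"fridge": "closed"}))  # True
print(is_executable([("open", "fridge"), ("open", "fridge")], {"fridge": "closed"}))  # False
```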
Correctness Unlike most embodied environments where the completion of a task can be easily judged, the ambiguous and multimodal nature of natural language task specification makes it impractical to obtain a gold-standard measurement of correctness. One approach could be measuring similarity of the final environment state produced by executing predicted and ground-truth programs, but VirtualHome initializes an environment differently based on the to-be-executed program, making comparisons difficult if measured in such a way. Therefore, we conduct human evaluation for the highlighted methods. More details of the human evaluations can be found in Appendix 8.5. For the remaining methods and ablation studies, we rely on a match-based metric that measures how similar a generated program is to human annotations. Specifically, we follow Puig et al. (2018) and calculate the longest common subsequence (LCS) between two programs, normalized by the maximum length of the two. In the presence of multiple ground-truth programs for a single task, we take the maximum LCS across the ground-truth programs. However, we note that the majority of the tasks only have one ground-truth annotation, while there are often many plausible ways to complete a certain task, making this metric imperfect at evaluating program correctness2. Although correlation between the two is shown by Puig et al. (2018), we consider it only as a proxy metric in replacement of unscalable human evaluation.
2Although LCS has a mathematical range of [0, 1], we measure the LCS between different ground-truth programs for the same task and find an empirical maximum of 0.489.
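As a concrete reference, the normalized LCS metric over step sequences can be computed as in the sketch below; the exact step tokenization used by Puig et al. (2018) may differ, so this is an illustrative implementation rather than their released code.

```python
def normalized_lcs(pred_steps, gt_steps):
    """Length of the longest common subsequence between two step lists,
    normalized by the length of the longer program (value in [0, 1])."""
    n, m = len(pred_steps), len(gt_steps)
    if max(n, m) == 0:
        return 0.0
    # standard dynamic program for LCS length
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if pred_steps[i - 1] == gt_steps[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[n][m] / max(n, m)

def best_lcs(pred_steps, gt_programs):
    # with multiple ground-truth programs for a task, take the maximum LCS
    return max(normalized_lcs(pred_steps, gt) for gt in gt_programs)
```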
[Figure 2 consists of three panels: "Zero-Shot Planning via Causal LLM", "Translation to Admissible Action", and "Step-By-Step Autoregressive Generation", illustrating prompts to a frozen pre-trained causal LLM and a frozen pre-trained masked LLM on the example tasks "Shave" and "Apply lotion".]
Figure 2: We investigate the possibility of extracting actionable knowledge from pre-trained language models without any additional training. We first show surprising finding that large language models (LLMs) can decompose high-level tasks into sensible low-level action plans (left). To make the action plans executable, we propose to translate each step into admissible action via another LLM (middle). The translated action is appended to the original prompt used for generating the remaining steps (right).
3 METHOD
In this section, we investigate the possibility of extracting actionable knowledge from pre-trained language models without further training. We first give an overview of the common approach to query large language models (LLMs) and how it may be used for embodied agents. Then we describe an inference-time procedure that addresses several deficiencies of the LLM baseline and offers better executability in embodied environments. We break down the proposed procedure into three individual components, each discussed in Sec 3.2, 3.3, and 3.4.
Since LMs excel at dealing with natural language text instead of the specific format required by VirtualHome as described in Section 2.1, we only expose natural language text to LMs. To do this, we define a mapping for each atomic action that parses a natural language phrase to the required format. For instance, "Walk to living room" is converted to "[Walk] 〈living room〉(1)". When an LM output cannot be parsed to any of the allowed actions, the entire program is considered syntactically incorrect and thus not executable.
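A minimal sketch of such a mapping might look as follows; the regular-expression templates and the exact output formatting are illustrative (the paper defines one mapping per atomic action, 45 in total).

```python
import re

# Illustrative templates for a few atomic actions; the full mapping would cover all 45.
TEMPLATES = [
    (re.compile(r"^walk to (.+)$", re.I), "[Walk] <{0}> (1)"),
    (re.compile(r"^switch on (.+)$", re.I), "[SwitchOn] <{0}> (1)"),
    (re.compile(r"^grab (.+)$", re.I), "[Grab] <{0}> (1)"),
]

def to_program_step(phrase):
    """Parse one natural-language step into VirtualHome-style syntax, or None if unparsable."""
    for pattern, template in TEMPLATES:
        match = pattern.match(phrase.strip())
        if match:
            return template.format(match.group(1).lower())
    return None  # an unparsable step renders the whole program not executable

print(to_program_step("Walk to living room"))  # [Walk] <living room> (1)
```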
3.1 PROMPTING
Previous works have shown that large language models pre-trained on a colossal amount of data contain useful world knowledge that can be probed to perform various down-stream tasks (Radford et al., 2019; Brown et al., 2020). Notably, autoregressive LLMs can even perform in-context learning, an approach to solve tasks using only contextual information without gradient updates (Brown et al., 2020). Contextual information is given as part of the input prompt and LMs are asked to complete the remaining text. It often consists of natural language instructions and/or a number of examples containing the desired input/output pairs.
We adopt the same approach to query LLMs to generate action plans for high-level tasks. Specifically, we prepend one example task description sentence and its annotated action plan from the demonstration set to the query task description, as shown in Fig 2. To obtain text completion results, we sample from the autoregressive LLM using temperature sampling and nucleus sampling (Holtzman et al., 2019). We refer to this LM as the Planning LM and the approach using this LM for plan generation as Vanilla [LM], where [LM] is replaced by the specific language model, such as GPT-3 or Codex.
To further improve the quality of the generated output, we follow Chen et al. (2021), who use LMs for program synthesis, and sample multiple outputs for each task. However, unlike prior works in program synthesis that choose the sample with the highest unit test pass rate, we only consider the setting where one sample is allowed to be evaluated for each task. This is because repetitive trial and error can be dangerous in the real world, and executing many action plans is equivalent to probing the environment for privileged information, which is often considered not viable.
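A minimal sketch of this querying procedure is shown below, using an open-source autoregressive LM (GPT-2 as a stand-in for the larger Planning LMs); the prompt, sampling parameters, and sample count are illustrative.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# one demonstration task and its annotated plan, followed by the query task
prompt = (
    "Task: Relax on sofa\n"
    "Step 1: Walk to home office\nStep 2: Walk to couch\nStep 3: Find couch\n"
    "Step 4: Sit on couch\nStep 5: Find pillow\nStep 6: Lie on couch\n\n"
    "Task: Apply lotion\nStep 1:"
)

inputs = tokenizer(prompt, return_tensors="pt")
# temperature sampling + nucleus sampling; draw several candidates per query
outputs = model.generate(
    **inputs,
    do_sample=True,
    temperature=0.3,
    top_p=0.9,
    num_return_sequences=10,
    max_new_tokens=96,
    pad_token_id=tokenizer.eos_token_id,
)
prompt_len = inputs["input_ids"].shape[1]
completions = [tokenizer.decode(o[prompt_len:], skip_special_tokens=True) for o in outputs]
```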
3.2 ROBUST PARSING BY SEMANTIC TRANSLATION
One issue arises when naively following the above approach to generate action plans for high-level tasks: the action plan is often not executable because LMs are allowed to generate free-form text.
Therefore, most of the time the output cannot be mapped to one unambiguous actionable step. Many reasons can cause such failures: 1) the output does not follow pre-defined mappings of any atomic action (i.e. "I first walk to the bedroom" does not follow "Walk to 〈PLACE〉"), 2) the output may refer to atomic actions and objects using words unrecognizable by the environment (i.e. "Clean the dirty dishes in the sink" where "clean" and "dirty dishes in the sink" cannot be mapped to a precise action and object), 3) the output contains lexically ambiguous words (i.e. "Open TV" should instead be "Switch on TV"), or 4) the output may use a disallowed action (i.e. "Microwave the cup").
Instead of developing a set of rules to transform the free-form text into admissible action steps, we propose to again leverage world knowledge learned by large language models to semantically translate the action. For each step in the action plan â, we aim to find the most similar admissible environment action ae as measured by cosine similarity:
argmax_{a_e} [ f(â) · f(a_e) ] / [ ‖f(â)‖ ‖f(a_e)‖ ], where f is an embedding function.
To embed the output text and environment actions, we use a BERT-style LM (Devlin et al., 2018; Liu et al., 2019) trained with the Sentence-BERT (Reimers & Gurevych, 2019) objective because of its suitability for sentence modeling. The sentence embedding is obtained by mean-pooling the last-layer hidden states across all tokens. We refer to this LM as the Translation LM. Note that this is a different LM from the GPT-style Planning LM discussed in the text so far. Using a single LM for both purposes would also be possible and likely more efficient, but we leave such investigation to future works. While the set of actions in our environment is discrete and possible to exhaustively enumerate, sampling or projection can be employed in larger discrete or continuous action spaces.
Since the Translation LM guarantees that the parsed action is allowed by the environment, we can trade off the semantic soundness of an LM step against how likely it is to map to an admissible action in the environment. This can be achieved by a simple modification to the scheme we use to choose the best sample from the LM output. Instead of only using the mean token log probability as a ranking metric, we choose the sample with the highest score calculated as s = C + β · logprob, where C is the cosine similarity to the closest allowed action and β is a weighting parameter.
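A sketch of this translation and scoring step using the Sentence-Transformers library is given below; the checkpoint name is an illustrative stand-in for the Sentence-RoBERTa model used in the paper, and the tiny action list and β value are illustrative as well.

```python
from sentence_transformers import SentenceTransformer, util

# Translation LM; this checkpoint is an illustrative stand-in for Sentence-RoBERTa-355M.
translation_lm = SentenceTransformer("stsb-roberta-large")

# The paper enumerates all 47522 admissible action phrases; only a tiny subset is shown here.
admissible_actions = ["walk to living room", "switch on computer", "grab mouse", "sit on chair"]
action_embeddings = translation_lm.encode(admissible_actions, convert_to_tensor=True)  # pre-computed once

def translate(step_text, mean_logprob, beta=0.3):
    """Map a free-form step to the closest admissible action and score the sample."""
    query = translation_lm.encode(step_text, convert_to_tensor=True)
    cos = util.cos_sim(query, action_embeddings)[0]   # cosine similarity to every admissible action
    best = int(cos.argmax())
    score = float(cos[best]) + beta * mean_logprob    # s = C + beta * logprob
    return admissible_actions[best], float(cos[best]), score
```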
3.3 AUTOREGRESSIVE TRAJECTORY CORRECTION
Translating each step of the program after the entire program has been synthesized is analogous to open-loop planning and is subject to compounding errors. In practice, LLMs might output compounded instructions for a single step, even though such a step cannot be completed using one admissible action in the environment. To this end, we can instead interleave plan generation and action translation to allow for automatic trajectory correction. At each step, we first query the Planning LM to generate k samples for a single action. Then we calculate the score s for each sample using the Translation LM and append the translated action to the unfinished text completion. This way all subsequent steps will be conditioned on admissible actions instead of free-form text output generated by the Planning LM. Furthermore, we can use the Translation LM to detect out-of-distribution actions, those outside the capabilities of a robot, and terminate a program early instead of mapping to a faulty action. This can be easily implemented by setting a threshold ε such that if the maximum cosine similarity at step t satisfies C^t_max < ε, the program is terminated early. We empirically show this leads to better executability while maintaining similar correctness of the generated action plans.
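Putting these pieces together, the interleaved generation-and-translation loop can be sketched as follows; the helper names sample_next_step and translate are placeholders for the Planning-LM query of Sec 3.1 and the translation step of Sec 3.2, and the ε and step-limit values are illustrative.

```python
def generate_plan(task, sample_next_step, translate, epsilon=0.4, max_steps=20):
    """Interleave Planning-LM generation with action translation (illustrative sketch).

    sample_next_step(prompt, k) -> list of (step_text, mean_logprob) candidates
    translate(step_text, mean_logprob) -> (admissible_action, cosine_sim, score)
    """
    prompt = f"Task: {task}\n"
    plan = []
    for t in range(1, max_steps + 1):
        candidates = sample_next_step(prompt + f"Step {t}:", k=10)
        scored = [translate(text, lp) for text, lp in candidates]
        action, cos_best, _ = max(scored, key=lambda item: item[2])
        if cos_best < epsilon:                 # out-of-distribution step: terminate early
            break
        plan.append(action)
        prompt += f"Step {t}: {action}\n"      # condition future steps on the admissible action
    return plan
```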
3.4 DYNAMIC EXAMPLE SELECTION FOR IMPROVED KNOWLEDGE EXTRACTION
So far in the text, we always give the same example in the prompt for all evaluated high-level tasks. However, consider the task of "ordering pizza". Prompting LLMs with this task may give the assumption that the agent is initialized in front of a computer, and the LLMs may guide the agent to search for a pizza store and click "checkout my cart". Although these are reasonable and feasible in the real world, such an assumption cannot always be made as these interactions may not be supported in simulated environments like VirtualHome. In fact, the closest series of actions that human experts give may be "walking to a computer", "switching on the computer", and "typing the keyboard". Without being fine-tuned on these data, LLMs would often fail at these tasks. To provide weak supervision at inference time, we propose to use the Translation LM to select the most similar task from the demonstration set to be used as the example in the prompt. Specifically, we choose the task
whose high-level description most closely matches the query task as measured by cosine similarity. This allows the Planning LM to reason about how to perform a similar task given a human-annotated example. In our evaluations, we make sure the demonstration set does not overlap with our test queries.
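This example-selection step can reuse the same Translation LM; a minimal sketch is given below, where the demonstration-set format (parallel lists of task names and programs) is an assumed convention.

```python
from sentence_transformers import util

def select_example(query_task, demo_tasks, demo_programs, translation_lm):
    """Pick the demonstration whose task description is most similar to the query task."""
    embeddings = translation_lm.encode([query_task] + demo_tasks, convert_to_tensor=True)
    sims = util.cos_sim(embeddings[0], embeddings[1:])[0]   # similarity of query to each demo task
    best = int(sims.argmax())
    return demo_tasks[best], demo_programs[best]            # used as the in-context example
```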
Combining the various improvements discussed above, we refer to the final approach as Translated [LM], where [LM] is replaced by the specific language model used, such as GPT-3 or Codex.
4 RESULTS
In this section, we first show that language models can generate sensible action plans for many high-level tasks, even without any additional training. Then we highlight their inadequacy when naively applied to embodied environments and demonstrate how this can be improved by again leveraging world knowledge learned by LLMs. Visualizations of generated programs are shown in Fig 3.
Sampling from LMs Pre-trained LMs are sensitive to sampling parameters and the specific example given in the prompt. For all evaluated methods, we perform hyper-parameter search over various sampling parameters, and for methods using a fixed prompt example, we report metrics averaged across three randomly chosen examples. To select the best run for each method, we rank the runs by LCS + executability, each normalized by human-expert scores3. Further details can be found in Appendix 8.3.
Model Choices We find empirically that the combination of Codex-12B and Sentence-RoBERTa-355M works well in our setting. Codex and GPT-3 are accessed using the OpenAI API. The remaining models are accessed through open-source packages, Hugging Face Transformers (Wolf et al., 2019) and SentenceTransformers (Reimers & Gurevych, 2019), without additional modifications.
4.1 DO LLMS CONTAIN ACTIONABLE KNOWLEDGE FOR HIGH-LEVEL TASKS?
We first investigate whether LLMs can generate sensible action plans expressed in free-form language. We use the approach described in Section 3.1 to query pre-trained LLMs. To evaluate the correctness of generated action plans, we conduct human evaluations. For each model, we ask 10 human annotators to determine – by answering “Yes” or “No” – whether each task can be completed using provided action steps. To provide a reference of how humans might rate the action plans provided by other humans, we also ask annotators to rate the ground-truth action plans provided in the VirtualHome dataset for the same set of tasks. In contrast to the free-form text output by LLMs, the ground-truth action plans from VirtualHome are generated via a graphical programming interface that enforces strict syntax, although annotators were allowed to compose necessary actions.
We show the human evaluation results in Fig 1, where the y-axis shows correctness averaged across all tasks and all annotators. Surprisingly, when LLMs are large enough and without imposed syntactic constraints, they can generate highly realistic action plans whose correctness – as deemed by human annotators – even surpasses human-labeled ground-truth. Yet another interesting finding is that Codex outperforms GPT-3 significantly under the same number of model parameters. One hypothesis could be that by fine-tuning on structured data (docstrings and code), Codex specializes in decomposing a high-level objective into a number of basic operations, even with those operations described in natural language. We also observe some level of correctness for smaller models such as GPT-2. However, inspection of their produced output indicates that they often generate significantly shorter plans by ignoring common-sense actions or by simply rephrasing the given task (e.g. the task "Go to sleep" produces only a single step "Go to bed"). These failure modes sometimes mislead human annotators to mark them correct, as the annotators may ignore common-sense actions in their judgment as well, resulting in a higher correctness rate than the quality of the output shows.
3See footnote 2.
4.2 HOW EXECUTABLE ARE THE LLM ACTION PLANS?
We analyze the executability of LLM plans by evaluating them in all 7 household scenes in VirtualHome. As shown in Table 1, we find action plans generated naively by LLMs are generally not very executable. Although smaller models seem to have higher executability, we find that the majority of these executable plans are produced by ignoring the queried task and repeating the given GT example of a different task. This is validated by the fact that smaller models have lower LCS than larger models despite having higher executability, showing that this failure mode is prevalent among smaller models. In contrast, larger models do not suffer severely from this failure mode. Yet as a result of being more expressive, their generated programs are substantially less executable.
4.3 CAN LLM ACTION PLANS BE MADE EXECUTABLE BY ACTION TRANSLATION?
In this section, we evaluate the effectiveness of our proposed procedure of action translation. We first create a bank of all 47522 allowed action steps in the environment, including all possible combinations of atomic actions and allowed arguments/objects. Then we use an off-the-shelf Sentence-RoBERTa (Liu et al., 2019; Reimers & Gurevych, 2019) as the Translation LM to create embeddings for actions and output text. For better computational efficiency, we pre-compute the embeddings for all allowed actions, leaving minor computation overhead for our procedure over the baseline methods at inference time. As shown in Table 1, the executability of generated programs is significantly improved. Furthermore, we also observe improved LCS because the translated action steps precisely follow the program syntax and thus are more similar to ground-truth annotations. One sample output is shown in Fig 1 and a larger random subset of generated samples can be found in Appendix 8.7.
To validate their correctness, we again perform human studies using the same procedure as in Sec 4.1. Results are shown in Table 1. We find that despite being more similar to GT, the programs are deemed less correct by humans. By examining the generated output, we observe two main sources of errors. First, we find the Translation LM is poor at mapping compounded instructions to a succinct admissible action. This is partly because the Translation LM is trained on a much smaller dataset and contains a much smaller number of parameters, so we expect further improvement from using a larger pre-trained model for translation. The second source of error comes from the imperfect expressivity of the environment; we find that for many tasks we evaluate, certain necessary actions or objects are not implemented. This is also reflected by our human evaluation results of the GT programs, as only half of the programs are considered complete by the human annotators.
5 ANALYSIS AND DISCUSSIONS
5.1 ABLATION OF DESIGN DECISIONS
Methods | Executability | LCS
Translated Codex 12B | 78.57% | 24.72%
- w/o Action Translation | 31.49% | 22.53%
- w/o Dynamic Example | 50.86% | 22.84%
- w/o Iterative | 55.19% | 24.43%
Table 2: Ablation of three proposed techniques.
We perform ablation studies to show the effectiveness and necessity of the three components of our proposed procedure, each described in Sec 3.2, 3.3, and 3.4. As shown in Table 2, leaving out any of the three components would
lead to decreased performance in both executability and LCS. Notably, not doing action translation leads to the most significant executability drop, showing the importance of action translation in extracting executable action plans from LLMs.
5.2 CAN LLMS GENERATE ACTIONABLE PROGRAMS BY FOLLOWING DETAILED INSTRUCTIONS?
Prior works often focus on translating step-by-step instructions into executable programs. We evaluate LLMs under this setting using the prompt format shown in Appendix 8.2. Although this setting is easier as it does not require rich actionable knowledge, detailed instructions can help resolve much of the ambiguity of exactly how to perform a high-level task when multiple solutions are possible. In this setting, Translated Codex 12B achieves executability of 78.57% and LCS of 32.87%, where LCS sees a considerable bump from the setting without detailed instructions. Surprisingly, the LCS result is very close to that of a supervised LSTM (Hochreiter & Schmidhuber, 1997) baseline from VirtualHome trained on human-annotated data, which is at 34.00%. Note that since the code to train the baseline and the specific train/test split are not publicly released, we only show results reported in Puig et al. (2018) as a reference. We also cannot compare executability as it is not reported.
5.3 IS ACTION TRANSLATION NECESSARY FOR ACTIONABLE KNOWLEDGE GROUNDING?
The investigations of this paper are two-fold: 1) Is actionable knowledge present in LLMs? 2) Can we ground this actionable knowledge in interactive environments? In this section, we focus our attention on the second question by conditioning on the assumption that the first question is true. To do this, since successful execution of correct action plans directly measures grounding, we select only the correct plans generated by LLMs and measure how executable they are. We deem an action plan to be correct if 70% or more of the human annotators decide it is correct.
As shown by Table 3, when an LM is not large enough (e.g. GPT-2), not only does it contain little actionable knowledge, but this knowledge also cannot be grounded at all. GPT-3 and Codex, on the other hand, can generate highly correct action plans in free-form language. However, they do not have the capability to ground their actionable knowledge in interactive environments. More interestingly, comparing GPT-3 at 12B parameters and at 175B parameters, the ratio of executable plans does not improve with the parameter count. This shows that simply training larger models does not necessarily lead to better knowledge grounding. In the meantime, action translation offers a promising way towards grounding actionable knowledge by producing highly executable plans. However, we again note that it comes at a trade-off of producing less correct plans as compared to its vanilla counterpart, and we hope to see future endeavors to bridge the gap.
6 RELATED WORKS
Large-scale natural language modeling has witnessed rapid advances since the inception of the Transformer architecture (Vaswani et al., 2017). It has been shown by recent works that large language models (LLMs) pre-trained on large unstructured text corpora not only can perform strongly on various down-stream NLP tasks (Devlin et al., 2018; Radford et al., 2019; Raffel et al., 2019; Brown et al., 2020) but also can internalize an implicit knowledge base containing rich information about the world (Petroni et al., 2019; Jiang et al., 2020; Davison et al., 2019; Talmor et al., 2020; Roberts et al., 2020). Furthermore, the learned representations can be used to model relations of entities (Li et al., 2021), retrieve matching visual features (Ilharco et al., 2020), and even serve as valuable priors when applied to diverse tasks from different modalities (Lu et al., 2021; Tsimpoukelli et al., 2021). Compared to prior works in knowledge extraction that extract single-step factual answers memorized by the models (e.g. "Dante was born in [PLACE]"), we aim to extract sequential action plans to complete an open-ended human activity (e.g. "make breakfast"). We further require these plans to only contain allowed actions and satisfy the pre/post-conditions of actions in order to be executed by an embodied agent.
At the same time, there has also been growing interest and development in grounding language in embodied environments. A series of prior works have investigated the possibility of parsing language instructions into formal logic to resolve various linguistic ambiguities for embodied agents (Artzi & Zettlemoyer, 2013; Misra et al., 2015; Tenorth et al., 2010). However, they often scale poorly to complex tasks and environments. Recently, more research efforts have been put into creating better and more realistic environments with the goal of furthering advances in this area (Puig et al., 2018; Shridhar et al., 2020a;b; Kolve et al., 2017; Savva et al., 2019). At the same time, by leveraging the better representation power of neural architectures, a number of works have looked into creating instruction-following agents that can perform manipulation (Lynch & Sermanet, 2020), navigation (Majumdar et al., 2020), or both (Suglia et al., 2021; Hill et al., 2020).
Notably, most of these prior works do not leverage full-blown pre-trained LLMs (Suglia et al., 2021) or do not scale to complex human activities (Hill et al., 2020; Lynch & Sermanet, 2020). Perhaps more importantly, few works have evaluated LLMs in an embodiment setting that realizes the full potential of the world knowledge these models contain: the tasks evaluated are often "pick", "grab", "open", etc., which do not resemble the highly diverse activities that humans perform in daily lives. The development of the VirtualHome environment (Puig et al., 2018) enables such a possibility. However, relevant works (Puig et al., 2020; Liao et al., 2019) rely on human-annotated data and perform supervised training from scratch. Due to the lack of rich world knowledge, these models can only generate action plans given step-by-step instructions of how to act or video demonstrations. In this work, we take a step further by conditioning only on the high-level descriptions and by extracting executable action plans from LLMs without any additional training.
7 LIMITATIONS AND CONCLUSION
There are several notable limitations of this work. First, although our approach presents a viable way to ground world knowledge in embodied environments, it is still a trade-off rather than one best solution, since we observe a considerable drop in correctness. Second, we focus on high-level to mid-level grounding, assuming there is a controller that can execute mid-level tasks (such as "grab cup"). Our work does not investigate the usefulness of LLMs for low-level sensorimotor behavior grounding. The third limitation is that we do not incorporate observation context or feedback into our models. To some extent, we approach LLMs in the same way as how VirtualHome asks human annotators to give action plans for a given human activity by imagination, in which case the human-generated action plans also do not incorporate observation context. However, we do see incorporating observation context for complex activities as an exciting future direction.
8 APPENDIX
8.1 EXAMPLE PROGRAM IN VIRTUALHOME
8.2 EXAMPLE PROMPT CONTAINING STEP-BY-STEP INSTRUCTIONS
8.3 HYPERPARAMETER SEARCH
For each evaluated method, we perform grid search over the following hyperparameters:
Name | Description | Search Values
epsilon (ε) | OOD cutoff threshold used in iterative action translation | {0, 0.4, 0.8}
temperature | sampling parameter adjusting relative probabilities across tokens | {0.1, 0.3, 0.6}
k | number of samples generated when querying LMs each time | {1, 10}
frequency penalty | OpenAI API specific; penalize new tokens based on their existing frequency in the text so far | {0.1, 0.3, 0.6, 0.9}
presence penalty | OpenAI API specific; penalize new tokens based on whether they appear in the text so far | {0.3, 0.5, 0.8}
repetition penalty | Hugging Face Transformers specific; penalize new tokens based on whether they are repeating existing text | {1.0, 1.2, 1.5, 1.8}
For methods that use fixed example across evaluated tasks, we search over the following three randomly chosen examples:
Example 1: Task: Use computer. Step 1: Walk to home office Step 2: Walk to chair Step 3: Find chair Step 4: Sit on chair Step 5: Find computer Step 6: Switch on computer Step 7: Turn to computer Step 8: Look at computer Step 9: Find keyboard Step 10: Type on keyboard
Example 2: Task: Relax on sofa. Step 1: Walk to home office Step 2: Walk to couch Step 3: Find couch Step 4: Sit on couch Step 5: Find pillow Step 6: Lie on couch
Example 3: Task: Read book. Step 1: Walk to home office Step 2: Walk to novel Step 3: Find novel Step 4: Grab novel Step 5: Find chair Step 6: Sit on chair Step 7: Read novel
8.4 ANALYSIS OF PROGRAM LENGTH
Shorter programs have a natural advantage of being more executable. Consider a task "wash hands" and a corresponding program that only commands an agent to "go to bathroom" without additional steps. The program is obviously incorrect yet trivially executable. To validate that our approach does not simply generate very short programs, we calculate the average program length across the 88 evaluated tasks. Results are shown in Table 6. In addition to the failure mode discussed in Section 4.2 that leads to incorrect yet executable programs, smaller LMs such as GPT-2 also generate programs significantly shorter than larger models, making them more executable. In contrast, larger models like Codex-12B generate more expressive programs of high correctness, but they often suffer from executability. We show action translation can lead to the benefits of both worlds, generating programs that are highly executable while maintaining similar expressiveness in terms of program length.
8.5 DETAILS OF HUMAN EVALUATIONS
Human evaluations are conducted on Amazon Mechanical Turk. For each method, we generate action plans for all 88 high-level tasks. To account for expressivity of the VirtualHome environment (Puig et al., 2018), we further include ground-truth action plans from the VirtualHome dataset in our human evaluations. The human evaluations are conducted in the form of questionnaires containing all action plans with unknown corresponding methods. The questionnaire contains the following instructions at the top:
For every question below, determine whether the task can be completed in any reasonable scenario using the provided steps. In other words, can the task be decomposed into these steps? Note that simply re-stating the task does not mean completing it.
Human annotators are required to answer all the questions in the questionnaire, where each question is an action plan generated by a method unknown to the annotator. The order of the questions is randomly permuted before presented to each annotator. For each question, the annotators need to answer either “Yes” or “No” indicating if they believe the action plan completes the task. For each method, we report correctness percentage averaged across 10 participated human annotators.
8.6 DETAILS OF VIRTUALHOME TASKS
VirtualHome ActivityPrograms is a knowledge base collected on Mechanical Turk by Puig et al. (2018). The knowledge base features real household activities of humans and corresponding sequences of basic actions for robots to perform these activities. Each entry contains a high-level task name (i.e. “Watch TV”), a mid-level detailed description (i.e. “Sit on my couch directly opposite my TV, switch on my TV with the remote control and watch”), and a sequence of allowed actions (a.k.a. a program) to perform the task in VirtualHome. There are 292 distinct high-level tasks and a total of 2821 different entries in the knowledge base, where each high-level task may correspond to one or several programs. The knowledge base covers a diverse set of real human activities and a wide range of different actions and objects. Puig et al. (2018) also conduct additional human evaluation on 100 randomly sampled programs and show that 64% of them are complete (i.e. contain all necessary steps). Readers are encouraged to refer to Puig et al. (2018) for more comprehensive analysis of the dataset.
Train/Test Split From all 292 distinct high-level tasks from the ActivityPrograms knowledge base, we randomly sample 88 high-level tasks for evaluation. The remaining 204 tasks are used as the demonstration set, from which we are allowed to select example(s) for prompting language models; in the case of supervised fine-tuning, they are used to fine-tune pre-trained language models.
8.7 RANDOM SAMPLES OF GT, VANILLA CODEX, AND TRANSLATED CODEX
Annotated Ground-Truth Codex Translated Codex (Ours) Task: Go to sleep Step 1: Walk to bedroom Step 2: Walk to bed Step 3: Find bed Step 4: Turn to bed Step 5: Look at bed Step 6: Sit on bed Step 7: Sleep Task: Go to sleep Step 1: Walk to bedroom Step 2: Walk to bed Step 3: Find bed Step 4: Sit on bed Step 5: Find pillow Step 6: Sleep on pillow Task: Go to sleep Step 1: Walk to bedroom Step 2: Walk to bed Step 3: Lie on bed Step 4: Sleep
Task: Browse internet Step 1: Walk to home office Step 2: Walk to chair Step 3: Find chair Step 4: Sit on chair Step 5: Find computer Step 6: Switch on computer Step 7: Turn to computer Step 8: Look at computer Step 9: Find keyboard Step 10: Type on keyboard
Task: Browse internet Step 1: Walk to home office Step 2: Walk to chair Step 3: Find chair Step 4: Sit on chair Step 5: Find computer Step 6: Switch on computer Step 7: Turn to computer Step 8: Look at computer Step 9: Find keyboard Step 10: Type on keyboard
Task: Browse internet Step 1: Walk to home office Step 2: Walk to desk Step 3: Find chair Step 4: Sit on chair Step 5: Find computer Step 6: Switch on computer Step 7: Find mouse Step 8: Grab mouse Step 9: Turn to computer Step 10: Point at computer
Task: Listen to music Step 1: Walk to bedroom Step 2: Walk to computer Step 3: Find computer Step 4: Turn to computer Step 5: Look at computer Step 6: Walk to computer Step 7: Find chair Step 8: Pull chair Step 9: Sit on chair Step 10: Find headset Step 11: Grab headset Step 12: Put on headset Step 13: Switch on computer Step 14: Find keyboard Step 15: Type on keyboard
Task: Listen to music Step 1: Walk to home office Step 2: Walk to chair Step 3: Find chair Step 4: Sit on chair Step 5: Find computer Step 6: Switch on computer Step 7: Turn to computer Step 8: Look at computer Step 9: Find keyboard Step 10: Type on keyboard
Task: Listen to music Step 1: Walk to home office Step 2: Walk to computer Step 3: Sit on chair Step 4: Switch on computer Step 5: Find music stand
Task: Brush teeth Step 1: Walk to bathroom Step 2: Walk to tooth paste Step 3: Find tooth paste Step 4: Grab tooth paste Step 5: Find toothbrush Step 6: Grab toothbrush
Task: Brush teeth Step 1: Walk to bathroom Step 2: Walk to sink Step 3: Find toothbrush Step 4: Pick up toothbrush Step 5: Put toothbrush in mouth Step 6: Move brush around mouth for two minutes Step 7: Spit out toothpaste and brush into sink Step 8: Turn on water in sink and rinse brush for one minute Step 9: Turn off water in sink and return brush to cupboard Task: Brush teeth Step 1: Walk to bathroom Step 2: Open door Step 3: Walk to sink Step 4: Put pot on sink Step 5: Put brush on toothbrush Step 6: Turn to toothpaste Step 7: Put toothpaste on toothbrush Step 8: Put teeth on toothbrush
Task: Dry soap bottles Step 1: Find towel Step 2: Find soap Step 3: Grab soap Step 4: Wipe soap Step 5: Put back soap Step 6: Find shampoo Step 7: Grab shampoo Step 8: Wipe shampoo Step 9: Put back shampoo Step 10: Find shampoo Step 11: Grab shampoo Step 12: Wipe shampoo Step 13: Put back shampoo
Task: Dry soap bottles Step 1: Walk to kitchen Step 2: Walk to sink Step 3: Find sink Step 4: Turn on water tap Step 5: Wait for water to be hot enough Step 6: Fill container with hot water Step 7: Find soap bottles Step 8: Pour soap into container with hot water Step 9: Wait for soap to dissolve in water Step 10: Turn off tap and empty container into sink Task: Dry soap bottles Step 1: Walk to bathroom Step 2: Walk to shower Step 3: Find soap
Task: Vacuum Step 1: Walk to home office Step 2: Walk to computer Step 3: Find vacuum cleaner Step 4: Switch on vacuum cleaner Step 5: Switch off vacuum cleaner Task: Vacuum Step 1: Walk to hallway Step 2: Walk to closet Step 3: Find closet Step 4: Open closet Step 5: Find vacuum cleaner Step 6: Turn on vacuum cleaner Step 7: Walk to living room Step 8: Vacuum floor in living room Task: Vacuum Step 1: Walk to home office Step 2: Walk to dresser Step 3: Open dresser Step 4: Find vacuum cleaner Step 5: Grab vacuum cleaner Step 6: Pull vacuum cleaner Step 7: Plug in vacuum cleaner Step 8: Switch on vacuum cleaner Step 9: Pull vacuum cleaner Step 10: Push vacuum cleaner
Task: Find dictionary Step 1: Walk to home office Step 2: Walk to bookshelf Step 3: Turn to bookshelf Step 4: Look at bookshelf Step 5: Turn to bookshelf Step 6: Look at bookshelf Step 7: Turn to bookshelf Step 8: Look at bookshelf Step 9: Turn to bookshelf Step 10: Look at bookshelf Step 11: Turn to bookshelf Step 12: Look at bookshelf Step 13: Turn to bookshelf Step 14: Look at bookshelf Step 15: Find novel Step 16: Grab novel
Task: Find dictionary Step 1: Walk to home office Step 2: Walk to chair Step 3: Find chair Step 4: Sit on chair Step 5: Find dictionary Step 6: Open dictionary Step 7: Look at dictionary
Task: Find dictionary Step 1: Walk to home office Step 2: Walk to book Step 3: Find textbook
Task: Wash monitor Step 1: Walk to home office Step 2: Walk to computer Step 3: Find computer Step 4: Turn to computer Step 5: Look at computer Step 6: Switch off computer Step 7: Find brush Step 8: Turn to brush Step 9: Look at brush Step 10: Grab brush Step 11: Wipe computer Step 12: Drop brush Step 13: Find napkin Step 14: Grab napkin Step 15: Wash napkin Step 16: Wipe computer
Task: Wash monitor Step 1: Walk to home office Step 2: Walk to chair Step 3: Find chair Step 4: Sit on chair Step 5: Find computer Step 6: Switch on computer Step 7: Turn to computer Step 8: Look at computer Step 9: Find keyboard Step 10: Type on keyboard
Task: Wash monitor Step 1: Walk to check Step 2: Walk to desk
Task: Read newspaper Step 1: Walk to home office Step 2: Walk to chair Step 3: Find chair Step 4: Sit on chair Step 5: Find newspaper Step 6: Grab newspaper Step 7: Open newspaper Step 8: Read newspaper
Task: Read newspaper Step 1: Walk to home office Step 2: Walk to chair Step 3: Find chair Step 4: Sit on chair Step 5: Find newspaper Step 6: Open newspaper Step 7: Read newspaper
Task: Read newspaper Step 1: Walk to home office Step 2: Walk to chair Step 3: Find chair Step 4: Sit on chair Step 5: Find newspaper Step 6: Grab newspaper Step 7: Read newspaper
Task: Apply lotion Step 1: Find bed Step 2: Turn to bed Step 3: Point at bed Step 4: Wash bed Step 5: Turn to bed Step 6: Look at bed Step 7: Push bed Step 8: Sit on bed
Task: Apply lotion Step 1: Walk to bathroom Step 2: Walk to cabinet Step 3: Open cabinet Step 4: Find lotion Step 5: Apply lotion on hands Step 6: Rub hands together for 10 seconds Step 7: Apply lotion on face Task: Apply lotion Step 1: Walk to bathroom Step 2: Turn to lotionbottle Step 3: Grab lotionbottle
Task: Draw picture Step 1: Walk to home office Step 2: Walk to chair Step 3: Find chair Step 4: Sit on chair Step 5: Find desk Step 6: Open desk Step 7: Find check Step 8: Grab check Step 9: Put check on desk Step 10: Find pencil Step 11: Grab pencil Step 12: Close desk
Task: Draw picture Step 1: Walk to home office Step 2: Walk to chair Step 3: Find chair Step 4: Sit on chair Step 5: Find computer Step 6: Switch on computer Step 7: Turn to computer Step 8: Look at computer Step 9: Find keyboard Step 10: Type on keyboard
Task: Draw picture Step 1: Walk to home office Step 2: Walk to wall Step 3: Walk to drawing Step 4: Find drawing Step 5: Turn to drawing Step 6: Look at drawing Step 7: Touch drawing Step 8: Pull drawing Step 9: Push drawing
Task: Prepare pot of boiling water Step 1: Walk to dining room Step 2: Walk to pot Step 3: Find pot Step 4: Grab pot Step 5: Walk to sink Step 6: Find faucet Step 7: Switch on faucet Step 8: Put pot on sink Step 9: Grab pot Step 10: Switch off faucet Step 11: Grab pot Step 12: Walk to oven Step 13: Put pot on oven Step 14: Switch on oven Task: Prepare pot of boiling water Step 1: Walk to kitchen Step 2: Walk to sink Step 3: Find sink Step 4: Turn on tap Step 5: Fill kettle with water Step 6: Put kettle on stove Step 7: Turn on stove Step 8: Wait for water to boil Task: Prepare pot of boiling water Step 1: Walk to kitchen Step 2: Walk to cabinet Step 3: Find pot Step 4: Grab pot Step 5: Walk to stove Step 6: Find stove Step 7: Switch on stove
Task: Look at mirror Step 1: Walk to bathroom Step 2: Walk to mirror Step 3: Find mirror Step 4: Turn to mirror Step 5: Look at mirror
Task: Look at mirror Step 1: Walk to bathroom Step 2: Walk to mirror Step 3: Look at mirror
Task: Look at mirror Step 1: Walk to bedroom Step 2: Walk to mirror Step 3: Find mirror Step 4: Turn to mirror Step 5: Look at mirror
Task: Wipe down sink Step 1: Walk to dining room Step 2: Walk to rag Step 3: Find rag Step 4: Grab rag Step 5: Walk to sink Step 6: Wipe sink Step 7: Put back rag
Task: Wipe down sink Step 1: Walk to kitchen Step 2: Walk to sink Step 3: Find sink Step 4: Wipe down sink Step 5: Return to kitchen
Task: Wipe down sink Step 1: Walk to bathroom Step 2: Walk to cleaning solution Step 3: Find cleaning solution Step 4: Grab cleaning solution Step 5: Find sink Step 6: Pour cleaning solution into glasses Step 7: Find faucet Step 8: Switch on faucet Step 9: Scrub sink Step 10: Wash sink
Task: Complete surveys on amazon turk Step 1: Walk to chair Step 2: Sit on chair Step 3: Find computer Step 4: Switch on computer Step 5: Turn to computer Step 6: Look at computer Step 7: Find mouse Step 8: Grab mouse Step 9: Turn to computer Step 10: Point at computer Step 11: Find check Step 12: Grab check Step 13: Read check Step 14: Put back mouse Step 15: Find keyboard Step 16: Type on keyboard Task: Complete surveys on amazon turk Step 1: Walk to home office Step 2: Walk to chair Step 3: Find chair Step 4: Sit on chair Step 5: Find computer Step 6: Switch on computer Step 7: Turn to computer Step 8: Look at computer Step 9: Find keyboard Step 10: Type on keyboard Task: Complete surveys on amazon turk Step 1: Walk to home office Step 2: Walk to desk Step 3: Find chair Step 4: Sit on chair Step 5: Find computer Step 6: Switch on computer
Task: Write book Step 1: Walk to home office Step 2: Walk to laptop Step 3: Find laptop Step 4: Grab laptop Step 5: Find electrical outlet Step 6: Plug in laptop Step 7: Walk to love seat Step 8: Sit on love seat Step 9: Switch on laptop Step 10: Find keyboard Step 11: Type on keyboard
Task: Write book Step 1: Walk to home office Step 2: Walk to chair Step 3: Find chair Step 4: Sit on chair Step 5: Find computer Step 6: Switch on computer Step 7: Turn to computer Step 8: Look at computer Step 9: Find keyboard Step 10: Type on keyboard
Task: Write book Step 1: Walk to home office Step 2: Walk to light Step 3: Find light Step 4: Switch on light Step 5: Find pen Step 6: Grab pen Step 7: Find chair Step 8: Sit on chair Step 9: Turn to novel
Task: Paint ceiling Step 1: Find drawing Step 2: Turn to drawing Step 3: Point at drawing Step 4: Watch drawing Step 5: Touch drawing Step 6: Walk to bedroom
Task: Paint ceiling Step 1: Walk to home office Step 2: Walk to chair Step 3: Find chair Step 4: Sit on chair Step 5: Find paint can Step 6: Open paint can Step 7: Pour paint into paint can Step 8: Close paint can Step 9: Find roller Step 10: Roll paint onto ceiling Task: Paint ceiling Step 1: Walk to home office Step 2: Walk to ceiling Step 3: Walk to painting Step 4: Find painting Step 5: Turn to painting Step 6: Look at painting Step 7: Touch painting Step 8: Pull painting Step 9: Push painting
Task: Change clothes Step 1: Walk to bedroom Step 2: Walk to dresser Step 3: Open dresser Step 4: Find dress Step 5: Grab dress Step 6: Put on dress Step 7: Close dresser
Task: Change clothes Step 1: Walk to closet Step 2: Open closet Step 3: Find clothes Step 4: Put on clothes Step 5: Close closet
Task: Change clothes Step 1: Walk to bedroom Step 2: Walk to dresser Step 3: Find dresser Step 4: Open dresser Step 5: Find pants Step 6: Grab pants Step 7: Put on pants Step 8: Find shirt Step 9: Grab shirt Step 10: Put on shirt Step 11: Close dresser
Task: Make bed Step 1: Walk to bedroom Step 2: Walk to bed Step 3: Find sheets Step 4: Grab sheets Step 5: Pull sheets
Task: Make bed Step 1: Walk to bedroom Step 2: Walk to bed Step 3: Find bed Step 4: Sit on bed Step 5: Find sheet Step 6: Pick up sheet Step 7: Find pillow Step 8: Pick up pillow Step 9: Find blanket Step 10: Pick up blanket
Task: Make bed Step 1: Stand up Step 2: Walk to bedroom Step 3: Walk to bed Step 4: Turn to bed Step 5: Put sheets on bed
Task: Read yourself to sleep Step 1: Walk to bedroom Step 2: Walk to novel Step 3: Find novel Step 4: Grab novel Step 5: Find bed Step 6: Sit on bed Step 7: Find bookmark Step 8: Grab bookmark Step 9: Read novel Step 10: Put bookmark on novel Step 11: Sleep Task: Read yourself to sleep Step 1: Walk to home office Step 2: Walk to chair Step 3: Find chair Step 4: Sit on chair Step 5: Find computer Step 6: Switch on computer Step 7: Turn to computer Step 8: Look at computer Step 9: Find keyboard Step 10: Type on keyboard
Task: Read yourself to sleep Step 1: Walk to bedroom Step 2: Walk to book Step 3: Sit on chair Step 4: Read check Step 5: Sleep
Task: Hang keys Step 1: Walk to bedroom Step 2: Walk to dresser Step 3: Find purse Step 4: Grab purse Step 5: Open purse Step 6: Find keys Step 7: Grab keys Step 8: Find bowl Step 9: Put keys on bowl Step 10: Close purse Step 11: Put back purse
Task: Hang keys Step 1: Walk to home office Step 2: Walk to chair Step 3: Find chair Step 4: Sit on chair Step 5: Find computer Step 6: Switch on computer Step 7: Turn to computer Step 8: Look at computer Step 9: Find keyboard Step 10: Type on keyboard
Task: Hang keys Step 1: Walk to home office Step 2: Walk to table Step 3: Find keys Step 4: Grab keys Step 5: Walk to wall Step 6: Find hanger Step 7: Put keys on hanger
Task: Eat cheese Step 1: Walk to dining room Step 2: Walk to freezer Step 3: Open freezer Step 4: Find cheese Step 5: Grab cheese Step 6: Eat cheese
Task: Eat cheese Step 1: Walk to fridge Step 2: Open fridge Step 3: Find cheese Step 4: Pick up cheese Step 5: Walk to table Step 6: Place cheese on table Step 7: Close fridge Step 8: Walk to chair Step 9: Sit on chair Step 10: Eat cheese
Task: Eat cheese Step 1: Walk to dining room Step 2: Walk to table Step 3: Find table Step 4: Turn to table Step 5: Find chair Step 6: Sit on chair Step 7: Find food Step 8: Grab food Step 9: Find plate Step 10: Put food on plate | 1. What is the main contribution of the paper regarding using pretrained language models for planning actions?
2. What are the concerns and questions raised by the reviewer regarding the approach proposed in the paper?
3. How does the reviewer assess the suitability of the VirtualHome task for planning, and what limitations do they identify?
4. Why does the reviewer suggest evaluating correctness and executability separately, and how would this impact the success rate of the proposed method?
5. What related work did the paper miss, according to the reviewer, and how does it compare to other knowledge mining approaches? | Summary Of The Paper
Review | Summary Of The Paper
This paper proposes using pretrained language models for planning actions. They achieve this by providing an example task sequence as a prompt to generate an action description for each step autoregressively. At each step, they translate the action description to the environment action by computing the similarity between the description and the available environment actions. The results show that there is a trade-off between executability and correctness.
Review
This paper shows that the pretrained language model contains knowledge that is useful for action planning. While this is a promising direction to use pretrained language model, I have several concerns and questions for this paper.
Grounding & planning vs. knowledge extraction. When we do grounding or planning, the agent’s actions may change depending on the environment it is put in. The proposed approach is closer to procedural knowledge extraction using language models than to planning or grounding. While the available actions in VirtualHome might differ from those of other virtual environments (hence the translation step proposed in this paper), the generated action plans don’t change if we change the environment from one home to another. So the generated plans are not really grounded in the domain or environment the agent is planning for. The authors may consider repositioning this paper among knowledge extraction papers, which fit the contribution better.
Choice of VirtualHome task for planning. The tasks are mainly daily home tasks like setting up a dinner table. To perform such tasks, most of the time the agent may just apply predefined procedural knowledge. However, procedural knowledge and the translation approach proposed by this paper cannot plan for a task like “stack two plates on the right of a cup”, because the language model doesn’t ground its knowledge in the environment, so the agent cannot tell the difference between putting something down on the left or the right of the cup. This is the main limitation to using the proposed approach as a planner.
Evaluating correctness and executability separately. The goal of task planning is to find the plan that is executable and correct, i.e. success rate. However, most of the tables in the evaluation only show one or another. Only Table 4 shows “# of C and E”. If we compute the success rate for Translated Codex 12B, it is 15/88=0.17 and would be lower for other models. This reflects the performance of the proposed method better and the overall success rate is quite low.
Missing related work. This paper missed some related work on knowledge extraction/mining using pretrained language models; a few are listed below. The authors will need to discuss how the proposed approach differs from these prior knowledge mining approaches.
Petroni, F., Rocktäschel, T., Lewis, P., Bakhtin, A., Wu, Y., Miller, A.H. and Riedel, S. “Language models as knowledge bases?” In EMNLP 2019.
Davison, Joe, Joshua Feldman, and Alexander M. Rush. "Commonsense knowledge mining from pretrained models." In EMNLP 2019.
Jiang, Z., Xu, F.F., Araki, J. and Neubig, G. “How can we know what language models know?”. Transactions of the Association for Computational Linguistics, 8, pp.423-438. |
ICLR | Title
Language Models as Zero-Shot Planners: Extracting Actionable Knowledge for Embodied Agents
Abstract
Can world knowledge learned by large language models (LLMs) be used to act in interactive environments? In this paper, we investigate the possibility of grounding high-level tasks, expressed in natural language (i.e. “make breakfast”), to a chosen set of actionable steps (i.e. “open fridge”). While prior work focused on learning from explicit step-by-step examples of how to act, we surprisingly find that if pre-trained LMs are large enough and prompted appropriately, they can effectively decompose high-level tasks into low-level plans without any further training. However, the plans produced naively by LLMs often cannot map precisely to admissible actions. We propose a procedure that conditions on existing demonstrations and semantically translates the plans to admissible actions. Our evaluation in the recent VirtualHome environment shows that the resulting method substantially improves executability over the LLM baseline. The conducted human evaluation reveals a trade-off between executability and correctness but shows a promising sign towards extracting actionable knowledge from language models1.
1 INTRODUCTION
Large language models (LLMs) have made impressive advances in language generation and understanding in recent years (Devlin et al., 2018; Radford et al., 2019; Raffel et al., 2019; Brown et al., 2020). See Bommasani et al. (2021) for a recent summary of their capabilities and impacts. Being trained on large corpora of human-produced language, these models are thought to contain a lot of information about the world (Roberts et al., 2020; Li et al., 2021; BIG-bench collaboration, 2021) - albeit in linguistic form.
We ask whether we can use such knowledge contained in LLMs not just for linguistic tasks, but to make goal-driven decisions that can be enacted in interactive, embodied environments. But we are not simply interested in whether we can train models on a dataset of demonstrations collected for some specific environment – we are instead interested in whether LLMs already contain the information necessary to accomplish goals without any additional training.
More specifically, we ask whether world knowledge about how to perform high-level tasks (such as “make breakfast”) can be expanded to a series of groundable actions (such as “open fridge”, “grab milk”, “close fridge”, etc.) that can be executed in the environment. For our investigation, we use the recently proposed VirtualHome environment (Puig et al., 2018). It can simulate a large variety of realistic human activities in a household environment and supports the ability to perform them via embodied actions defined with a verb-object syntax. However, due to the open-ended nature of the tasks, it is difficult to autonomously evaluate their success. We rely on human evaluation (conducted on Mechanical Turk) to decide whether sequences of actions meaningfully accomplish the posed tasks.
We find that large GPT-3 (Brown et al., 2020) and Codex (Chen et al., 2021) models, when prompted with a single fixed example of a task description and its associated sequence of actions, can produce very plausible action plans for the task we’re interested in. Such completions reflect the information already stored in the model – no model fine-tuning is involved. Additionally, we only observe this
1Results and videos at https://sites.google.com/view/language-model-as-planner
effect in the larger models. Unfortunately, despite their semantic correctness, the produced action plans are often not executable in the environment. Produced actions may not map precisely to admissible actions, or may contain various linguistic ambiguities.
We propose several tools to improve the executability of the model’s outputs. First, we enumerate all admissible action phrases and map the model’s output action to the most semantically similar admissible action (we use a similarity measure between sentence embeddings produced by a RoBERTa model (Liu et al., 2019) in this work, but other choices are possible). Second, we use the model to autoregressively generate actions in a plan by conditioning on past actions that have been made admissible via the technique above. Such on-the-fly correction can keep generation anchored to admissible actions. Third, we provide weak supervision to the model by prompting the model with a known task example similar to the query task. This is somewhat reminiscent of prompt tuning approaches, but does not require access to gradients or internals of the model.
Using the above tools to bias model generation, we improve the executability of instructions from 18% to 79% (see Figure 1) without any invasive modifications to model parameters or any extra gradient or internal information beyond what is returned from the model’s forward pass. This is advantageous because it does not require any modifications to the model training procedure and fits within existing model serving pipelines. However, we do find a significant drop in the correctness of the instruction sequences generated with the above tools (as judged by humans), indicating a promising step that still requires more research.
To summarize, our paper’s contributions are as follows:
• We show that without any training, large language models can be prompted to generate plausible goal-driven action plans, but such plans are frequently not executable in interactive environments.
• We propose several tools to improve executability of the model generation without invasive probing or modifications to the model.
• We conduct a human evaluation of multiple techniques and models and report on the trade-offs between executability and semantic correctness.
2 EVALUATION FRAMEWORK
Simulating open-ended tasks that resemble naturalistic human activities requires an environment to support a rich set of diverse interactions, rendering most existing embodied environments unsuitable for our investigation. One exception is VirtualHome (Puig et al., 2018), which models human activities in a typical household. Therefore, we only provide evaluation in this environment. To further measure correctness given open-ended tasks, we conduct a human evaluation. We note that since no further training is involved throughout our investigations, the observations and findings presented in this paper should also translate to similar embodied environments.
2.1 EVALUATED ENVIRONMENT: VIRTUALHOME
Preliminaries In VirtualHome, activities are expressed as programs. Each program consists of a sequence of steps, where each step is written as: [action] 〈arg1〉(id1) ... 〈argn〉(idn). Each action refers to an atomic action such as “walk”, “open”, and “put”. A total of 45 atomic actions are supported by VirtualHome. Different actions take in different numbers of args necessary for specifying an interaction. Associated with each arg is a unique id specifying the corresponding node in the environment graph, in case multiple instances of the same object class are present in the graph. For the sake of simplicity, we omit the id in the remaining discussions of this paper and allow automatic assignment by the environment. An example program is shown in Appendix 4.
Evaluated Tasks We use the knowledge base collected by VirtualHome for evaluation. The knowledge base contains household activities crowd-sourced from Amazon Mechanical Turk (MTurk). The MTurk workers were asked to provide natural language descriptions of daily household activities and all actionable steps necessary for completing the activities. The descriptions are given both as high-level task descriptions and as step-by-step instructions. We omit the use of step-by-step instructions in this work, as we desire direct extraction of executable programs from task descriptions alone. For evaluations, we randomly sample a subset of 88 high-level tasks, each having one or more annotated ground-truth programs. The remaining 204 tasks are used as the demonstration set, from which we are allowed to select example(s) for prompting language models. Note that no training or fine-tuning is performed using these tasks and their annotations. More details of the evaluated tasks can be found in Appendix 8.6.
2.2 METRICS
A program that commands the agent to wander around in a household environment is highly executable but may not complete the desired task. On the other hand, a program composed of step instructions from knowledge bases can likely complete the task but cannot be executed. The reason is that free-form instructions can be ambiguous and may lack necessary common-sense actions. To this end, we consider two axes for evaluation: executability and correctness.
Executability Executability measures whether an action plan can be correctly parsed and satisfies the common-sense constraints of the environment. To be correctly parsed, an action plan must be syntactically correct and contain only allowed actions and recognizable objects. To satisfy the common-sense constraints, each action step must not violate the set of its pre-conditions (e.g. the agent cannot grab milk from the fridge before opening it) and post-conditions (e.g. the state of the fridge changes from “closed” to “open” after the agent opens it). We report the average executability across all 88 tasks and across all 7 VirtualHome scenes.
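To make this constraint checking concrete, the toy Python sketch below tracks a flat object-state dictionary; the state model, condition tables, and program representation are simplifying assumptions on our part, whereas the actual VirtualHome executor maintains a full environment graph.

```python
# Toy illustration of pre/post-condition checking for executability.
# The condition tables below are hypothetical placeholders.

PRECONDITIONS = {
    ("Grab", "milk"): [("fridge", "open")],    # cannot grab milk before opening the fridge
}
POSTCONDITIONS = {
    ("Open", "fridge"): [("fridge", "open")],  # opening the fridge changes its state
}

def is_executable(program, init_state):
    """program: list of (action, object) pairs; init_state: dict mapping object -> state."""
    state = dict(init_state)
    for action, obj in program:
        for entity, required in PRECONDITIONS.get((action, obj), []):
            if state.get(entity) != required:
                return False                   # a pre-condition is violated
        for entity, new_value in POSTCONDITIONS.get((action, obj), []):
            state[entity] = new_value          # apply post-conditions
    return True

# Grabbing milk before opening the fridge fails the check.
print(is_executable([("Grab", "milk")], {"fridge": "closed"}))                        # False
print(is_executable([("Open", "fridge"), ("Grab", "milk")], {"fridge": "closed"}))    # True
```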
Correctness Unlike most embodied environments where the completion of a task can be easily judged, the ambiguous and multimodal nature of natural language task specification makes it impractical to obtain a gold-standard measurement of correctness. One approach could be measuring the similarity of the final environment state produced by executing predicted and ground-truth programs, but VirtualHome initializes an environment differently based on the to-be-executed program, making comparisons difficult if measured in such a way. Therefore, we conduct human evaluation for the highlighted methods. More details of the human evaluations can be found in Appendix 8.5. For the remaining methods and ablation studies, we rely on a match-based metric that measures how similar a generated program is to human annotations. Specifically, we follow Puig et al. (2018) and calculate the longest common subsequence (LCS) between two programs, normalized by the maximum length of the two. In the presence of multiple ground-truth programs for a single task, we take the maximum LCS across the ground-truth programs. However, we note that the majority of the tasks only have one ground-truth annotation, while there are often many plausible ways to complete a certain task, making this metric imperfect at evaluating program correctness2. Although a correlation between the two is shown by Puig et al. (2018), we consider it only a proxy metric in place of unscalable human evaluation.
2Although LCS has a mathematical range of [0, 1], we measure the LCS between different ground-truth programs for the same task and find an empirical maximum of 0.489.
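For reference, a minimal Python sketch of the normalized-LCS computation described above is given below; it assumes programs are represented as lists of step strings, and the exact step canonicalization used in the original evaluation may differ.

```python
# Longest common subsequence between two programs, normalized by the longer
# one, taking the maximum over all ground-truth annotations for a task.

def lcs_length(a, b):
    """Dynamic-programming longest common subsequence length."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]

def normalized_lcs(pred_program, gt_programs):
    scores = []
    for gt in gt_programs:
        denom = max(len(pred_program), len(gt))
        scores.append(lcs_length(pred_program, gt) / denom if denom else 0.0)
    return max(scores)
```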
[Figure 2: illustration with panels “Zero-Shot Planning via Causal LLM”, “Translation to Admissible Action”, and “Step-By-Step Autoregressive Generation”, showing a frozen pre-trained causal LLM prompted with a “Task: Shave” example to plan “Task: Apply lotion”, and a frozen pre-trained masked LLM used to translate each generated step.]
Figure 2: We investigate the possibility of extracting actionable knowledge from pre-trained language models without any additional training. We first show the surprising finding that large language models (LLMs) can decompose high-level tasks into sensible low-level action plans (left). To make the action plans executable, we propose to translate each step into an admissible action via another LLM (middle). The translated action is appended to the original prompt used for generating the remaining steps (right).
3 METHOD
In this section, we investigate the possibility of extracting actionable knowledge from pre-trained language models without further training. We first give an overview of the common approach to query large language models (LLMs) and how it may be used for embodied agents. Then we describe an inference-time procedure that addresses several deficiencies of the LLM baseline and offers better executability in embodied environments. We break down the proposed procedure into three individual components, each discussed in Sec 3.2, 3.3, and 3.4.
Since LMs excel at dealing with natural language text rather than the specific format required by VirtualHome described in Section 2.1, we only expose natural language text to LMs. To do this, we define a mapping for each atomic action that parses a natural language phrase to the required format. For instance, “Walk to living room” is converted to “[Walk] 〈living room〉(1)”. When an LM output cannot be parsed to any of the allowed actions, the entire program is considered syntactically incorrect and thus not executable.
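A hypothetical sketch of such a phrase-to-step mapping is shown below; the handful of patterns and action names are illustrative assumptions (the real mapping covers all 45 atomic actions and their argument counts), and the angle brackets are written as < > here.

```python
import re

# Illustrative mapping from natural language phrases to VirtualHome-style steps.
# Only three hypothetical atomic actions are shown; an unparsable step renders
# the whole program not executable, as described above.
ACTION_PATTERNS = {
    "Walk":     re.compile(r"^walk to (?P<obj>.+)$", re.IGNORECASE),
    "Grab":     re.compile(r"^grab (?P<obj>.+)$", re.IGNORECASE),
    "SwitchOn": re.compile(r"^switch on (?P<obj>.+)$", re.IGNORECASE),
}

def phrase_to_step(phrase, obj_id=1):
    """e.g. 'Walk to living room' -> '[Walk] <living room> (1)'; None if unparsable."""
    for action, pattern in ACTION_PATTERNS.items():
        match = pattern.match(phrase.strip())
        if match:
            return f"[{action}] <{match.group('obj').lower()}> ({obj_id})"
    return None
```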
3.1 PROMPTING
Previous works have shown that large language models pre-trained on a colossal amount of data contain useful world knowledge that can be probed to perform various down-stream tasks (Radford et al., 2019; Brown et al., 2020). Notably, autoregressive LLMs can even perform in-context learning, an approach to solve tasks using only contextual information without gradient updates (Brown et al., 2020). Contextual information is given as part of the input prompt and LMs are asked to complete the remaining text. It often consists of natural language instructions and/or a number of examples containing the desired input/output pairs.
We adopt the same approach to query LLMs to generate action plans for high-level tasks. Specifically, we prepend one example task description sentence and its annotated action plan from the demonstration set to the query task description, as shown in Fig 2. To obtain text completion results, we sample from the autoregressive LLM using temperature sampling and nucleus sampling (Holtzman et al., 2019). We refer to this LM as the Planning LM and the approach using this LM for plan generation as Vanilla [LM], where [LM] is replaced by a specific language model such as GPT-3 or Codex.
To further improve the quality of the generated output, we follow Chen et al. (2021), which uses LMs for program synthesis, and sample multiple outputs for each task. However, unlike prior work in program synthesis that chooses the sample with the highest unit-test pass rate, we only consider the setting where one sample is allowed to be evaluated for each task. This is because repetitive trial and error can be dangerous in the real world, and executing many action plans amounts to probing the environment for privileged information, which is often not considered viable.
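The snippet below sketches this querying scheme with an open-source causal LM from Hugging Face Transformers; the Planning LMs used in the paper (GPT-3, Codex) are instead queried through the OpenAI API, and the model name, abridged prompt example, and sampling values here are placeholders rather than the exact settings used.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2-large")   # stand-in for the Planning LM
model = AutoModelForCausalLM.from_pretrained("gpt2-large")

# One fixed demonstration (abridged from Appendix 8.3, Example 1) plus the query task.
prompt = (
    "Task: Use computer\nStep 1: Walk to home office\nStep 2: Walk to chair\n"
    "Step 3: Sit on chair\nStep 4: Switch on computer\nStep 5: Type on keyboard\n\n"
    "Task: Apply lotion\nStep 1:"
)

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    do_sample=True,           # temperature sampling
    temperature=0.3,          # one of the searched values listed in Appendix 8.3
    top_p=0.9,                # nucleus sampling
    num_return_sequences=10,  # k candidate completions per query
    max_new_tokens=60,
    pad_token_id=tokenizer.eos_token_id,
)
completions = [
    tokenizer.decode(out[inputs["input_ids"].shape[1]:], skip_special_tokens=True)
    for out in outputs
]
```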
3.2 ROBUST PARSING BY SEMANTIC TRANSLATION
One issue arises when naively following the above approach to generate action plans for high-level tasks: the action plan is often not executable because LMs are allowed to generate free-form text.
Therefore, most of the time the output cannot be mapped to one unambiguous actionable step. Many reasons can cause such failures: 1) the output does not follow the pre-defined mapping of any atomic action (e.g. “I first walk to the bedroom” does not follow “Walk to 〈PLACE〉”), 2) the output may refer to atomic actions and objects using words unrecognizable by the environment (e.g. “Clean the dirty dishes in the sink”, where “clean” and “dirty dishes in the sink” cannot be mapped to a precise action and object), 3) the output contains lexically ambiguous words (e.g. “Open TV” should instead be “Switch on TV”), or 4) the output may use a disallowed action (e.g. “Microwave the cup”).
Instead of developing a set of rules to transform the free-form text into admissible action steps, we propose to again leverage world knowledge learned by large language models to semantically translate the action. For each step $\hat{a}$ in the action plan, we aim to find the most similar admissible environment action $a_e$ as measured by cosine similarity:
$\arg\max_{a_e} \; \frac{f(\hat{a}) \cdot f(a_e)}{\lVert f(\hat{a}) \rVert \, \lVert f(a_e) \rVert}$, where $f$ is an embedding function.
To embed the output text and environment actions, we use a BERT-style LM (Devlin et al., 2018; Liu et al., 2019) trained with the Sentence-BERT (Reimers & Gurevych, 2019) objective because of its suitability for sentence modeling. The sentence embedding is obtained by mean-pooling the last-layer hidden states across all tokens. We refer to this LM as the Translation LM. Note that this is a different LM from the GPT-style Planning LM discussed in the text so far. Using a single LM for both purposes may well be possible and likely more efficient, but we leave such investigation to future work. While the set of actions in our environment is discrete and possible to exhaustively enumerate, sampling or projection can be employed in larger discrete or continuous action spaces.
Since the Translation LM guarantees that the parsed action is allowed by the environment, we can trade off the semantic soundness of an LM step against how likely it is to map to an admissible action in the environment. This can be achieved by a simple modification to the scheme used to choose the best sample from the LM output. Instead of using only the mean token log-probability as a ranking metric, we choose the sample with the highest score s = C + β · logprob, where C is the cosine similarity to the closest allowed action and β is a weighting parameter.
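A minimal sketch of this translation and ranking step, built on the SentenceTransformers library, is given below; the model name stands in for the Sentence-RoBERTa Translation LM, the action bank is a tiny placeholder subset of the admissible steps, and β = 0.3 is an illustrative value.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

translation_lm = SentenceTransformer("stsb-roberta-large")   # stand-in Translation LM

# Placeholder subset of admissible action phrases; in practice all allowed
# action/object combinations are enumerated and embedded once up front.
ADMISSIBLE_ACTIONS = ["walk to living room", "switch on computer", "grab milk"]
action_embs = translation_lm.encode(ADMISSIBLE_ACTIONS, normalize_embeddings=True)

def translate(step_text):
    """Map a free-form step to the most similar admissible action (cosine similarity)."""
    query = translation_lm.encode(step_text, normalize_embeddings=True)
    sims = action_embs @ query            # embeddings are unit-norm, so this is cosine similarity
    best = int(np.argmax(sims))
    return ADMISSIBLE_ACTIONS[best], float(sims[best])

def pick_best_sample(samples, mean_logprobs, beta=0.3):
    """Choose the sample maximizing s = C + beta * logprob."""
    scored = [(translate(text)[1] + beta * lp, text) for text, lp in zip(samples, mean_logprobs)]
    return max(scored)[1]
```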
3.3 AUTOREGRESSIVE TRAJECTORY CORRECTION
Translating each step of the program after the entire program has been synthesized is analogous to open-loop planning and is subject to compounding errors. In practice, LLMs might output compounded instructions for a single step, even when they cannot be completed using one admissible action in the environment. To this end, we can instead interleave plan generation and action translation to allow for automatic trajectory correction. At each step, we first query the Planning LM to generate k samples for a single action. Then we calculate the score s for each sample using the Translation LM and append the translated action to the unfinished text completion. This way all subsequent steps will be conditioned on admissible actions instead of the free-form text output generated by the Planning LM. Furthermore, we can use the Translation LM to detect out-of-distribution actions, those outside the capabilities of a robot, and terminate a program early instead of mapping to a faulty action. This can be implemented by setting a threshold ε such that if the maximum cosine similarity at step t falls below ε, the program is terminated early. We empirically show this leads to better executability while maintaining similar correctness of the generated action plans.
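The following sketch interleaves the two stages described above; sample_next_steps is a hypothetical wrapper around the Planning LM that returns candidate next steps together with their mean token log-probabilities, translate is the routine sketched earlier, and the default ε and β values are placeholders.

```python
# Interleaved generation and translation, with early termination when the
# best candidate looks out-of-distribution (cosine similarity below eps).

def generate_program(task, sample_next_steps, translate, beta=0.3, eps=0.4, max_steps=20):
    prompt = f"Task: {task}\nStep 1:"
    program = []
    for t in range(1, max_steps + 1):
        candidates, logprobs = sample_next_steps(prompt, k=10)   # k samples from the Planning LM
        scored = []
        for text, lp in zip(candidates, logprobs):
            action, sim = translate(text)                        # nearest admissible action
            scored.append((sim + beta * lp, action, sim))        # s = C + beta * logprob
        _, best_action, best_sim = max(scored)
        if best_sim < eps:                                       # likely out-of-distribution: stop early
            break
        program.append(best_action)
        prompt += f" {best_action}\nStep {t + 1}:"               # condition future steps on the admissible action
    return program
```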
3.4 DYNAMIC EXAMPLE SELECTION FOR IMPROVED KNOWLEDGE EXTRACTION
So far in the text, we always give the same example in the prompt for all evaluated high-level tasks. However, consider the task of “ordering pizza”. Prompting LLMs with this task may give the assumption that the agent is initialized in front of a computer, and the LLMs may guide the agent to search for a pizza store and click “checkout my cart”. Although these are reasonable and feasible in the real world, such assumption cannot always be made as these interactions may not be supported in simulated environments like VirtualHome. In fact, the closest series of actions that human experts give may be “walking to a computer”, “switching on the computer”, and “typing the keyboard”. Without being finetuned on these data, LLMs would often fail at these tasks. To provide weak supervision at inference time, we propose to use Translation LM to select the most similar task from the demonstration set to be used as the example in the prompt. Specifically, we choose the task
whose high-level description most closely matches the query task, as measured by cosine similarity. This allows the Planning LM to reason about how to perform a similar task given a human-annotated example. In our evaluations, we make sure the demonstration set does not overlap with our test queries.
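A short sketch of this selection step, reusing the Translation LM from before, might look as follows; demo_tasks is a placeholder list of demonstration task descriptions, and returning the index lets the caller look up the associated annotated program to prepend to the prompt.

```python
# Pick the demonstration task whose description is most similar to the query.

def select_example(query_task, demo_tasks, translation_lm):
    demo_embs = translation_lm.encode(demo_tasks, normalize_embeddings=True)
    query_emb = translation_lm.encode(query_task, normalize_embeddings=True)
    best = int((demo_embs @ query_emb).argmax())   # cosine similarity with unit-norm embeddings
    return best, demo_tasks[best]
```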
Combining the various improvements discussed above, we refer to the final approach as Translated [LM], where [LM] is replaced by the specific language model used, such as GPT-3 or Codex.
4 RESULTS
In this section, we first show that language models can generate sensible action plans for many high-level tasks, even without any additional training. Then we highlight their inadequacy when naively applied to embodied environments and demonstrate how this can be improved by again leveraging world knowledge learned by LLMs. Visualizations of generated programs are shown in Fig 3.
Sampling from LMs Pre-trained LMs are sensitive to sampling parameters and the specific example given in the prompt. For all evaluated methods, we perform hyper-parameter search over various sampling parameters, and for methods using a fixed prompt example, we report metrics averaged across three randomly chosen examples. To select the best run for each method, we rank the runs by LCS + executability, each normalized by human-expert scores3. Further details can be found in Appendix 8.3.
Model Choices We find empirically that the combination of Codex-12B and Sentence-RoBERTa-355M works well in our setting. Codex and GPT-3 are accessed using the OpenAI API. The remaining models are accessed through open-source packages, Hugging Face Transformers (Wolf et al., 2019) and SentenceTransformers (Reimers & Gurevych, 2019), without additional modifications.
4.1 DO LLMS CONTAIN ACTIONABLE KNOWLEDGE FOR HIGH-LEVEL TASKS?
We first investigate whether LLMs can generate sensible action plans expressed in free-form language. We use the approach described in Section 3.1 to query pre-trained LLMs. To evaluate the correctness of generated action plans, we conduct human evaluations. For each model, we ask 10 human annotators to determine – by answering “Yes” or “No” – whether each task can be completed using provided action steps. To provide a reference of how humans might rate the action plans provided by other humans, we also ask annotators to rate the ground-truth action plans provided in the VirtualHome dataset for the same set of tasks. In contrast to the free-form text output by LLMs, the ground-truth action plans from VirtualHome are generated via a graphical programming interface that enforces strict syntax, although annotators were allowed to compose necessary actions.
We show the human evaluation results in Fig 1, where y-axis shows correctness averaged across all tasks and all annotators. Surprisingly, when LLMs are large enough and without imposed syntactic constraints, they can generate highly realistic action plans whose correctness – as deemed by human annotators – even surpasses human-labeled ground-truth. Yet another interesting finding is
3See footnote 2.
that Codex outperforms GPT-3 significantly under the same number of model parameters. One hypothesis could be that by fine-tuning on structured data (docstrings and code), Codex specializes at decomposing a high-level objective into a number of basic operations, even with those operations described in natural language. We also observe some level of correctness for smaller models such as GPT-2. However, inspection of its produced output indicates that it often generates significantly shorter plans by ignoring common-sense actions or by simply rephrasing the given task (e.g. the task “Go to sleep” produces only a single step “Go to bed”). These failure modes sometimes mislead human annotators to mark them correct as the annotators may ignore common-sense actions in their judgment as well, resulting in a higher correctness rate than the quality of the output shows.
4.2 HOW EXECUTABLE ARE THE LLM ACTION PLANS?
We analyze the executability of LLM plans by evaluating them in all 7 household scenes in VirtualHome. As shown in Table 1, we find action plans generated naively by LLMs are generally not very executable. Although smaller models seem to have higher executability, we find that the majority of these executable plans are produced by ignoring the queried task and repeating the given GT example of a different task. This is validated by the fact that smaller models have lower LCS than larger models despite having higher executability, showing that this failure mode is prevalent among smaller models. In contrast, larger models do not suffer severely from this failure mode. Yet as a result of being more expressive, their generated programs are substantially less executable.
4.3 CAN LLM ACTION PLANS BE MADE EXECUTABLE BY ACTION TRANSLATION?
In this section, we evaluate the effectiveness of our proposed procedure of action translation. We first create a bank of all 47522 allowed action steps in the environment, including all possible combinations of atomic actions and allowed arguments/objects. Then we use an off-the-shelf Sentence-RoBERTa (Liu et al., 2019; Reimers & Gurevych, 2019) as the Translation LM to create embeddings for actions and output text. For better computational efficiency, we pre-compute the embeddings for all allowed actions, leaving only minor computational overhead for our procedure over the baseline methods at inference time. As shown in Table 1, the executability of generated programs is significantly improved. Furthermore, we also observe improved LCS because the translated action steps precisely follow the program syntax and thus are more similar to ground-truth annotations. One sample output is shown in Fig 1 and a larger random subset of generated samples can be found in Appendix 8.7.
To validate their correctness, we again perform human studies using the same procedure in Sec 4.1. Results are shown in Table 1. We find that despite being more similar to GT, the programs are deemed less correct by humans. By examining the generated output, we observe two main sources of errors. First, we find Translation LM is poor at mapping compounded instructions to a succinct
admissible action. This is partly because the Translation LM is trained on a much smaller dataset and has far fewer parameters, so we expect further improvement from using a larger pre-trained model for translation. The second source of error comes from the imperfect expressivity of the environment; we find that for many of the tasks we evaluate, certain necessary actions or objects are not implemented. This is also reflected by our human evaluation results on the GT programs, as only half of the programs are considered complete by the human annotators.
5 ANALYSIS AND DISCUSSIONS
5.1 ABLATION OF DESIGN DECISIONS

Methods                    Executability   LCS
Translated Codex 12B       78.57%          24.72%
- w/o Action Translation   31.49%          22.53%
- w/o Dynamic Example      50.86%          22.84%
- w/o Iterative            55.19%          24.43%

Table 2: Ablation of three proposed techniques.
We perform ablation studies to show the effectiveness and necessity of the three components of our proposed procedure, each described in Sec 3.2, 3.3, and 3.4. As shown in Table 2, leaving out any of the three components leads to decreased performance in both executability and LCS. Notably, removing action translation leads to the most significant executability drop, showing the importance of action translation for extracting executable action plans from LLMs.
5.2 CAN LLMS GENERATE ACTIONABLE PROGRAMS BY FOLLOWING DETAILED INSTRUCTIONS?
Prior works often focus on translating step-by-step instructions into executable programs. We evaluate LLMs under this setting using the prompt format shown in Appendix 8.2. Although this setting is easier, as it does not require rich actionable knowledge, detailed instructions can help resolve much of the ambiguity about exactly how to perform a high-level task when multiple solutions are possible. In this setting, Translated Codex 12B achieves an executability of 78.57% and an LCS of 32.87%, where LCS sees a considerable bump compared to the setting without detailed instructions. Surprisingly, the LCS result is very close to that of a supervised LSTM (Hochreiter & Schmidhuber, 1997) baseline from VirtualHome trained on human-annotated data, which is at 34.00%. Note that since the code to train the baseline and the specific train/test split are not publicly released, we only show results reported in Puig et al. (2018) as a reference. We also cannot compare executability, as it is not reported.
5.3 IS ACTION TRANSLATION NECESSARY FOR ACTIONABLE KNOWLEDGE GROUNDING?
The investigations of this paper are two-fold: 1) Is actionable knowledge present in LLMs? 2) Can we ground this actionable knowledge in interactive environments? In this section, we focus our attention on the second question by conditioning on the assumption that the first question is true. To do this, since successful execution of correct action plans directly measures grounding, we select only the correct plans generated by LLMs and measure how executable they are. We deem an action plan to be correct if 70% or more of the human annotators decide it is correct.
As shown in Table 3, when an LM is not large enough (e.g. GPT-2), not only does it contain little actionable knowledge, but this knowledge cannot be grounded at all. GPT-3 and Codex, on the other hand, can generate highly correct action plans in free-form language. However, they do not have the capability to ground their actionable knowledge in interactive environments. More interestingly, comparing GPT-3 at both 12B and 175B parameters, the ratio of executable plans does not improve with parameter count. This shows that simply training larger models does not necessarily lead to better knowledge grounding. Meanwhile, action translation offers a promising way towards grounding actionable knowledge by producing highly executable plans. However, we again note that this comes at the cost of producing less correct plans compared to its vanilla counterpart, and we hope to see future work bridging the gap.
6 RELATED WORKS
Large-scale natural language modeling has witnessed rapid advances since the inception of the Transformer architecture (Vaswani et al., 2017). Recent works have shown that large language models (LLMs) pre-trained on large unstructured text corpora not only perform strongly on various downstream NLP tasks (Devlin et al., 2018; Radford et al., 2019; Raffel et al., 2019; Brown et al., 2020) but also internalize an implicit knowledge base containing rich information about the world (Petroni et al., 2019; Jiang et al., 2020; Davison et al., 2019; Talmor et al., 2020; Roberts et al., 2020). Furthermore, the learned representations can be used to model relations between entities (Li et al., 2021), retrieve matching visual features (Ilharco et al., 2020), and even serve as valuable priors when applied to diverse tasks from different modalities (Lu et al., 2021; Tsimpoukelli et al., 2021). Compared to prior work in knowledge extraction that extracts single-step factual answers memorized by the models (e.g. “Dante was born in [PLACE]”), we aim to extract sequential action plans to complete an open-ended human activity (e.g. “make breakfast”). We further require these plans to contain only allowed actions and to satisfy the pre/post-conditions of actions in order to be executed by an embodied agent.
At the same time, there has also been growing interest and development in grounding language in embodied environment. A series of prior works have investigated the possibility of parsing language instructions into formal logic to resolve various linguistic ambiguities for embodied agents (Artzi & Zettlemoyer, 2013; Misra et al., 2015; Tenorth et al., 2010). However, they often scale poorly to complex tasks and environments. Recently, more research efforts have been put into creating better and more realistic environments with the goal to further advances in this area (Puig et al., 2018; Shridhar et al., 2020a;b; Kolve et al., 2017; Savva et al., 2019). At the same time, by leveraging the better representation power of neural architectures, a number of works have looked into creating instruction-following agents that can perform manipulation (Lynch & Sermanet, 2020), navigation (Majumdar et al., 2020), or both (Suglia et al., 2021; Hill et al., 2020).
Notably, most of these prior works do not leverage full-blown pre-trained LLMs (Suglia et al., 2021) or do not scale to complex human activities (Hill et al., 2020; Lynch & Sermanet, 2020). Perhaps more importantly, few works have evaluated LLMs in an embodiment setting that realizes the full potential of the world knowledge these models contain: the tasks evaluated are often “pick”, “grab”, “open”, and etc, which do not resemble the highly diverse activities that humans perform in daily lives. The development of VirtualHome environment Puig et al. (2018) enables such possibility. However, relevant works (Puig et al., 2020; Liao et al., 2019) rely on human-annotated data and perform supervised training from scratch. Due to the lack of rich world knowledge, these models can only generate action plans given step-by-step instructions of how to act or video demonstrations. In this work, we take a step further by conditioning only on the high-level descriptions and by extracting executable action plans from LLMs without any additional training.
7 LIMITATIONS AND CONCLUSION
There are several notable limitations of this work. First, although our approach presents a viable way to ground world knowledge in embodied environments, it is still a trade-off rather than a single best solution, since we observe a considerable drop in correctness. Second, we focus on high-level to mid-level grounding, assuming there is a controller that can execute mid-level tasks (such as “grab cup”). Our work does not investigate the usefulness of LLMs for low-level sensorimotor behavior grounding. The third limitation is that we do not incorporate observation context or feedback into our models. To some extent, we approach LLMs in the same way that VirtualHome asks human annotators to give action plans for a given human activity from imagination, in which case the human-generated action plans also do not incorporate observation context. However, we do see incorporating observation context for complex activities as an exciting future direction.
8 APPENDIX
8.1 EXAMPLE PROGRAM IN VIRTUALHOME
8.2 EXAMPLE PROMPT CONTAINING STEP-BY-STEP INSTRUCTIONS
8.3 HYPERPARAMETER SEARCH
For each evaluated method, we perform grid search over the following hyperparameters:
Name                 Description                                                                                                  Search Values
epsilon (ε)          OOD cutoff threshold used in iterative action translation                                                   {0, 0.4, 0.8}
temperature          sampling parameter adjusting relative probabilities across tokens                                           {0.1, 0.3, 0.6}
k                    number of samples generated when querying LMs each time                                                     {1, 10}
frequency penalty    OpenAI API specific; penalize new tokens based on their existing frequency in the text so far               {0.1, 0.3, 0.6, 0.9}
presence penalty     OpenAI API specific; penalize new tokens based on whether they appear in the text so far                    {0.3, 0.5, 0.8}
repetition penalty   Hugging Face Transformers specific; penalize new tokens based on whether they are repeating existing text   {1.0, 1.2, 1.5, 1.8}
For methods that use fixed example across evaluated tasks, we search over the following three randomly chosen examples:
Example 1 (Task: Use computer): Step 1: Walk to home office; Step 2: Walk to chair; Step 3: Find chair; Step 4: Sit on chair; Step 5: Find computer; Step 6: Switch on computer; Step 7: Turn to computer; Step 8: Look at computer; Step 9: Find keyboard; Step 10: Type on keyboard
Example 2 (Task: Relax on sofa): Step 1: Walk to home office; Step 2: Walk to couch; Step 3: Find couch; Step 4: Sit on couch; Step 5: Find pillow; Step 6: Lie on couch
Example 3 (Task: Read book): Step 1: Walk to home office; Step 2: Walk to novel; Step 3: Find novel; Step 4: Grab novel; Step 5: Find chair; Step 6: Sit on chair; Step 7: Read novel
8.4 ANALYSIS OF PROGRAM LENGTH
Shorter programs have a natural advantage of being more executable. Consider a task “wash hands” and a corresponding program that only commands an agent to “go to bathroom” without additional steps. The program is obviously incorrect yet trivially executable. To validate that our approach does not simply generate very short programs, we calculate the average program length across the 88 evaluated tasks. Results are shown in Table 6. In addition to the failure mode discussed in Section 4.2 that leads to incorrect yet executable programs, smaller LMs such as GPT-2 also generate programs significantly shorter than larger models, making them more executable. In contrast, larger models like Codex-12B generate more expressive programs of high correctness, but they often suffer in executability. We show that action translation can achieve the best of both worlds, generating programs that are highly executable while maintaining similar expressiveness in terms of program length.
8.5 DETAILS OF HUMAN EVALUATIONS
Human evaluations are conducted on Amazon Mechanical Turk. For each method, we generate action plans for all 88 high-level tasks. To account for expressivity of the VirtualHome environment (Puig et al., 2018), we further include ground-truth action plans from the VirtualHome dataset in our human evaluations. The human evaluations are conducted in the form of questionnaires containing all action plans with unknown corresponding methods. The questionnaire contains the following instructions at the top:
For every question below, determine whether the task can be completed in any reasonable scenario using the provided steps. In other words, can the task be decomposed into these steps? Note that simply re-stating the task does not mean completing it.
Human annotators are required to answer all the questions in the questionnaire, where each question is an action plan generated by a method unknown to the annotator. The order of the questions is randomly permuted before being presented to each annotator. For each question, the annotators need to answer either “Yes” or “No”, indicating whether they believe the action plan completes the task. For each method, we report the correctness percentage averaged across 10 participating human annotators.
8.6 DETAILS OF VIRTUALHOME TASKS
VirtualHome ActivityPrograms is a knowledge base collected on Mechanical Turk by Puig et al. (2018). The knowledge base features real human household activities and corresponding sequences of basic actions for robots to perform these activities. Each entry contains a high-level task name (e.g. “Watch TV”), a mid-level detailed description (e.g. “Sit on my couch directly opposite my TV, switch on my TV with the remote control and watch”), and a sequence of allowed actions (a.k.a. a program) to perform the task in VirtualHome. There are 292 distinct high-level tasks and a total of 2821 different entries in the knowledge base, where each high-level task may correspond to one or several programs. The knowledge base covers a diverse set of real human activities and a wide range of different actions and objects. Puig et al. (2018) also conduct an additional human evaluation on 100 randomly sampled programs and show that 64% of them are complete (i.e. contain all necessary steps). Readers are encouraged to refer to Puig et al. (2018) for a more comprehensive analysis of the dataset.
Train/Test Split From the 292 distinct high-level tasks in the ActivityPrograms knowledge base, we randomly sample 88 high-level tasks for evaluation. The remaining 204 tasks are used as the demonstration set, from which we are allowed to select example(s) for prompting language models; in the case of supervised fine-tuning, they are used to fine-tune pre-trained language models.
8.7 RANDOM SAMPLES OF GT, VANILLA CODEX, AND TRANSLATED CODEX
Annotated Ground-Truth Codex Translated Codex (Ours) Task: Go to sleep Step 1: Walk to bedroom Step 2: Walk to bed Step 3: Find bed Step 4: Turn to bed Step 5: Look at bed Step 6: Sit on bed Step 7: Sleep Task: Go to sleep Step 1: Walk to bedroom Step 2: Walk to bed Step 3: Find bed Step 4: Sit on bed Step 5: Find pillow Step 6: Sleep on pillow Task: Go to sleep Step 1: Walk to bedroom Step 2: Walk to bed Step 3: Lie on bed Step 4: Sleep
Task: Browse internet Step 1: Walk to home office Step 2: Walk to chair Step 3: Find chair Step 4: Sit on chair Step 5: Find computer Step 6: Switch on computer Step 7: Turn to computer Step 8: Look at computer Step 9: Find keyboard Step 10: Type on keyboard
Task: Browse internet Step 1: Walk to home office Step 2: Walk to chair Step 3: Find chair Step 4: Sit on chair Step 5: Find computer Step 6: Switch on computer Step 7: Turn to computer Step 8: Look at computer Step 9: Find keyboard Step 10: Type on keyboard
Task: Browse internet Step 1: Walk to home office Step 2: Walk to desk Step 3: Find chair Step 4: Sit on chair Step 5: Find computer Step 6: Switch on computer Step 7: Find mouse Step 8: Grab mouse Step 9: Turn to computer Step 10: Point at computer
Task: Listen to music Step 1: Walk to bedroom Step 2: Walk to computer Step 3: Find computer Step 4: Turn to computer Step 5: Look at computer Step 6: Walk to computer Step 7: Find chair Step 8: Pull chair Step 9: Sit on chair Step 10: Find headset Step 11: Grab headset Step 12: Put on headset Step 13: Switch on computer Step 14: Find keyboard Step 15: Type on keyboard
Task: Listen to music Step 1: Walk to home office Step 2: Walk to chair Step 3: Find chair Step 4: Sit on chair Step 5: Find computer Step 6: Switch on computer Step 7: Turn to computer Step 8: Look at computer Step 9: Find keyboard Step 10: Type on keyboard
Task: Listen to music Step 1: Walk to home office Step 2: Walk to computer Step 3: Sit on chair Step 4: Switch on computer Step 5: Find music stand
Task: Brush teeth Step 1: Walk to bathroom Step 2: Walk to tooth paste Step 3: Find tooth paste Step 4: Grab tooth paste Step 5: Find toothbrush Step 6: Grab toothbrush
Task: Brush teeth Step 1: Walk to bathroom Step 2: Walk to sink Step 3: Find toothbrush Step 4: Pick up toothbrush Step 5: Put toothbrush in mouth Step 6: Move brush around mouth for two minutes Step 7: Spit out toothpaste and brush into sink Step 8: Turn on water in sink and rinse brush for one minute Step 9: Turn off water in sink and return brush to cupboard Task: Brush teeth Step 1: Walk to bathroom Step 2: Open door Step 3: Walk to sink Step 4: Put pot on sink Step 5: Put brush on toothbrush Step 6: Turn to toothpaste Step 7: Put toothpaste on toothbrush Step 8: Put teeth on toothbrush
Task: Dry soap bottles Step 1: Find towel Step 2: Find soap Step 3: Grab soap Step 4: Wipe soap Step 5: Put back soap Step 6: Find shampoo Step 7: Grab shampoo Step 8: Wipe shampoo Step 9: Put back shampoo Step 10: Find shampoo Step 11: Grab shampoo Step 12: Wipe shampoo Step 13: Put back shampoo
Task: Dry soap bottles Step 1: Walk to kitchen Step 2: Walk to sink Step 3: Find sink Step 4: Turn on water tap Step 5: Wait for water to be hot enough Step 6: Fill container with hot water Step 7: Find soap bottles Step 8: Pour soap into container with hot water Step 9: Wait for soap to dissolve in water Step 10: Turn off tap and empty container into sink Task: Dry soap bottles Step 1: Walk to bathroom Step 2: Walk to shower Step 3: Find soap
Task: Vacuum Step 1: Walk to home office Step 2: Walk to computer Step 3: Find vacuum cleaner Step 4: Switch on vacuum cleaner Step 5: Switch off vacuum cleaner Task: Vacuum Step 1: Walk to hallway Step 2: Walk to closet Step 3: Find closet Step 4: Open closet Step 5: Find vacuum cleaner Step 6: Turn on vacuum cleaner Step 7: Walk to living room Step 8: Vacuum floor in living room Task: Vacuum Step 1: Walk to home office Step 2: Walk to dresser Step 3: Open dresser Step 4: Find vacuum cleaner Step 5: Grab vacuum cleaner Step 6: Pull vacuum cleaner Step 7: Plug in vacuum cleaner Step 8: Switch on vacuum cleaner Step 9: Pull vacuum cleaner Step 10: Push vacuum cleaner
Task: Find dictionary Step 1: Walk to home office Step 2: Walk to bookshelf Step 3: Turn to bookshelf Step 4: Look at bookshelf Step 5: Turn to bookshelf Step 6: Look at bookshelf Step 7: Turn to bookshelf Step 8: Look at bookshelf Step 9: Turn to bookshelf Step 10: Look at bookshelf Step 11: Turn to bookshelf Step 12: Look at bookshelf Step 13: Turn to bookshelf Step 14: Look at bookshelf Step 15: Find novel Step 16: Grab novel
Task: Find dictionary Step 1: Walk to home office Step 2: Walk to chair Step 3: Find chair Step 4: Sit on chair Step 5: Find dictionary Step 6: Open dictionary Step 7: Look at dictionary
Task: Find dictionary Step 1: Walk to home office Step 2: Walk to book Step 3: Find textbook
Task: Wash monitor Step 1: Walk to home office Step 2: Walk to computer Step 3: Find computer Step 4: Turn to computer Step 5: Look at computer Step 6: Switch off computer Step 7: Find brush Step 8: Turn to brush Step 9: Look at brush Step 10: Grab brush Step 11: Wipe computer Step 12: Drop brush Step 13: Find napkin Step 14: Grab napkin Step 15: Wash napkin Step 16: Wipe computer
Task: Wash monitor Step 1: Walk to home office Step 2: Walk to chair Step 3: Find chair Step 4: Sit on chair Step 5: Find computer Step 6: Switch on computer Step 7: Turn to computer Step 8: Look at computer Step 9: Find keyboard Step 10: Type on keyboard
Task: Wash monitor Step 1: Walk to check Step 2: Walk to desk
Task: Read newspaper Step 1: Walk to home office Step 2: Walk to chair Step 3: Find chair Step 4: Sit on chair Step 5: Find newspaper Step 6: Grab newspaper Step 7: Open newspaper Step 8: Read newspaper
Task: Read newspaper Step 1: Walk to home office Step 2: Walk to chair Step 3: Find chair Step 4: Sit on chair Step 5: Find newspaper Step 6: Open newspaper Step 7: Read newspaper
Task: Read newspaper Step 1: Walk to home office Step 2: Walk to chair Step 3: Find chair Step 4: Sit on chair Step 5: Find newspaper Step 6: Grab newspaper Step 7: Read newspaper
Task: Apply lotion Step 1: Find bed Step 2: Turn to bed Step 3: Point at bed Step 4: Wash bed Step 5: Turn to bed Step 6: Look at bed Step 7: Push bed Step 8: Sit on bed
Task: Apply lotion Step 1: Walk to bathroom Step 2: Walk to cabinet Step 3: Open cabinet Step 4: Find lotion Step 5: Apply lotion on hands Step 6: Rub hands together for 10 seconds Step 7: Apply lotion on face Task: Apply lotion Step 1: Walk to bathroom Step 2: Turn to lotionbottle Step 3: Grab lotionbottle
Task: Draw picture Step 1: Walk to home office Step 2: Walk to chair Step 3: Find chair Step 4: Sit on chair Step 5: Find desk Step 6: Open desk Step 7: Find check Step 8: Grab check Step 9: Put check on desk Step 10: Find pencil Step 11: Grab pencil Step 12: Close desk
Task: Draw picture Step 1: Walk to home office Step 2: Walk to chair Step 3: Find chair Step 4: Sit on chair Step 5: Find computer Step 6: Switch on computer Step 7: Turn to computer Step 8: Look at computer Step 9: Find keyboard Step 10: Type on keyboard
Task: Draw picture Step 1: Walk to home office Step 2: Walk to wall Step 3: Walk to drawing Step 4: Find drawing Step 5: Turn to drawing Step 6: Look at drawing Step 7: Touch drawing Step 8: Pull drawing Step 9: Push drawing
Task: Prepare pot of boiling water Step 1: Walk to dining room Step 2: Walk to pot Step 3: Find pot Step 4: Grab pot Step 5: Walk to sink Step 6: Find faucet Step 7: Switch on faucet Step 8: Put pot on sink Step 9: Grab pot Step 10: Switch off faucet Step 11: Grab pot Step 12: Walk to oven Step 13: Put pot on oven Step 14: Switch on oven Task: Prepare pot of boiling water Step 1: Walk to kitchen Step 2: Walk to sink Step 3: Find sink Step 4: Turn on tap Step 5: Fill kettle with water Step 6: Put kettle on stove Step 7: Turn on stove Step 8: Wait for water to boil Task: Prepare pot of boiling water Step 1: Walk to kitchen Step 2: Walk to cabinet Step 3: Find pot Step 4: Grab pot Step 5: Walk to stove Step 6: Find stove Step 7: Switch on stove
Task: Look at mirror Step 1: Walk to bathroom Step 2: Walk to mirror Step 3: Find mirror Step 4: Turn to mirror Step 5: Look at mirror
Task: Look at mirror Step 1: Walk to bathroom Step 2: Walk to mirror Step 3: Look at mirror
Task: Look at mirror Step 1: Walk to bedroom Step 2: Walk to mirror Step 3: Find mirror Step 4: Turn to mirror Step 5: Look at mirror
Task: Wipe down sink Step 1: Walk to dining room Step 2: Walk to rag Step 3: Find rag Step 4: Grab rag Step 5: Walk to sink Step 6: Wipe sink Step 7: Put back rag
Task: Wipe down sink Step 1: Walk to kitchen Step 2: Walk to sink Step 3: Find sink Step 4: Wipe down sink Step 5: Return to kitchen
Task: Wipe down sink Step 1: Walk to bathroom Step 2: Walk to cleaning solution Step 3: Find cleaning solution Step 4: Grab cleaning solution Step 5: Find sink Step 6: Pour cleaning solution into glasses Step 7: Find faucet Step 8: Switch on faucet Step 9: Scrub sink Step 10: Wash sink
Task: Complete surveys on amazon turk Step 1: Walk to chair Step 2: Sit on chair Step 3: Find computer Step 4: Switch on computer Step 5: Turn to computer Step 6: Look at computer Step 7: Find mouse Step 8: Grab mouse Step 9: Turn to computer Step 10: Point at computer Step 11: Find check Step 12: Grab check Step 13: Read check Step 14: Put back mouse Step 15: Find keyboard Step 16: Type on keyboard Task: Complete surveys on amazon turk Step 1: Walk to home office Step 2: Walk to chair Step 3: Find chair Step 4: Sit on chair Step 5: Find computer Step 6: Switch on computer Step 7: Turn to computer Step 8: Look at computer Step 9: Find keyboard Step 10: Type on keyboard Task: Complete surveys on amazon turk Step 1: Walk to home office Step 2: Walk to desk Step 3: Find chair Step 4: Sit on chair Step 5: Find computer Step 6: Switch on computer
Task: Write book Step 1: Walk to home office Step 2: Walk to laptop Step 3: Find laptop Step 4: Grab laptop Step 5: Find electrical outlet Step 6: Plug in laptop Step 7: Walk to love seat Step 8: Sit on love seat Step 9: Switch on laptop Step 10: Find keyboard Step 11: Type on keyboard
Task: Write book Step 1: Walk to home office Step 2: Walk to chair Step 3: Find chair Step 4: Sit on chair Step 5: Find computer Step 6: Switch on computer Step 7: Turn to computer Step 8: Look at computer Step 9: Find keyboard Step 10: Type on keyboard
Task: Write book Step 1: Walk to home office Step 2: Walk to light Step 3: Find light Step 4: Switch on light Step 5: Find pen Step 6: Grab pen Step 7: Find chair Step 8: Sit on chair Step 9: Turn to novel
Task: Paint ceiling Step 1: Find drawing Step 2: Turn to drawing Step 3: Point at drawing Step 4: Watch drawing Step 5: Touch drawing Step 6: Walk to bedroom
Task: Paint ceiling Step 1: Walk to home office Step 2: Walk to chair Step 3: Find chair Step 4: Sit on chair Step 5: Find paint can Step 6: Open paint can Step 7: Pour paint into paint can Step 8: Close paint can Step 9: Find roller Step 10: Roll paint onto ceiling Task: Paint ceiling Step 1: Walk to home office Step 2: Walk to ceiling Step 3: Walk to painting Step 4: Find painting Step 5: Turn to painting Step 6: Look at painting Step 7: Touch painting Step 8: Pull painting Step 9: Push painting
Task: Change clothes Step 1: Walk to bedroom Step 2: Walk to dresser Step 3: Open dresser Step 4: Find dress Step 5: Grab dress Step 6: Put on dress Step 7: Close dresser
Task: Change clothes Step 1: Walk to closet Step 2: Open closet Step 3: Find clothes Step 4: Put on clothes Step 5: Close closet
Task: Change clothes Step 1: Walk to bedroom Step 2: Walk to dresser Step 3: Find dresser Step 4: Open dresser Step 5: Find pants Step 6: Grab pants Step 7: Put on pants Step 8: Find shirt Step 9: Grab shirt Step 10: Put on shirt Step 11: Close dresser
Task: Make bed Step 1: Walk to bedroom Step 2: Walk to bed Step 3: Find sheets Step 4: Grab sheets Step 5: Pull sheets
Task: Make bed Step 1: Walk to bedroom Step 2: Walk to bed Step 3: Find bed Step 4: Sit on bed Step 5: Find sheet Step 6: Pick up sheet Step 7: Find pillow Step 8: Pick up pillow Step 9: Find blanket Step 10: Pick up blanket
Task: Make bed Step 1: Stand up Step 2: Walk to bedroom Step 3: Walk to bed Step 4: Turn to bed Step 5: Put sheets on bed
Task: Read yourself to sleep Step 1: Walk to bedroom Step 2: Walk to novel Step 3: Find novel Step 4: Grab novel Step 5: Find bed Step 6: Sit on bed Step 7: Find bookmark Step 8: Grab bookmark Step 9: Read novel Step 10: Put bookmark on novel Step 11: Sleep Task: Read yourself to sleep Step 1: Walk to home office Step 2: Walk to chair Step 3: Find chair Step 4: Sit on chair Step 5: Find computer Step 6: Switch on computer Step 7: Turn to computer Step 8: Look at computer Step 9: Find keyboard Step 10: Type on keyboard
Task: Read yourself to sleep Step 1: Walk to bedroom Step 2: Walk to book Step 3: Sit on chair Step 4: Read check Step 5: Sleep
Task: Hang keys Step 1: Walk to bedroom Step 2: Walk to dresser Step 3: Find purse Step 4: Grab purse Step 5: Open purse Step 6: Find keys Step 7: Grab keys Step 8: Find bowl Step 9: Put keys on bowl Step 10: Close purse Step 11: Put back purse
Task: Hang keys Step 1: Walk to home office Step 2: Walk to chair Step 3: Find chair Step 4: Sit on chair Step 5: Find computer Step 6: Switch on computer Step 7: Turn to computer Step 8: Look at computer Step 9: Find keyboard Step 10: Type on keyboard
Task: Hang keys Step 1: Walk to home office Step 2: Walk to table Step 3: Find keys Step 4: Grab keys Step 5: Walk to wall Step 6: Find hanger Step 7: Put keys on hanger
Task: Eat cheese Step 1: Walk to dining room Step 2: Walk to freezer Step 3: Open freezer Step 4: Find cheese Step 5: Grab cheese Step 6: Eat cheese
Task: Eat cheese Step 1: Walk to fridge Step 2: Open fridge Step 3: Find cheese Step 4: Pick up cheese Step 5: Walk to table Step 6: Place cheese on table Step 7: Close fridge Step 8: Walk to chair Step 9: Sit on chair Step 10: Eat cheese
Task: Eat cheese Step 1: Walk to dining room Step 2: Walk to table Step 3: Find table Step 4: Turn to table Step 5: Find chair Step 6: Sit on chair Step 7: Find food Step 8: Grab food Step 9: Find plate Step 10: Put food on plate | 1. What is the focus of the paper regarding large language models?
2. What are the strengths of the proposed approach, particularly in terms of evaluating the capability of pre-trained language models?
3. What are the weaknesses of the paper, especially regarding the choice of language models investigated?
4. Do you have any concerns regarding the translation LM used to overcome restrictions in interactive environments?
5. Are there any typos or minor errors in the review that should be addressed? | Summary Of The Paper
Review | Summary Of The Paper
This paper investigates whether large language models (LLMs) are capable, without additional training, of decomposing high-level tasks into a sequence of instructions (i.e., a plan) and grounding them in an embodied environment. Specifically, the authors rely on prompt engineering to provide enough context to the LLMs for generating sensible instructions. The authors also suggest using another language model to translate generated instructions into actions that are parsable by the embodied environment.
Through a series of experiments, the authors empirically show that larger LLMs are able to produce sensible plans (according to human annotators) that can also be mapped to executable actions within an embodied environment. In their experiments, the authors evaluate GPT-2, GPT-3, and Codex (including different sizes for the GPT-* family). Part of the paper is also dedicated to answering several hypotheses related to the investigation.
Review
What I like about this paper
Rather than trying to solve a particular task using large language models, this paper focuses on analyzing the capability of pre-trained language models to decompose high-level tasks into a sequence of instructions. In other words, what kind of actionable knowledge, if any, is stored in those large language models?
Using two axes of evaluation - executability and correctness - better characterizes the strengths and limitations of the language models studied in this paper. Looking at the discrepancy between the correctness, as reported by the human annotators, and the executability of a plan motivates the need for a translation LM. It can overcome some of the restrictions interactive environments have, i.e. their action space contains instructions with a specific syntax.
I like the suggestion of using another language model to translate generated instructions to grounded actions to boost executability. Also, it makes sense to me to use an autoregressive trajectory correction approach to improve the executability of a plan.
Concerns
My main concern lies with the type of language models used in this analysis. Other than being really popular, it is not clear why the authors decided to only investigate models in the GPT-* family. Is it that other large language models (e.g., T5) are less amenable to prompt-engineering? To me, the main message/claims of the paper seem to be about large language models in general. It would be interesting to know if actionable knowledge is only present in GPT-* models or not.
Typos
p.7: "We find that despite [being] more similar ..." - Missing word.
p.7: "...; we find that for [a] many tasks we ..." - Out-of-place word.
p.7: "... by [our -> the] human annotators" - Suggested change. |
ICLR | Title
Language Models as Zero-Shot Planners: Extracting Actionable Knowledge for Embodied Agents
Abstract
Can world knowledge learned by large language models (LLMs) be used to act in interactive environments? In this paper, we investigate the possibility of grounding high-level tasks, expressed in natural language (i.e. “make breakfast”), to a chosen set of actionable steps (i.e. “open fridge”). While prior work focused on learning from explicit step-by-step examples of how to act, we surprisingly find that if pre-trained LMs are large enough and prompted appropriately, they can effectively decompose high-level tasks into low-level plans without any further training. However, the plans produced naively by LLMs often cannot map precisely to admissible actions. We propose a procedure that conditions on existing demonstrations and semantically translates the plans to admissible actions. Our evaluation in the recent VirtualHome environment shows that the resulting method substantially improves executability over the LLM baseline. The conducted human evaluation reveals a trade-off between executability and correctness but shows a promising sign towards extracting actionable knowledge from language models1.
1 INTRODUCTION
Large language models (LLMs) have made impressive advances in language generation and understanding in recent years (Devlin et al., 2018; Radford et al., 2019; Raffel et al., 2019; Brown et al., 2020). See Bommasani et al. (2021) for a recent summary of their capabilities and impacts. Being trained on large corpora of human-produced language, these models are thought to contain a lot of information about the world (Roberts et al., 2020; Li et al., 2021; BIG-bench collaboration, 2021) - albeit in linguistic form.
We ask whether we can use such knowledge contained in LLMs not just for linguistic tasks, but to make goal-driven decisions that can be enacted in interactive, embodied environments. But we are not simply interested in whether we can train models on a dataset of demonstrations collected for some specific environment – we are instead interested in whether LLMs already contain information necessary to accomplish goals without any additional training.
More specifically, we ask whether world knowledge about how to perform high-level tasks (such as “make breakfast”) can be expanded to a series of groundable actions (such as “open fridge”, “grab milk”, “close fridge”, etc) that can be executed in the environment. For our investigation, we use the recently proposed VirtualHome environment (Puig et al., 2018). It can simulate a large variety of realistic human activities in a household environment and supports the ability to perform them via embodied actions defined with a verb-object syntax. However, due to the open-ended nature of the tasks, it is difficult to autonomously evaluate their success. We rely on human evaluation (conducted on Mechanical Turk) to decide whether sequences of actions meaningfully accomplish posed tasks.
We find that large GPT-3 (Brown et al., 2020) and Codex (Chen et al., 2021) models, when prompted with a single fixed example of a task description and its associated sequence of actions, can produce very plausible action plans for the task we’re interested in. Such completions reflect the information already stored in the model – no model fine-tuning is involved. Additionally, we only observe this
1Results and videos at https://sites.google.com/view/language-model-as-planner
effect in the larger models. Unfortunately, despite their semantic correctness, the produced action plans are often not executable in the environment. Produced actions may not map precisely to admissible actions, or may contain various linguistic ambiguities.
We propose several tools to improve the executability of the model’s outputs. First, we enumerate all admissible action phrases and map the model’s output action to the most semantically similar admissible action (we use a similarity measure between sentence embeddings produced by a RoBERTa model (Liu et al., 2019) in this work, but other choices are possible). Second, we use the model to autoregressively generate actions in a plan by conditioning on past actions that have been made admissible via the technique above. Such on-the-fly correction can keep generation anchored to admissible actions. Third, we provide weak supervision to the model by prompting the model with a known task example similar to the query task. This is somewhat reminiscent of prompt tuning approaches, but does not require access to gradients or internals of the model.
Using the above tools to bias model generation, we find that we improve the executability of instructions from 18% to 79% (see Figure 1) without any invasive modifications to model parameters or any extra gradient or internal information beyond what is returned from the model’s forward pass. This is advantageous because it does not require any modifications to the model training procedure and can fit within existing model serving pipelines. However, we do find there to be a significant drop in the correctness of the instruction sequences generated with the above tools (as judged by humans), indicating a promising step, but one that requires more research on the topic.
To summarize, our paper’s contributions are as follows:
• We show that without any training, large language models can be prompted to generate plausible goal-driven action plans, but such plans are frequently not executable in interactive environments.
• We propose several tools to improve executability of the model generation without invasive probing or modifications to the model.
• We conduct a human evaluation of multiple techniques and models and report on the tradeoffs between executability and semantic correctness.
2 EVALUATION FRAMEWORK
Simulating open-ended tasks that resemble naturalistic human activities requires an environment to support a rich set of diverse interactions, rendering most existing embodied environments unsuitable for our investigation. One exception is VirtualHome (Puig et al., 2018), which models human activities in a typical household. Therefore, we only provide evaluation in this environment. To further measure correctness given open-ended tasks, we conduct a human evaluation. We note that since no further training is involved throughout our investigations, the observations and findings presented in this paper should also translate to similar embodied environments.
2.1 EVALUATED ENVIRONMENT: VIRTUALHOME
Preliminaries In VirtualHome, activities are expressed as programs. Each program consists of a sequence of steps, where each step is written as: [action] 〈arg1〉(id1) ... 〈argn〉(idn). Each action refers to atomic actions such as “walk”, “open”, and “put”. A total of 45 atomic actions are supported by VirtualHome. Different actions take in different numbers of arg necessary for specifying an interaction. Associated with each arg is a unique id specifying the corresponding node in the environment graph, in case multiple instances of the same object class are present in the graph. For the sake of simplicity, we omit the id in the remaining discussions of this paper and allow automatic assignment by the environment. An example program is shown in Appendix 4.
Evaluated Tasks We use the knowledge base collected by VirtualHome for evaluation. The knowledge base contains household activities crowd-sourced from Amazon Mechanical Turk (MTurk). The MTurk workers were asked to provide natural language descriptions of daily household activities and all actionable steps necessary for completing the activities. The descriptions are both given as high-level task descriptions and step-by-step instructions. We omit the use of step-by-step instructions in this work as we desire direct extraction of executable programs from only task descriptions. For evaluations, we randomly sample a subset of 88 high-level tasks, each having one or more annotated ground-truth programs. The remaining 204 tasks are used as the demonstration set, from which we are allowed to select example(s) for prompting language models. Note that no training or fine-tuning is performed using these tasks and their annotations. More details of the evaluated tasks can be found in Appendix 8.6.
2.2 METRICS
A program that commands the agent to wander around in a household environment is highly executable but may not complete the desired task. On the other hand, a program composed of step instructions from knowledge bases can likely complete the task but cannot be executed. The reason is that free-form instructions can be ambiguous and may lack necessary common-sense actions. To this end, we consider two axes for evaluation: executability and correctness.
Executability Executability measures whether an action plan can be correctly parsed and satisfies the common-sense constraints of the environment. To be correctly parsed, an action plan must be syntactically correct and contain only allowed actions and recognizable objects. To satisfy the common-sense constraints, each action step must not violate the set of its pre-conditions (e.g. the agent cannot grab milk from the fridge before opening it) and post-conditions (e.g. the state of the fridge changes from “closed” to “open” after the agent opens it). We report the average executability across all 88 tasks and across all 7 VirtualHome scenes.
Correctness Unlike most embodied environments where the completion of a task can be easily judged, the ambiguous and multimodal nature of natural language task specification makes it impractical to obtain a gold-standard measurement of correctness. One approach could be measuring the similarity of the final environment state produced by executing predicted and ground-truth programs, but VirtualHome initializes an environment differently based on the to-be-executed program, making comparisons difficult if measured in such a way. Therefore, we conduct human evaluation for the highlighted methods. More details of the human evaluations can be found in Appendix 8.5. For the remaining methods and ablation studies, we rely on a match-based metric that measures how similar a generated program is to human annotations. Specifically, we follow Puig et al. (2018) and calculate the longest common subsequence (LCS) between two programs, normalized by the maximum length of the two. In the presence of multiple ground-truth programs for a single task, we take the maximum LCS across the ground-truth programs. However, we note that the majority of the tasks only have one ground-truth annotation, but there are often many plausible ways to complete a certain task, making this metric imperfect at evaluating program correctness2. Although correlation between the two is shown by Puig et al. (2018), we consider it only as a proxy metric in replacement of unscalable human evaluation.
2Although LCS has a mathematical range of [0, 1], we measure the LCS between different ground-truth programs for the same task and find an empirical maximum of 0.489.
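For concreteness, a minimal sketch of this normalized LCS metric is given below, treating each program as a list of step strings; the function names are our own and the implementation is only illustrative.

```python
def normalized_lcs(pred_steps, gt_steps):
    """Longest common subsequence between two step lists,
    normalized by the length of the longer program."""
    n, m = len(pred_steps), len(gt_steps)
    if max(n, m) == 0:
        return 0.0
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if pred_steps[i - 1] == gt_steps[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[n][m] / max(n, m)

def task_lcs(pred_steps, gt_programs):
    """With multiple ground-truth programs, take the maximum LCS across them."""
    return max(normalized_lcs(pred_steps, gt) for gt in gt_programs)
```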
[Figure 2 diagram: three panels (Zero-Shot Planning via Causal LLM; Translation to Admissible Action; Step-By-Step Autoregressive Generation), illustrated with a prompt example “Task: Shave” and a query “Task: Apply lotion”.]
Figure 2: We investigate the possibility of extracting actionable knowledge from pre-trained language models without any additional training. We first show surprising finding that large language models (LLMs) can decompose high-level tasks into sensible low-level action plans (left). To make the action plans executable, we propose to translate each step into admissible action via another LLM (middle). The translated action is appended to the original prompt used for generating the remaining steps (right).
3 METHOD
In this section, we investigate the possibility of extracting actionable knowledge from pre-trained language models without further training. We first give an overview of the common approach to query large language models (LLMs) and how it may be used for embodied agents. Then we describe an inference-time procedure that addresses several deficiencies of the LLM baseline and offers better executability in embodied environments. We break down the proposed procedure into three individual components, each discussed in Sec 3.2, 3.3, and 3.4.
Since LMs excel at dealing with natural language text instead of the specific format required by VirtualHome as described in Section 2.1, we only expose natural language text to LMs. To do this, we define a mapping for each atomic action that parses a natural language phrase to the required format. For instance, “Walk to living room” is converted to “[Walk] 〈living room〉(1)”. When an LM output cannot be parsed to any of the allowed actions, the entire program is considered syntactically incorrect and thus not executable.
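To make this mapping concrete, the following is a minimal Python sketch of such a phrase-to-step conversion; the template dictionary and function name are illustrative stand-ins covering only a handful of the 45 atomic actions, not the exact implementation used in this work.

```python
# Illustrative subset of atomic-action templates (the real set has 45 actions).
ACTION_TEMPLATES = {
    "walk to": "Walk",
    "switch on": "SwitchOn",
    "grab": "Grab",
    "sit on": "Sit",
    "lie on": "Lie",
}

def phrase_to_step(phrase: str, obj_id: int = 1):
    """Convert e.g. 'Walk to living room' into '[Walk] <living room> (1)'.
    Returns None when no template matches; the whole program is then
    treated as syntactically incorrect and thus not executable."""
    text = phrase.strip().lower()
    for prefix, action in ACTION_TEMPLATES.items():
        if text.startswith(prefix):
            obj = text[len(prefix):].strip()
            return f"[{action}] <{obj}> ({obj_id})"
    return None

print(phrase_to_step("Walk to living room"))  # [Walk] <living room> (1)
```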
3.1 PROMPTING
Previous works have shown that large language models pre-trained on a colossal amount of data contain useful world knowledge that can be probed to perform various down-stream tasks (Radford et al., 2019; Brown et al., 2020). Notably, autoregressive LLMs can even perform in-context learning, an approach to solve tasks using only contextual information without gradient updates (Brown et al., 2020). Contextual information is given as part of the input prompt and LMs are asked to complete the remaining text. It often consists of natural language instructions and/or a number of examples containing the desired input/output pairs.
We adopt the same approach to query LLMs to generate action plans for high-level tasks. Specifically, we prepend one example task description sentence and its annotated action plan from the demonstration set to the query task description, as shown in Fig 2. To obtain text completion results, we sample from the autoregressive LLM using temperature sampling and nucleus sampling (Holtzman et al., 2019). We refer to this LM as Planning LM and the approach using this LM for plan generation as Vanilla [LM], where [LM] is replaced by a specific language model such as GPT-3 or Codex.
To further improve the quality of the generated output, we follow Chen et al. (2021), which uses LMs for program synthesis, and sample multiple outputs for each task. However, unlike prior works in program synthesis that choose the sample with the highest unit test pass rate, we only consider the setting where one sample is allowed to be evaluated for each task. This is because repetitive trial-and-error can be dangerous in the real world, and executing many action plans is equivalent to probing the environment for privileged information, which is often considered not viable.
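A minimal sketch of this prompting and sampling scheme is shown below using Hugging Face Transformers, with GPT-2 standing in for the larger Planning LMs that are accessed through an API; the prompt text and sampling values are illustrative.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")        # stand-in Planning LM
planner = AutoModelForCausalLM.from_pretrained("gpt2")

# One demonstration task and its annotated plan, prepended to the query task.
prompt = (
    "Task: Relax on sofa\n"
    "Step 1: Walk to home office\nStep 2: Walk to couch\nStep 3: Find couch\n"
    "Step 4: Sit on couch\nStep 5: Find pillow\nStep 6: Lie on couch\n\n"
    "Task: Apply lotion\nStep 1:"
)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = planner.generate(
    **inputs,
    do_sample=True, temperature=0.3, top_p=0.9,    # temperature + nucleus sampling
    max_new_tokens=64, num_return_sequences=10,    # k samples per query
    pad_token_id=tokenizer.eos_token_id,
)
prompt_len = inputs["input_ids"].shape[1]
samples = [tokenizer.decode(o[prompt_len:], skip_special_tokens=True) for o in outputs]
```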
3.2 ROBUST PARSING BY SEMANTIC TRANSLATION
One issue arises when naively following the above approach to generate action plans for high-level tasks: the action plan is often not executable because LMs are allowed to generate free-form text.
Therefore, most of the time the output cannot be mapped to one unambiguous actionable step. Many reasons can cause such failures: 1) the output does not follow the pre-defined mapping of any atomic action (e.g. “I first walk to the bedroom” does not follow “Walk to 〈PLACE〉”), 2) the output may refer to atomic actions and objects using words unrecognizable by the environment (e.g. “Clean the dirty dishes in the sink”, where “clean” and “dirty dishes in the sink” cannot be mapped to a precise action and object), 3) the output contains lexically ambiguous words (e.g. “Open TV” should instead be “Switch on TV”), or 4) the output may use a disallowed action (e.g. “Microwave the cup”).
Instead of developing a set of rules to transform the free-form text into admissible action steps, we propose to again leverage the world knowledge learned by large language models to semantically translate the action. For each step $\hat{a}$ in the action plan, we aim to find the most similar admissible environment action $a_e$ as measured by cosine similarity:
$\arg\max_{a_e} \; \frac{f(\hat{a}) \cdot f(a_e)}{\lVert f(\hat{a}) \rVert \, \lVert f(a_e) \rVert}$, where $f$ is an embedding function.
To embed the output text and environment actions, we use a BERT-style LM (Devlin et al., 2018; Liu et al., 2019) trained with the Sentence-BERT (Reimers & Gurevych, 2019) objective because of its suitability for sentence modeling. The sentence embedding is obtained by mean-pooling the last-layer hidden states across all tokens. We refer to this LM as Translation LM. Note that this is a different LM than the GPT-style Planning LM discussed in the text so far. Using a single LM for both purposes could well be possible and likely more efficient, but we leave such investigation to future work. While the set of actions in our environment is discrete and possible to exhaustively enumerate, sampling or projection can be employed in larger discrete or continuous action spaces.
Since Translation LM can guarantee that the parsed action is allowed by the environment, we can trade off the semantic soundness of an LM step against how likely it is to map to an admissible action in the environment. This can be achieved by a simple modification to the scheme that we use to choose the best sample from the LM output. Instead of only using the mean token log probability as a ranking metric, we choose the sample with the highest score calculated as s = C + β · logprob, where C is the cosine similarity to the closest allowed action and β is a weighting parameter.
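A minimal sketch of this translation and ranking step with the SentenceTransformers library is given below; the model name, the tiny action bank, and the β value are illustrative assumptions rather than the exact configuration used here.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

translation_lm = SentenceTransformer("stsb-roberta-large")   # stand-in Translation LM

# Tiny illustrative action bank; the real bank enumerates every admissible step.
actions = ["walk to bathroom", "switch on computer", "grab toothbrush", "sit on chair"]
action_embs = translation_lm.encode(actions, normalize_embeddings=True)  # pre-computed

def translate(step_text, mean_logprob, beta=0.3):
    """Return (closest admissible action, cosine similarity C, score s = C + beta*logprob)."""
    query = translation_lm.encode([step_text], normalize_embeddings=True)
    sims = (query @ action_embs.T)[0]      # cosine similarities (unit-norm embeddings)
    best = int(np.argmax(sims))
    score = float(sims[best]) + beta * mean_logprob
    return actions[best], float(sims[best]), score
```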
3.3 AUTOREGRESSIVE TRAJECTORY CORRECTION
Translating each step of the program after the entire program has been synthesized is analogous to open-loop planning and is subject to compounding errors. In practice, LLMs might output compounded instructions for a single step, even though it cannot be completed using one admissible action in the environment. To this end, we can instead interleave plan generation and action translation to allow for automatic trajectory correction. At each step, we first query Planning LM to generate k samples for a single action. Then we calculate the score s for each sample using Translation LM and append the translated action to the unfinished text completion. This way all subsequent steps will be conditioned on admissible actions instead of free-form text output generated by Planning LM. Furthermore, we can use Translation LM to detect out-of-distribution actions, those outside the capabilities of a robot, and terminate a program early instead of mapping to a faulty action. This can be easily implemented by setting a threshold $\epsilon$ such that if $C^{(t)}_{\max} < \epsilon$ at step $t$, the program is terminated early. We empirically show this leads to better executability while maintaining similar correctness of the generated action plans.
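Putting the pieces together, the sketch below shows the interleaved generate-and-translate loop; it assumes a hypothetical helper sample_step_candidates that returns (step text, mean log probability) pairs from the Planning LM, together with the translate function sketched above, and the threshold and step limit are illustrative.

```python
def generate_plan(task, sample_step_candidates, translate, epsilon=0.4, max_steps=20):
    """Closed-loop plan generation: sample k candidates per step, translate the best
    one to an admissible action, and condition subsequent steps on that action."""
    prompt = f"Task: {task}\n"
    plan = []
    for t in range(1, max_steps + 1):
        candidates = sample_step_candidates(prompt + f"Step {t}:")
        scored = [translate(text, logprob) for text, logprob in candidates]
        action, sim, _ = max(scored, key=lambda x: x[2])   # rank by s = C + beta*logprob
        if sim < epsilon:          # out-of-distribution step: terminate the program early
            break
        plan.append(action)
        prompt += f"Step {t}: {action}\n"
    return plan
```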
3.4 DYNAMIC EXAMPLE SELECTION FOR IMPROVED KNOWLEDGE EXTRACTION
So far in the text, we always give the same example in the prompt for all evaluated high-level tasks. However, consider the task of “ordering pizza”. Prompting LLMs with this task may carry the assumption that the agent is initialized in front of a computer, and the LLMs may guide the agent to search for a pizza store and click “checkout my cart”. Although these are reasonable and feasible in the real world, such an assumption cannot always be made, as these interactions may not be supported in simulated environments like VirtualHome. In fact, the closest series of actions that human experts give may be “walking to a computer”, “switching on the computer”, and “typing the keyboard”. Without being finetuned on these data, LLMs would often fail at these tasks. To provide weak supervision at inference time, we propose to use Translation LM to select the most similar task from the demonstration set to be used as the example in the prompt. Specifically, we choose the task
whose high-level description most closely matches the query task, as measured by cosine similarity. This allows Planning LM to reason about how to perform a similar task given a human-annotated example. In our evaluations, we make sure the demonstration set does not overlap with our test queries.
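The same embedding-based retrieval can be sketched for prompt example selection, reusing the Translation LM from above; the function and variable names are our own illustration.

```python
import numpy as np

def select_prompt_example(query_task, demo_tasks, demo_plans, translation_lm):
    """Pick the demonstration whose task description is closest to the query task."""
    embs = translation_lm.encode(demo_tasks + [query_task], normalize_embeddings=True)
    sims = embs[:-1] @ embs[-1]            # cosine similarity to the query description
    best = int(np.argmax(sims))
    return demo_tasks[best], demo_plans[best]   # prepended to the prompt as the example
```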
Combining the various improvements discussed above, we refer to the final approach as Translated [LM], where [LM] is replaced by the specific language model used, such as GPT-3 or Codex.
4 RESULTS
In this section, we first show that language models can generate sensible action plans for many high-level tasks, even without any additional training. Then we highlight their inadequacy when naively applied to embodied environments and demonstrate how this can be improved by again leveraging world knowledge learned by LLMs. Visualizations of generated programs are shown in Fig 3.
Sampling from LMs Pre-trained LMs are sensitive to sampling parameters and the specific example given in the prompt. For all evaluated methods, we perform hyper-parameter search over various sampling parameters, and for methods using a fixed prompt example, we report metrics averaged across three randomly chosen examples. To select the best run for each method, we rank the runs by LCS + executability, each normalized by human-expert scores3. Further details can be found in Appendix 8.3.
Model Choices We find empirically that the combination of Codex-12B and Sentence-RoBERTa-355M works well in our setting. Codex and GPT-3 are accessed using the OpenAI API. The remaining models are accessed through open-source packages, Hugging Face Transformers (Wolf et al., 2019) and SentenceTransformers (Reimers & Gurevych, 2019), without additional modifications.
4.1 DO LLMS CONTAIN ACTIONABLE KNOWLEDGE FOR HIGH-LEVEL TASKS?
We first investigate whether LLMs can generate sensible action plans expressed in free-form language. We use the approach described in Section 3.1 to query pre-trained LLMs. To evaluate the correctness of generated action plans, we conduct human evaluations. For each model, we ask 10 human annotators to determine – by answering “Yes” or “No” – whether each task can be completed using provided action steps. To provide a reference of how humans might rate the action plans provided by other humans, we also ask annotators to rate the ground-truth action plans provided in the VirtualHome dataset for the same set of tasks. In contrast to the free-form text output by LLMs, the ground-truth action plans from VirtualHome are generated via a graphical programming interface that enforces strict syntax, although annotators were allowed to compose necessary actions.
We show the human evaluation results in Fig 1, where the y-axis shows correctness averaged across all tasks and all annotators. Surprisingly, when LLMs are large enough and without imposed syntactic constraints, they can generate highly realistic action plans whose correctness – as deemed by human annotators – even surpasses human-labeled ground-truth. Yet another interesting finding is
3See footnote 2.
that Codex outperforms GPT-3 significantly under the same number of model parameters. One hypothesis could be that by fine-tuning on structured data (docstrings and code), Codex specializes at decomposing a high-level objective into a number of basic operations, even with those operations described in natural language. We also observe some level of correctness for smaller models such as GPT-2. However, inspection of its produced output indicates that it often generates significantly shorter plans by ignoring common-sense actions or by simply rephrasing the given task (e.g. the task “Go to sleep” produces only a single step “Go to bed”). These failure modes sometimes mislead human annotators to mark them correct as the annotators may ignore common-sense actions in their judgment as well, resulting in a higher correctness rate than the quality of the output shows.
4.2 HOW EXECUTABLE ARE THE LLM ACTION PLANS?
We analyze the executability of LLM plans by evaluating them in all 7 household scenes in VirtualHome. As shown in Table 1, we find action plans generated naively by LLMs are generally not very executable. Although smaller models seem to have higher executability, we find that the majority of these executable plans are produced by ignoring the queried task and repeating the given GT example of a different task. This is validated by the fact that smaller models have lower LCS than larger models despite having higher executability, showing that this failure mode is prevalent among smaller models. In contrast, larger models do not suffer severely from this failure mode. Yet as a result of being more expressive, their generated programs are substantially less executable.
4.3 CAN LLM ACTION PLANS BE MADE EXECUTABLE BY ACTION TRANSLATION?
In this section, we evaluate the effectiveness of our proposed procedure of action translation. We first create a bank of all 47522 allowed action steps in the environment, including all possible combinations of atomic actions and allowed arguments/objects. Then we use an off-the-shelf Sentence-RoBERTa (Liu et al., 2019; Reimers & Gurevych, 2019) as Translation LM to create embeddings for actions and output text. For better computational efficiency, we pre-compute the embeddings for all allowed actions, leaving only minor computation overhead for our procedure over the baseline methods at inference time. As shown in Table 1, the executability of generated programs is significantly improved. Furthermore, we also observe improved LCS because the translated action steps precisely follow the program syntax and thus are more similar to ground-truth annotations. One sample output is shown in Fig 1 and a larger random subset of generated samples can be found in Appendix 8.7.
To validate their correctness, we again perform human studies using the same procedure in Sec 4.1. Results are shown in Table 1. We find that despite being more similar to GT, the programs are deemed less correct by humans. By examining the generated output, we observe two main sources of errors. First, we find Translation LM is poor at mapping compounded instructions to a succinct
admissible action. This is partly because Translation LM is trained on a much smaller dataset and contains a much smaller number of parameters, so we expect further improvement by using a larger pre-trained model for translation. The second source of error comes from the imperfect expressivity of the environment; we find that for many tasks we evaluate, certain necessary actions or objects are not implemented. This is also reflected by our human evaluation results of the GT programs, as only half of the programs are considered complete by the human annotators.
5 ANALYSIS AND DISCUSSIONS
5.1 ABLATION OF DESIGN DECISIONS

Methods | Executability | LCS
Translated Codex 12B | 78.57% | 24.72%
- w/o Action Translation | 31.49% | 22.53%
- w/o Dynamic Example | 50.86% | 22.84%
- w/o Iterative | 55.19% | 24.43%

Table 2: Ablation of three proposed techniques.
We perform ablation studies to show the effectiveness and necessity of the three components of our proposed procedure, each described in Sec 3.2, 3.3, and 3.4. As shown in Table 2, leaving out any of the three components leads to decreased performance in both executability and LCS. Notably, removing action translation leads to the most significant executability drop, showing the importance of action translation in extracting executable action plans from LLMs.
5.2 CAN LLMS GENERATE ACTIONABLE PROGRAMS BY FOLLOWING DETAILED INSTRUCTIONS?
Prior works often focus on translating step-by-step instructions into executable programs. We evaluate LLMs under this setting using a prompt format shown in Appendix 8.2. Although this setting is easier as it does not require rich actionable knowledge, detailed instructions can help resolve much of the ambiguity of exactly how to perform a high-level task when multiple solutions are possible. Under this setting, Translated Codex 12B achieves an executability of 78.57% and an LCS of 32.87%, where LCS sees a considerable bump compared to the setting without detailed instructions. Surprisingly, the LCS result is very close to that of a supervised LSTM (Hochreiter & Schmidhuber, 1997) baseline from VirtualHome trained on human-annotated data, which is at 34.00%. Note that since the code to train the baseline and the specific train/test split are not publicly released, we only show results reported in Puig et al. (2018) as a reference. We also cannot compare executability as it is not reported.
5.3 IS ACTION TRANSLATION NECESSARY FOR ACTIONABLE KNOWLEDGE GROUNDING?
The investigations of this paper are two-fold: 1) Is actionable knowledge present in LLMs? 2) Can we ground this actionable knowledge in an interactive environment? In this section, we focus our attention on the second question by conditioning on the assumption that the first question is answered affirmatively. To do this, since successful execution of correct action plans directly measures grounding, we select only the correct plans generated by LLMs and measure how executable they are. We deem an action plan to be correct if 70% or more human annotators decide it is correct.
As shown by Table 3, when an LM is not large enough (e.g. GPT-2), not only does it contain little actionable knowledge, but this knowledge also cannot be grounded at all. GPT-3 and Codex, on the other hand, can generate highly correct action plans in free-form language. However, they do not have the capability to ground their actionable knowledge in interactive environments. What is more interesting, when comparing GPT-3 with 12B parameters and 175B parameters, the ratio of executable plans does not improve with the parameter count. This shows that simply training larger models does not necessarily lead to better knowledge grounding. In the meantime, action translation offers a promising way towards grounding actionable knowledge by producing highly executable plans. However, we again note that it comes at a trade-off of producing less correct plans as compared to its vanilla counterpart, and we hope to see future endeavors for bridging the gap.
6 RELATED WORKS
Large-scale natural language modeling has witnessed rapid advances since the inception of the Transformer architecture (Vaswani et al., 2017). It has been shown by recent works that large language models (LLMs) pre-trained on large unstructured text corpus not only can perform strongly on various down-stream NLP tasks (Devlin et al., 2018; Radford et al., 2019; Raffel et al., 2019; Brown et al., 2020) but also can internalize an implicit knowledge base containing rich information about the world (Petroni et al., 2019; Jiang et al., 2020; Davison et al., 2019; Talmor et al., 2020; Roberts et al., 2020). Furthermore, the learned representations can be used to model relations of entities (Li et al., 2021), retrieve matching visual features (Ilharco et al., 2020), and even as valuable priors when applied to diverse tasks from different modalities (Lu et al., 2021; Tsimpoukelli et al., 2021). Compared to prior works in knowledge extraction that extract single-step factual answers memorized by the models (e.g. “Dante was born in [PLACE]”), we aim to extract sequential action plans to complete an open-ended human activity (e.g. “make breakfast”). We further require these plans to only contain allowed actions and satisfy the pre/post-conditions of actions in order to be executed by an embodied agent.
At the same time, there has also been growing interest and development in grounding language in embodied environment. A series of prior works have investigated the possibility of parsing language instructions into formal logic to resolve various linguistic ambiguities for embodied agents (Artzi & Zettlemoyer, 2013; Misra et al., 2015; Tenorth et al., 2010). However, they often scale poorly to complex tasks and environments. Recently, more research efforts have been put into creating better and more realistic environments with the goal to further advances in this area (Puig et al., 2018; Shridhar et al., 2020a;b; Kolve et al., 2017; Savva et al., 2019). At the same time, by leveraging the better representation power of neural architectures, a number of works have looked into creating instruction-following agents that can perform manipulation (Lynch & Sermanet, 2020), navigation (Majumdar et al., 2020), or both (Suglia et al., 2021; Hill et al., 2020).
Notably, most of these prior works do not leverage full-blown pre-trained LLMs (Suglia et al., 2021) or do not scale to complex human activities (Hill et al., 2020; Lynch & Sermanet, 2020). Perhaps more importantly, few works have evaluated LLMs in an embodiment setting that realizes the full potential of the world knowledge these models contain: the tasks evaluated are often “pick”, “grab”, “open”, etc., which do not resemble the highly diverse activities that humans perform in daily lives. The development of the VirtualHome environment (Puig et al., 2018) enables such a possibility. However, relevant works (Puig et al., 2020; Liao et al., 2019) rely on human-annotated data and perform supervised training from scratch. Due to the lack of rich world knowledge, these models can only generate action plans given step-by-step instructions of how to act or video demonstrations. In this work, we take a step further by conditioning only on the high-level descriptions and by extracting executable action plans from LLMs without any additional training.
7 LIMITATIONS AND CONCLUSION
There are several notable limitations of this work. First, although our approach presents a viable way to ground world knowledge in embodied environments, it is still a trade-off rather than one best solution, since we observe a considerable drop in correctness. Second, we focus on high-level to mid-level grounding, assuming there is a controller that can execute mid-level tasks (such as “grab cup”). Our work does not investigate the usefulness of LLMs for low-level sensorimotor behavior grounding. The third limitation is that we do not incorporate observation context or feedback into our models. To some extent, we approach LLMs in the same way as VirtualHome asks human annotators to give action plans for a given human activity by imagination, in which case the human-generated action plans also do not incorporate observation context. However, we do see incorporating observation context for complex activities as an exciting future direction.
8 APPENDIX
8.1 EXAMPLE PROGRAM IN VIRTUALHOME
8.2 EXAMPLE PROMPT CONTAINING STEP-BY-STEP INSTRUCTIONS
8.3 HYPERPARAMETER SEARCH
For each evaluated method, we perform grid search over the following hyperparameters:
Name | Description | Search Values
epsilon (ε) | OOD cutoff threshold used in iterative action translation | {0, 0.4, 0.8}
temperature | sampling parameter adjusting relative probabilities across tokens | {0.1, 0.3, 0.6}
k | number of samples generated when querying LMs each time | {1, 10}
frequency penalty | OpenAI API specific; penalizes new tokens based on their existing frequency in the text so far | {0.1, 0.3, 0.6, 0.9}
presence penalty | OpenAI API specific; penalizes new tokens based on whether they appear in the text so far | {0.3, 0.5, 0.8}
repetition penalty | Hugging Face Transformers specific; penalizes new tokens based on whether they are repeating existing text | {1.0, 1.2, 1.5, 1.8}
For methods that use a fixed example across evaluated tasks, we search over the following three randomly chosen examples:
Example 1 | Task: Use computer | Step 1: Walk to home office, Step 2: Walk to chair, Step 3: Find chair, Step 4: Sit on chair, Step 5: Find computer, Step 6: Switch on computer, Step 7: Turn to computer, Step 8: Look at computer, Step 9: Find keyboard, Step 10: Type on keyboard
Example 2 | Task: Relax on sofa | Step 1: Walk to home office, Step 2: Walk to couch, Step 3: Find couch, Step 4: Sit on couch, Step 5: Find pillow, Step 6: Lie on couch
Example 3 | Task: Read book | Step 1: Walk to home office, Step 2: Walk to novel, Step 3: Find novel, Step 4: Grab novel, Step 5: Find chair, Step 6: Sit on chair, Step 7: Read novel
8.4 ANALYSIS OF PROGRAM LENGTH
Shorter programs have a natural advantage of being more executable. Consider a task “wash hands” and a corresponding program that only commands an agent to “go to bathroom” without additional steps. The program is obviously incorrect yet trivially executable. To validate that our approach does not simply generate very short programs, we calculate the average program length across the 88 evaluated tasks. Results are shown in Table 6. In addition to the failure mode discussed in Section 4.2 that leads to incorrect yet executable programs, smaller LMs such as GPT-2 also generate programs significantly shorter than those of larger models, making them more executable. In contrast, larger models like Codex-12B generate more expressive programs of high correctness, but they often suffer in executability. We show that action translation leads to the best of both worlds, generating programs that are highly executable while maintaining similar expressiveness in terms of program length.
8.5 DETAILS OF HUMAN EVALUATIONS
Human evaluations are conducted on Amazon Mechanical Turk. For each method, we generate action plans for all 88 high-level tasks. To account for the expressivity of the VirtualHome environment (Puig et al., 2018), we further include ground-truth action plans from the VirtualHome dataset in our human evaluations. The human evaluations are conducted in the form of questionnaires containing all action plans, without revealing which method produced each plan. The questionnaire contains the following instructions at the top:
For every question below, determine whether the task can be completed in any reasonable scenario using the provided steps. In other words, can the task be decomposed into these steps? Note that simply re-stating the task does not mean completing it.
Human annotators are required to answer all the questions in the questionnaire, where each question is an action plan generated by a method unknown to the annotator. The order of the questions is randomly permuted before being presented to each annotator. For each question, the annotators answer either “Yes” or “No” to indicate whether they believe the action plan completes the task. For each method, we report the correctness percentage averaged across 10 participating human annotators.
8.6 DETAILS OF VIRTUALHOME TASKS
VirtualHome ActivityPrograms is a knowledge base collected on Mechanical Turk by Puig et al. (2018). The knowledge base features real household activities of humans and corresponding sequences of basic actions for robots to perform these activities. Each entry contains a high-level task name (e.g. “Watch TV”), a mid-level detailed description (e.g. “Sit on my couch directly opposite my TV, switch on my TV with the remote control and watch”), and a sequence of allowed actions (a.k.a. a program) to perform the task in VirtualHome. There are 292 distinct high-level tasks and a total of 2821 different entries in the knowledge base, where each high-level task may correspond to one or several programs. The knowledge base covers a diverse set of real human activities and a wide range of different actions and objects. Puig et al. (2018) also conduct an additional human evaluation on 100 randomly sampled programs and show that 64% of them are complete (i.e. contain all necessary steps). Readers are encouraged to refer to Puig et al. (2018) for a more comprehensive analysis of the dataset.
Train/Test Split From all 292 distinct high-level tasks in the ActivityPrograms knowledge base, we randomly sample 88 high-level tasks for evaluation. The remaining 204 tasks are used as the demonstration set, from which we are allowed to select example(s) for prompting language models; in the case of supervised fine-tuning, they are used to fine-tune pre-trained language models.
8.7 RANDOM SAMPLES OF GT, VANILLA CODEX, AND TRANSLATED CODEX
Annotated Ground-Truth Codex Translated Codex (Ours) Task: Go to sleep Step 1: Walk to bedroom Step 2: Walk to bed Step 3: Find bed Step 4: Turn to bed Step 5: Look at bed Step 6: Sit on bed Step 7: Sleep Task: Go to sleep Step 1: Walk to bedroom Step 2: Walk to bed Step 3: Find bed Step 4: Sit on bed Step 5: Find pillow Step 6: Sleep on pillow Task: Go to sleep Step 1: Walk to bedroom Step 2: Walk to bed Step 3: Lie on bed Step 4: Sleep
Task: Browse internet Step 1: Walk to home office Step 2: Walk to chair Step 3: Find chair Step 4: Sit on chair Step 5: Find computer Step 6: Switch on computer Step 7: Turn to computer Step 8: Look at computer Step 9: Find keyboard Step 10: Type on keyboard
Task: Browse internet Step 1: Walk to home office Step 2: Walk to chair Step 3: Find chair Step 4: Sit on chair Step 5: Find computer Step 6: Switch on computer Step 7: Turn to computer Step 8: Look at computer Step 9: Find keyboard Step 10: Type on keyboard
Task: Browse internet Step 1: Walk to home office Step 2: Walk to desk Step 3: Find chair Step 4: Sit on chair Step 5: Find computer Step 6: Switch on computer Step 7: Find mouse Step 8: Grab mouse Step 9: Turn to computer Step 10: Point at computer
Task: Listen to music Step 1: Walk to bedroom Step 2: Walk to computer Step 3: Find computer Step 4: Turn to computer Step 5: Look at computer Step 6: Walk to computer Step 7: Find chair Step 8: Pull chair Step 9: Sit on chair Step 10: Find headset Step 11: Grab headset Step 12: Put on headset Step 13: Switch on computer Step 14: Find keyboard Step 15: Type on keyboard
Task: Listen to music Step 1: Walk to home office Step 2: Walk to chair Step 3: Find chair Step 4: Sit on chair Step 5: Find computer Step 6: Switch on computer Step 7: Turn to computer Step 8: Look at computer Step 9: Find keyboard Step 10: Type on keyboard
Task: Listen to music Step 1: Walk to home office Step 2: Walk to computer Step 3: Sit on chair Step 4: Switch on computer Step 5: Find music stand
Task: Brush teeth Step 1: Walk to bathroom Step 2: Walk to tooth paste Step 3: Find tooth paste Step 4: Grab tooth paste Step 5: Find toothbrush Step 6: Grab toothbrush
Task: Brush teeth Step 1: Walk to bathroom Step 2: Walk to sink Step 3: Find toothbrush Step 4: Pick up toothbrush Step 5: Put toothbrush in mouth Step 6: Move brush around mouth for two minutes Step 7: Spit out toothpaste and brush into sink Step 8: Turn on water in sink and rinse brush for one minute Step 9: Turn off water in sink and return brush to cupboard
Task: Brush teeth Step 1: Walk to bathroom Step 2: Open door Step 3: Walk to sink Step 4: Put pot on sink Step 5: Put brush on toothbrush Step 6: Turn to toothpaste Step 7: Put toothpaste on toothbrush Step 8: Put teeth on toothbrush
Task: Dry soap bottles Step 1: Find towel Step 2: Find soap Step 3: Grab soap Step 4: Wipe soap Step 5: Put back soap Step 6: Find shampoo Step 7: Grab shampoo Step 8: Wipe shampoo Step 9: Put back shampoo Step 10: Find shampoo Step 11: Grab shampoo Step 12: Wipe shampoo Step 13: Put back shampoo
Task: Dry soap bottles Step 1: Walk to kitchen Step 2: Walk to sink Step 3: Find sink Step 4: Turn on water tap Step 5: Wait for water to be hot enough Step 6: Fill container with hot water Step 7: Find soap bottles Step 8: Pour soap into container with hot water Step 9: Wait for soap to dissolve in water Step 10: Turn off tap and empty container into sink
Task: Dry soap bottles Step 1: Walk to bathroom Step 2: Walk to shower Step 3: Find soap
Task: Vacuum Step 1: Walk to home office Step 2: Walk to computer Step 3: Find vacuum cleaner Step 4: Switch on vacuum cleaner Step 5: Switch off vacuum cleaner
Task: Vacuum Step 1: Walk to hallway Step 2: Walk to closet Step 3: Find closet Step 4: Open closet Step 5: Find vacuum cleaner Step 6: Turn on vacuum cleaner Step 7: Walk to living room Step 8: Vacuum floor in living room
Task: Vacuum Step 1: Walk to home office Step 2: Walk to dresser Step 3: Open dresser Step 4: Find vacuum cleaner Step 5: Grab vacuum cleaner Step 6: Pull vacuum cleaner Step 7: Plug in vacuum cleaner Step 8: Switch on vacuum cleaner Step 9: Pull vacuum cleaner Step 10: Push vacuum cleaner
Task: Find dictionary Step 1: Walk to home office Step 2: Walk to bookshelf Step 3: Turn to bookshelf Step 4: Look at bookshelf Step 5: Turn to bookshelf Step 6: Look at bookshelf Step 7: Turn to bookshelf Step 8: Look at bookshelf Step 9: Turn to bookshelf Step 10: Look at bookshelf Step 11: Turn to bookshelf Step 12: Look at bookshelf Step 13: Turn to bookshelf Step 14: Look at bookshelf Step 15: Find novel Step 16: Grab novel
Task: Find dictionary Step 1: Walk to home office Step 2: Walk to chair Step 3: Find chair Step 4: Sit on chair Step 5: Find dictionary Step 6: Open dictionary Step 7: Look at dictionary
Task: Find dictionary Step 1: Walk to home office Step 2: Walk to book Step 3: Find textbook
Task: Wash monitor Step 1: Walk to home office Step 2: Walk to computer Step 3: Find computer Step 4: Turn to computer Step 5: Look at computer Step 6: Switch off computer Step 7: Find brush Step 8: Turn to brush Step 9: Look at brush Step 10: Grab brush Step 11: Wipe computer Step 12: Drop brush Step 13: Find napkin Step 14: Grab napkin Step 15: Wash napkin Step 16: Wipe computer
Task: Wash monitor Step 1: Walk to home office Step 2: Walk to chair Step 3: Find chair Step 4: Sit on chair Step 5: Find computer Step 6: Switch on computer Step 7: Turn to computer Step 8: Look at computer Step 9: Find keyboard Step 10: Type on keyboard
Task: Wash monitor Step 1: Walk to check Step 2: Walk to desk
Task: Read newspaper Step 1: Walk to home office Step 2: Walk to chair Step 3: Find chair Step 4: Sit on chair Step 5: Find newspaper Step 6: Grab newspaper Step 7: Open newspaper Step 8: Read newspaper
Task: Read newspaper Step 1: Walk to home office Step 2: Walk to chair Step 3: Find chair Step 4: Sit on chair Step 5: Find newspaper Step 6: Open newspaper Step 7: Read newspaper
Task: Read newspaper Step 1: Walk to home office Step 2: Walk to chair Step 3: Find chair Step 4: Sit on chair Step 5: Find newspaper Step 6: Grab newspaper Step 7: Read newspaper
Task: Apply lotion Step 1: Find bed Step 2: Turn to bed Step 3: Point at bed Step 4: Wash bed Step 5: Turn to bed Step 6: Look at bed Step 7: Push bed Step 8: Sit on bed
Task: Apply lotion Step 1: Walk to bathroom Step 2: Walk to cabinet Step 3: Open cabinet Step 4: Find lotion Step 5: Apply lotion on hands Step 6: Rub hands together for 10 seconds Step 7: Apply lotion on face
Task: Apply lotion Step 1: Walk to bathroom Step 2: Turn to lotionbottle Step 3: Grab lotionbottle
Task: Draw picture Step 1: Walk to home office Step 2: Walk to chair Step 3: Find chair Step 4: Sit on chair Step 5: Find desk Step 6: Open desk Step 7: Find check Step 8: Grab check Step 9: Put check on desk Step 10: Find pencil Step 11: Grab pencil Step 12: Close desk
Task: Draw picture Step 1: Walk to home office Step 2: Walk to chair Step 3: Find chair Step 4: Sit on chair Step 5: Find computer Step 6: Switch on computer Step 7: Turn to computer Step 8: Look at computer Step 9: Find keyboard Step 10: Type on keyboard
Task: Draw picture Step 1: Walk to home office Step 2: Walk to wall Step 3: Walk to drawing Step 4: Find drawing Step 5: Turn to drawing Step 6: Look at drawing Step 7: Touch drawing Step 8: Pull drawing Step 9: Push drawing
Task: Prepare pot of boiling water Step 1: Walk to dining room Step 2: Walk to pot Step 3: Find pot Step 4: Grab pot Step 5: Walk to sink Step 6: Find faucet Step 7: Switch on faucet Step 8: Put pot on sink Step 9: Grab pot Step 10: Switch off faucet Step 11: Grab pot Step 12: Walk to oven Step 13: Put pot on oven Step 14: Switch on oven
Task: Prepare pot of boiling water Step 1: Walk to kitchen Step 2: Walk to sink Step 3: Find sink Step 4: Turn on tap Step 5: Fill kettle with water Step 6: Put kettle on stove Step 7: Turn on stove Step 8: Wait for water to boil
Task: Prepare pot of boiling water Step 1: Walk to kitchen Step 2: Walk to cabinet Step 3: Find pot Step 4: Grab pot Step 5: Walk to stove Step 6: Find stove Step 7: Switch on stove
Task: Look at mirror Step 1: Walk to bathroom Step 2: Walk to mirror Step 3: Find mirror Step 4: Turn to mirror Step 5: Look at mirror
Task: Look at mirror Step 1: Walk to bathroom Step 2: Walk to mirror Step 3: Look at mirror
Task: Look at mirror Step 1: Walk to bedroom Step 2: Walk to mirror Step 3: Find mirror Step 4: Turn to mirror Step 5: Look at mirror
Task: Wipe down sink Step 1: Walk to dining room Step 2: Walk to rag Step 3: Find rag Step 4: Grab rag Step 5: Walk to sink Step 6: Wipe sink Step 7: Put back rag
Task: Wipe down sink Step 1: Walk to kitchen Step 2: Walk to sink Step 3: Find sink Step 4: Wipe down sink Step 5: Return to kitchen
Task: Wipe down sink Step 1: Walk to bathroom Step 2: Walk to cleaning solution Step 3: Find cleaning solution Step 4: Grab cleaning solution Step 5: Find sink Step 6: Pour cleaning solution into glasses Step 7: Find faucet Step 8: Switch on faucet Step 9: Scrub sink Step 10: Wash sink
Task: Complete surveys on amazon turk Step 1: Walk to chair Step 2: Sit on chair Step 3: Find computer Step 4: Switch on computer Step 5: Turn to computer Step 6: Look at computer Step 7: Find mouse Step 8: Grab mouse Step 9: Turn to computer Step 10: Point at computer Step 11: Find check Step 12: Grab check Step 13: Read check Step 14: Put back mouse Step 15: Find keyboard Step 16: Type on keyboard
Task: Complete surveys on amazon turk Step 1: Walk to home office Step 2: Walk to chair Step 3: Find chair Step 4: Sit on chair Step 5: Find computer Step 6: Switch on computer Step 7: Turn to computer Step 8: Look at computer Step 9: Find keyboard Step 10: Type on keyboard
Task: Complete surveys on amazon turk Step 1: Walk to home office Step 2: Walk to desk Step 3: Find chair Step 4: Sit on chair Step 5: Find computer Step 6: Switch on computer
Task: Write book Step 1: Walk to home office Step 2: Walk to laptop Step 3: Find laptop Step 4: Grab laptop Step 5: Find electrical outlet Step 6: Plug in laptop Step 7: Walk to love seat Step 8: Sit on love seat Step 9: Switch on laptop Step 10: Find keyboard Step 11: Type on keyboard
Task: Write book Step 1: Walk to home office Step 2: Walk to chair Step 3: Find chair Step 4: Sit on chair Step 5: Find computer Step 6: Switch on computer Step 7: Turn to computer Step 8: Look at computer Step 9: Find keyboard Step 10: Type on keyboard
Task: Write book Step 1: Walk to home office Step 2: Walk to light Step 3: Find light Step 4: Switch on light Step 5: Find pen Step 6: Grab pen Step 7: Find chair Step 8: Sit on chair Step 9: Turn to novel
Task: Paint ceiling Step 1: Find drawing Step 2: Turn to drawing Step 3: Point at drawing Step 4: Watch drawing Step 5: Touch drawing Step 6: Walk to bedroom
Task: Paint ceiling Step 1: Walk to home office Step 2: Walk to chair Step 3: Find chair Step 4: Sit on chair Step 5: Find paint can Step 6: Open paint can Step 7: Pour paint into paint can Step 8: Close paint can Step 9: Find roller Step 10: Roll paint onto ceiling
Task: Paint ceiling Step 1: Walk to home office Step 2: Walk to ceiling Step 3: Walk to painting Step 4: Find painting Step 5: Turn to painting Step 6: Look at painting Step 7: Touch painting Step 8: Pull painting Step 9: Push painting
Task: Change clothes Step 1: Walk to bedroom Step 2: Walk to dresser Step 3: Open dresser Step 4: Find dress Step 5: Grab dress Step 6: Put on dress Step 7: Close dresser
Task: Change clothes Step 1: Walk to closet Step 2: Open closet Step 3: Find clothes Step 4: Put on clothes Step 5: Close closet
Task: Change clothes Step 1: Walk to bedroom Step 2: Walk to dresser Step 3: Find dresser Step 4: Open dresser Step 5: Find pants Step 6: Grab pants Step 7: Put on pants Step 8: Find shirt Step 9: Grab shirt Step 10: Put on shirt Step 11: Close dresser
Task: Make bed Step 1: Walk to bedroom Step 2: Walk to bed Step 3: Find sheets Step 4: Grab sheets Step 5: Pull sheets
Task: Make bed Step 1: Walk to bedroom Step 2: Walk to bed Step 3: Find bed Step 4: Sit on bed Step 5: Find sheet Step 6: Pick up sheet Step 7: Find pillow Step 8: Pick up pillow Step 9: Find blanket Step 10: Pick up blanket
Task: Make bed Step 1: Stand up Step 2: Walk to bedroom Step 3: Walk to bed Step 4: Turn to bed Step 5: Put sheets on bed
Task: Read yourself to sleep Step 1: Walk to bedroom Step 2: Walk to novel Step 3: Find novel Step 4: Grab novel Step 5: Find bed Step 6: Sit on bed Step 7: Find bookmark Step 8: Grab bookmark Step 9: Read novel Step 10: Put bookmark on novel Step 11: Sleep
Task: Read yourself to sleep Step 1: Walk to home office Step 2: Walk to chair Step 3: Find chair Step 4: Sit on chair Step 5: Find computer Step 6: Switch on computer Step 7: Turn to computer Step 8: Look at computer Step 9: Find keyboard Step 10: Type on keyboard
Task: Read yourself to sleep Step 1: Walk to bedroom Step 2: Walk to book Step 3: Sit on chair Step 4: Read check Step 5: Sleep
Task: Hang keys Step 1: Walk to bedroom Step 2: Walk to dresser Step 3: Find purse Step 4: Grab purse Step 5: Open purse Step 6: Find keys Step 7: Grab keys Step 8: Find bowl Step 9: Put keys on bowl Step 10: Close purse Step 11: Put back purse
Task: Hang keys Step 1: Walk to home office Step 2: Walk to chair Step 3: Find chair Step 4: Sit on chair Step 5: Find computer Step 6: Switch on computer Step 7: Turn to computer Step 8: Look at computer Step 9: Find keyboard Step 10: Type on keyboard
Task: Hang keys Step 1: Walk to home office Step 2: Walk to table Step 3: Find keys Step 4: Grab keys Step 5: Walk to wall Step 6: Find hanger Step 7: Put keys on hanger
Task: Eat cheese Step 1: Walk to dining room Step 2: Walk to freezer Step 3: Open freezer Step 4: Find cheese Step 5: Grab cheese Step 6: Eat cheese
Task: Eat cheese Step 1: Walk to fridge Step 2: Open fridge Step 3: Find cheese Step 4: Pick up cheese Step 5: Walk to table Step 6: Place cheese on table Step 7: Close fridge Step 8: Walk to chair Step 9: Sit on chair Step 10: Eat cheese
Task: Eat cheese Step 1: Walk to dining room Step 2: Walk to table Step 3: Find table Step 4: Turn to table Step 5: Find chair Step 6: Sit on chair Step 7: Find food Step 8: Grab food Step 9: Find plate Step 10: Put food on plate

1. What is the focus of the paper regarding pre-trained language models?
2. What are the strengths of the paper, particularly in its approach and ease of understanding?
3. What are the weaknesses of the paper regarding its evaluation and experimental design?
4. Do you have any concerns about the data used in the study, such as train and test splits and task similarity?
5. How does the reviewer assess the metrics used in the paper to evaluate model performance?
6. Are there any suggestions for improving the proposed approach and providing more meaningful results?
7. Are there any questions regarding the annotation process for model-generated plans?
8. Can the authors provide a summary of the tasks used in the VirtualHome benchmark?
9. Are there any recommendations for adding more technical novelty to the paper?
10. Are there any requests for additional analysis or reporting of model prediction errors?
Summary Of The Paper
This paper explores the ability of pre-trained language models to generate plans or action sequences from a text instruction. The in-context learning ability of language models is used where the model is prompted with an example instruction and corresponding action sequence and the query instruction. Since text sequences generated by the language model may not be directly usable in the agent environment, the closest valid text actions are identified using a retrieval approach. Experiments compare performance of different language models on tasks from the VirtualHome benchmark.
Review
After rebuttal: I have read the author response. Some of my comments regarding the dataset and evaluation were partially addressed, so I am raising my score. However, I feel ambivalent about the paper due to concerns about the evaluation (See detailed comments in discussion thread).
====
Pros
Using language models to infer actionable plans from text instructions is interesting
Paper is generally easy to follow
Cons
The paper doesn’t provide sufficient details about the data, the train and test splits, and the challenging aspects of the task.
The pipelined approach considered here seems fairly limited
Limited technical novelty
Weak experiments
The paper should provide more details about the data. For instance, how similar are the train and test tasks? I would encourage the authors to provide a summary of the tasks. How was the 204/88 task split decided?
The metrics considered in the paper are less ideal and do not provide a holistic view of model performance. For instance,
Executability doesn’t consider whether predicted actions are relevant to the task
LCS penalizes actions that don’t appear in ground truth
Correctness: Why is the correctness of GT plans only 55%? This makes it very hard to interpret these results and I am not sure how meaningful the results are. Furthermore, it’s hard to say if the Translated variants in Table 1 are better than the plain language models. While these models are better in terms of executability and LCS, they are inferior in terms of correctness.
How were the human annotators instructed to label model generated plans?
Is it possible to define a straightforward metric that identifies whether a given task was successfully completed or not?
The proposed approach should be compared against other baselines. For instance, a fine-tuning baseline can be considered where the model translates instructions to plans.
I would encourage the authors to analyze and report model prediction errors.
Links to appendix are broken.
ICLR
Title
Language Models as Zero-Shot Planners: Extracting Actionable Knowledge for Embodied Agents
Abstract
Can world knowledge learned by large language models (LLMs) be used to act in interactive environments? In this paper, we investigate the possibility of grounding high-level tasks, expressed in natural language (i.e. “make breakfast”), to a chosen set of actionable steps (i.e. “open fridge”). While prior work focused on learning from explicit step-by-step examples of how to act, we surprisingly find that if pre-trained LMs are large enough and prompted appropriately, they can effectively decompose high-level tasks into low-level plans without any further training. However, the plans produced naively by LLMs often cannot map precisely to admissible actions. We propose a procedure that conditions on existing demonstrations and semantically translates the plans to admissible actions. Our evaluation in the recent VirtualHome environment shows that the resulting method substantially improves executability over the LLM baseline. The conducted human evaluation reveals a trade-off between executability and correctness but shows a promising sign towards extracting actionable knowledge from language models1.
1 INTRODUCTION
Large language models (LLMs) have made impressive advances in language generation and understanding in recent years (Devlin et al., 2018; Radford et al., 2019; Raffel et al., 2019; Brown et al., 2020). See Bommasani et al. (2021) for a recent summary of their capabilities and impacts. Being trained on large corpora of human-produced language, these models are thought to contain a lot of information about the world (Roberts et al., 2020; Li et al., 2021; BIG-bench collaboration, 2021) - albeit in linguistic form.
We ask whether we can use such knowledge contained in LLMs not just for linguistic tasks, but to make goal-driven decisions that can be enacted in interactive, embodied environments. But we are not simply interested in whether we can train models on a dataset of demonstrations collected for some specific environment – we are instead interested in whether LLMs already contain information necessary to accomplish goals without any additional training.
More specifically, we ask whether world knowledge about how to perform high-level tasks (such as “make breakfast”) can be expanded to a series of groundable actions (such as “open fridge”, “grab milk”, “close fridge”, etc) that can be executed in the environment. For our investigation, we use the recently proposed VirtualHome environment (Puig et al., 2018). It can simulate a large variety of realistic human activities in a household environment and supports the ability to perform them via embodied actions defined with a verb-object syntax. However, due to the open-ended nature of the tasks, it is difficult to autonomously evaluate their success. We rely on human evaluation (conducted on Mechanical Turk) to decide whether sequences of actions meaningfully accomplish posed tasks.
We find that large GPT-3 (Brown et al., 2020) and Codex (Chen et al., 2021) models, when prompted with a single fixed example of a task description and its associated sequence of actions, can produce very plausible action plans for the task we’re interested in. Such completions reflect the information already stored in the model – no model fine-tuning is involved. Additionally, we only observe this effect in the larger models. Unfortunately, despite their semantic correctness, the produced action plans are often not executable in the environment. Produced actions may not map precisely to admissible actions, or may contain various linguistic ambiguities.
1Results and videos at https://sites.google.com/view/language-model-as-planner
We propose several tools to improve executability of the model’s outputs. First, we enumerate all admissible action phrases and map the model’s output action to the most semantically-similar admissible action (we use a similarity measure between sentence embeddings produced by a RoBERTa model (Liu et al., 2019) in this work, but other choices are possible). Second, we use the model to autoregressively generate actions in a plan by conditioning on past actions that have been made admissible via the technique above. Such on-the-fly correction can keep generation anchored to admissible actions. Third, we provide weak supervision to the model by prompting the model with a known task example similar to the query task. This is somewhat reminiscent of prompt tuning approaches, but does not require access to gradients or internals of the model.
Using above tools to bias model generation, we find that we improve executability of instructions from 18% to 79% (see Figure 1) without any invasive modifications to model parameters or any extra gradient or internal information beyond what is returned from the model’s forward pass. This is advantageous because it does not require any modifications to model training procedure and can fit within existing model serving pipelines. However, we do find there to be a significant drop in correctness of the instruction sequences generated with above tools (as judged by humans), indicating a promising step, but requiring more research on the topic.
To summarize, our paper’s contributions are as follows:
• We show that without any training, large language models can be prompted to generate plausible goal-driven action plans, but such plans are frequently not executable in interactive environments.
• We propose several tools to improve executability of the model generation without invasive probing or modifications to the model.
• We conduct a human evaluation of multiple techniques and models and report on the tradeoffs between executability and semantic correctness.
2 EVALUATION FRAMEWORK
Simulating open-ended tasks that resemble naturalistic human activities requires an environment to support a rich set of diverse interactions, rendering most existing embodied environments unsuitable for our investigation. One exception is VirtualHome (Puig et al., 2018), which models human activities in a typical household. Therefore, we only provide evaluation in this environment. To further measure correctness given open-ended tasks, we conduct a human evaluation. We note that since no further training is involved throughout our investigations, the observations and findings presented in this paper should also translate to similar embodied environments.
2.1 EVALUATED ENVIRONMENT: VIRTUALHOME
Preliminaries In VirtualHome, activities are expressed as programs. Each program consists of a sequence of steps, where each step is written as: [action] 〈arg1〉(id1) ... 〈argn〉(idn). Each action refers to atomic actions such as “walk”, “open”, and “put”. A total of 45 atomic actions are supported by VirtualHome. Different actions take in different numbers of arg necessary for specifying an interaction. Associated with each arg is a unique id specifying the corresponding node in the environment graph, in case multiple instances of the same object class are present in the graph. For the sake of simplicity, we omit the id in the remaining discussions of this paper and allow automatic assignment by the environment. An example program is shown in Appendix 8.1.
Evaluated Tasks We use the knowledge base collected by VirtualHome for evaluation. The knowledge base contains household activities crowd-sourced from Amazon Mechanical Turk (MTurk). The MTurk workers were asked to provide natural language descriptions of daily household activities and all actionable steps necessary for completing the activities. The descriptions are both given as high-level task descriptions and step-by-step instructions. We omit the use of step-by-step instructions in this work as we desire direct extraction of executable programs from only task descriptions. For evaluations, we randomly sample a subset of 88 high-level tasks, each having one or more annotated ground-truth programs. The remaining 204 tasks are used as the demonstration set, from which we are allowed to select example(s) for prompting language models. Note that no training or fine-tuning is performed using these tasks and their annotations. More details of the evaluated tasks can be found in Appendix 8.6.
2.2 METRICS
A program that commands the agent to wander around in a household environment is highly executable but may not complete the desired task. On the other hand, a program composed of step instructions from knowledge bases can likely complete the task but cannot be executed. The reason is that free-form instructions can be ambiguous and may lack necessary common-sense actions. To this end, we consider two axes for evaluation: executability and correctness.
Executability Executability measures whether an action plan can be correctly parsed and satisfies the common-sense constraints of the environment. To be correctly parsed, an action plan must be syntactically correct and contain only allowed actions and recognizable objects. To satisfy the common-sense constraints, each action step must not violate the set of its pre-conditions (e.g. the agent cannot grab milk from the fridge before opening it) and post-conditions (e.g. the state of the fridge changes from “closed” to “open” after the agent opens it). We report the average executability across all 88 tasks and across all 7 VirtualHome scenes.
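To illustrate what such a check involves, the toy sketch below parses each step and verifies pre-/post-conditions against a small symbolic state; the action set and condition logic are hypothetical stand-ins, not VirtualHome's actual simulator:

```python
# Toy executability check; condition tables are illustrative only.
ALLOWED_ACTIONS = {"walk", "open", "close", "grab", "put", "switchon"}

def is_executable(program, container_of, container_state):
    """program: list of (action, object) pairs.
    container_of: object -> container holding it (or None).
    container_state: container -> "open" or "closed"."""
    for action, obj in program:
        if action not in ALLOWED_ACTIONS:              # unparsable or disallowed action
            return False
        if action == "grab":                           # pre-condition: container must be open
            c = container_of.get(obj)
            if c is not None and container_state.get(c) == "closed":
                return False
        if action == "open":
            if container_state.get(obj) != "closed":   # pre-condition: must currently be closed
                return False
            container_state[obj] = "open"              # post-condition: state change
        if action == "close":
            container_state[obj] = "closed"
    return True

# "grab milk" fails before "open fridge" but succeeds after it.
print(is_executable([("grab", "milk")],
                    {"milk": "fridge"}, {"fridge": "closed"}))                     # False
print(is_executable([("open", "fridge"), ("grab", "milk")],
                    {"milk": "fridge"}, {"fridge": "closed"}))                     # True
```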
Correctness Unlike most embodied environments where the completion of a task can be easily judged, the ambiguous and multimodal nature of natural language task specification makes it impractical to obtain a gold-standard measurement of correctness. One approach could be measuring similarity of the final environment state produced by executing predicted and ground-truth programs, but VirtualHome initializes an environment differently based on the to-be-executed program, making comparisons difficult if measured in such a way. Therefore, we conduct human evaluation for the highlighted methods. More details of the human evaluations can be found in Appendix 8.5. For the remaining methods and ablation studies, we rely on a match-based metric that measures how similar a generated program is to human annotations. Specifically, we follow Puig et al. (2018) and calculate the longest common subsequence (LCS) between two programs, normalized by the maximum length of the two. In the presence of multiple ground-truth programs for a single task, we take the maximum LCS across the ground-truth programs. However, we note that the majority of the tasks only have one ground-truth annotation, but there are often many plausible ways to complete a certain task, making this metric imperfect at evaluating program correctness2. Although correlation between the two is shown by Puig et al. (2018), we consider it only as a proxy metric in replacement of unscalable human evaluation.
2Although LCS has a mathematical range of [0, 1], we measure the LCS between different ground-truth programs for the same task and find an empirical maximum of 0.489.
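A minimal sketch of the normalized LCS metric described above; step sequences are compared as lists of strings, and the function names are ours:

```python
def normalized_lcs(pred_steps, gt_steps):
    """Longest common subsequence of two step lists, normalized by the longer length."""
    m, n = len(pred_steps), len(gt_steps)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            dp[i + 1][j + 1] = dp[i][j] + 1 if pred_steps[i] == gt_steps[j] \
                else max(dp[i][j + 1], dp[i + 1][j])
    return dp[m][n] / max(m, n, 1)

def best_lcs(pred_steps, gt_programs):
    """With multiple ground-truth programs for a task, take the maximum LCS."""
    return max(normalized_lcs(pred_steps, gt) for gt in gt_programs)

print(normalized_lcs(["walk to bed", "sleep"],
                     ["walk to bedroom", "walk to bed", "sleep"]))   # 2/3
```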
[Figure 2 panels: “Zero-Shot Planning via Causal LLM”, “Translation to Admissible Action”, and “Step-By-Step Autoregressive Generation”; prompt and example steps omitted.]
Figure 2: We investigate the possibility of extracting actionable knowledge from pre-trained language models without any additional training. We first show surprising finding that large language models (LLMs) can decompose high-level tasks into sensible low-level action plans (left). To make the action plans executable, we propose to translate each step into admissible action via another LLM (middle). The translated action is appended to the original prompt used for generating the remaining steps (right).
3 METHOD
In this section, we investigate the possibility of extracting actionable knowledge from pre-trained language models without further training. We first give an overview of the common approach to query large language models (LLMs) and how it may be used for embodied agents. Then we describe an inference-time procedure that addresses several deficiencies of the LLM baseline and offers better executability in embodied environments. We break down the proposed procedure into three individual components, each discussed in Sec 3.2, 3.3, and 3.4.
Since LMs excel at dealing with natural language text instead of the specific format required by VirtualHome as described in Section 2.1, we only expose natural language text to LMs. To do this, we define a mapping for each atomic action that parses a natural language phrase to the required format. For instance, “Walk to living room” is converted to “[Walk] 〈living room〉(1)”. When an LM output cannot be parsed to any of the allowed action, the entire program is considered syntactically incorrect and thus not executable.
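The kind of mapping we mean is illustrated below with a small hypothetical template table (the real action set has 45 atomic actions; names and templates are ours):

```python
import re

# Hypothetical subset of templates: natural-language pattern -> VirtualHome step format.
TEMPLATES = [
    (re.compile(r"^walk to (.+)$", re.I),   "[Walk] <{0}> (1)"),
    (re.compile(r"^switch on (.+)$", re.I), "[SwitchOn] <{0}> (1)"),
    (re.compile(r"^grab (.+)$", re.I),      "[Grab] <{0}> (1)"),
    (re.compile(r"^sit on (.+)$", re.I),    "[Sit] <{0}> (1)"),
]

def to_program_step(phrase):
    """Return the VirtualHome-style step, or None if the phrase cannot be parsed."""
    for pattern, fmt in TEMPLATES:
        m = pattern.match(phrase.strip())
        if m:
            return fmt.format(m.group(1).lower())
    return None  # unparsable -> the whole program is treated as not executable

print(to_program_step("Walk to living room"))   # [Walk] <living room> (1)
print(to_program_step("Microwave the cup"))     # None (disallowed action)
```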
3.1 PROMPTING
Previous works have shown that large language models pre-trained on a colossal amount of data contain useful world knowledge that can be probed to perform various down-stream tasks (Radford et al., 2019; Brown et al., 2020). Notably, autoregressive LLMs can even perform in-context learning, an approach to solve tasks using only contextual information without gradient updates (Brown et al., 2020). Contextual information is given as part of the input prompt and LMs are asked to complete the remaining text. It often consists of natural language instructions and/or a number of examples containing the desired input/output pairs.
We adopt the same approach to query LLMs to generate action plans for high-level tasks. Specifically, we prepend one example task description sentence and its annotated action plan from the demonstration set to the query task description, as shown in Fig 2. To obtain text completion results, we sample from autoregressive LLM using temperature sampling and nucleus sampling (Holtzman et al., 2019). We refer to this LM as Planning LM and the approach using this LM for plan generation as Vanilla [LM], where [LM] is replaced by specific language model such as GPT-3 or Codex.
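To make the querying procedure concrete, here is a hedged sketch of how such a prompt could be assembled; the helper name is ours, and the actual completion call (temperature plus nucleus sampling against the Planning LM) is left as a stand-in:

```python
def build_prompt(example_task, example_steps, query_task):
    """Prepend one demonstration (task + annotated plan) to the query task description."""
    lines = [f"Task: {example_task}"]
    lines += [f"Step {i}: {s}" for i, s in enumerate(example_steps, start=1)]
    lines += ["", f"Task: {query_task}", "Step 1:"]
    return "\n".join(lines)

prompt = build_prompt(
    example_task="Shave",
    example_steps=["Grab razor", "Switch on razor", "Put razor on face"],
    query_task="Apply lotion",
)
print(prompt)
# The prompt is then completed by the Planning LM, e.g. via an API or Hugging Face
# `generate`, with temperature and nucleus sampling:
# completions = planning_lm_sample(prompt, n=k, temperature=0.3, top_p=0.9)
```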
To further improve the quality of the generated output, we follow Chen et al. (2021), which uses LMs for program synthesis, and sample multiple outputs for each task. However, unlike prior works in program synthesis that choose the sample with the highest unit test pass rate, we only consider the setting where one sample is allowed to be evaluated for each task. This is because repetitive trial-and-error can be dangerous in the real world, and executing many action plans is equivalent to probing the environment for privileged information, which is often considered not viable.
3.2 ROBUST PARSING BY SEMANTIC TRANSLATION
One issue arises when naively following the above approach to generate action plans for high-level tasks: the action plan is often not executable because LMs are allowed to generate free-form text.
Therefore, most of the time the output cannot be mapped to one unambiguous actionable step. And many reasons can cause such failures: 1) the output does not follow pre-defined mappings of any atomic action (i.e. “I first walk to the bedroom” does not follow “Walk to 〈PLACE〉”), 2) the output may refer to atomic action and objects using words unrecognizable by the environment (i.e. “Clean the dirty dishes in the sink” where “clean” and “dirty dishes in the sink” cannot be mapped to precise action and object), 3) the output contains lexical ambiguous words (i.e. “Open TV” should instead be “Switch on TV”), or 4) the output may use disallowed action (i.e. “Microwave the cup”).
Instead of developing a set of rules to transform the free-form text into admissible action steps, we propose to again leverage world knowledge learned by large language models to semantically translate the action. For each step in the action plan â, we aim to find the most similar admissible environment action ae as measured by cosine similarity:
$\arg\max_{a_e} \; \frac{f(\hat{a}) \cdot f(a_e)}{\lVert f(\hat{a}) \rVert \, \lVert f(a_e) \rVert}$, where $f$ is an embedding function.
To embed the output text and environment actions, we use a BERT-style LM (Devlin et al., 2018; Liu et al., 2019) trained with Sentence-BERT (Reimers & Gurevych, 2019) objective because of its suitability for sentence modeling. The sentence embedding is obtained by mean-pooling the last layer hidden states across all tokens. We refer to this LM as Translation LM. Note that this is a different LM than the GPT-style Planning LM discussed in the text so far. Using a single LM for both purposes could as well be possible and likely more efficient, but we leave such investigation to future works. While the set of actions in our environment is discrete and possible to exhaustively enumerate, sampling or projection can be employed in larger discrete or continuous action spaces.
Since Translation LM can guarantee the parsed action is allowed by the environment, we can trade off the semantic soundness of an LM step against how likely it is to map to an admissible action in the environment. This can be achieved by a simple modification to the scheme that we use to choose the best sample from the LM output. Instead of only using mean token log probability as a ranking metric, we choose the sample with the highest score calculated as s = C + β · logprob, where C is the cosine similarity to the closest allowed action and β is a weighting parameter.
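A sketch of this translation-and-scoring step using an off-the-shelf Sentence-Transformers checkpoint; the model name, the toy action bank, and the β value are placeholders, not the exact ones used in our experiments:

```python
import numpy as np
from sentence_transformers import SentenceTransformer

translation_lm = SentenceTransformer("stsb-roberta-large")    # placeholder checkpoint
action_bank = ["walk to bathroom", "switch on computer", "grab toothbrush"]  # toy subset
bank_emb = translation_lm.encode(action_bank, normalize_embeddings=True)     # precomputed once

def translate(step_text, logprob, beta=0.3):
    """Map a free-form step to the closest admissible action and score the sample.
    Returns (admissible_action, cosine_similarity, combined_score)."""
    q = translation_lm.encode([step_text], normalize_embeddings=True)[0]
    sims = bank_emb @ q                           # cosine similarity (unit-norm embeddings)
    best = int(np.argmax(sims))
    score = float(sims[best]) + beta * logprob    # s = C + beta * logprob
    return action_bank[best], float(sims[best]), score

print(translate("Clean the dirty toothbrush in the sink", logprob=-0.8))
```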
3.3 AUTOREGRESSIVE TRAJECTORY CORRECTION
Translating each step of the program after the entire program has been synthesized is analogous to open-loop planning and is subject to compounding errors. In practice, LLMs might output compounded instructions for a single step, even though such a step cannot be completed using one admissible action in the environment. To this end, we can instead interleave plan generation and action translation to allow for automatic trajectory correction. At each step, we first query Planning LM to generate k samples for a single action. Then we calculate the score s for each sample using Translation LM and append the translated action to the unfinished text completion. This way all subsequent steps will be conditioned on admissible actions instead of free-form text output generated by Planning LM. Furthermore, we can use Translation LM to detect out-of-distribution actions, those outside the capabilities of a robot, and terminate a program early instead of mapping to a faulty action. This can be easily implemented by setting a threshold $\epsilon$ such that if $C_{\max}^{(t)} < \epsilon$ at step $t$, the program is terminated early. We empirically show this leads to better executability while maintaining similar correctness of the generated action plans.
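Putting the pieces together, a hedged sketch of the interleaved generate-translate loop with early termination; build_prompt and translate refer to the sketches above, planning_lm_sample is a stub for whatever completion API is used, and the epsilon and max_steps values are illustrative:

```python
def planning_lm_sample(prompt, n=10):
    """Stand-in for querying the Planning LM; returns (step_text, mean_token_logprob) pairs."""
    return [("Walk to bathroom", -0.5)] * n

def generate_plan(query_task, example_task, example_steps, k=10, epsilon=0.4, max_steps=20):
    """Interleave sampling and translation; stop early when no admissible action is close enough."""
    prompt = build_prompt(example_task, example_steps, query_task)
    plan = []
    for t in range(1, max_steps + 1):
        samples = planning_lm_sample(prompt, n=k)
        # Score every sample by s = C + beta * logprob and keep the best admissible match.
        scored = [translate(text, lp) for text, lp in samples]
        action, similarity, score = max(scored, key=lambda x: x[2])
        if similarity < epsilon:                 # out-of-distribution: terminate the program
            break
        plan.append(action)
        prompt += f" {action}\nStep {t + 1}:"    # condition later steps on the admissible action
    return plan

# Example call (small max_steps since the Planning LM here is only a stub):
# print(generate_plan("Brush teeth", "Shave",
#                     ["Grab razor", "Switch on razor", "Put razor on face"], max_steps=3))
```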
3.4 DYNAMIC EXAMPLE SELECTION FOR IMPROVED KNOWLEDGE EXTRACTION
So far in the text, we always give the same example in the prompt for all evaluated high-level tasks. However, consider the task of “ordering pizza”. Prompting LLMs with this task may give the assumption that the agent is initialized in front of a computer, and the LLMs may guide the agent to search for a pizza store and click “checkout my cart”. Although these are reasonable and feasible in the real world, such assumption cannot always be made as these interactions may not be supported in simulated environments like VirtualHome. In fact, the closest series of actions that human experts give may be “walking to a computer”, “switching on the computer”, and “typing the keyboard”. Without being finetuned on these data, LLMs would often fail at these tasks. To provide weak supervision at inference time, we propose to use Translation LM to select the most similar task from the demonstration set to be used as the example in the prompt. Specifically, we choose the task
whose high-level description matches closely the query task as measured by cosine similarity. This allows Planning LM to reason how to perform a similar task given a human-annotated example. In our evaluations, we make sure the demonstration set is not overlapping with our test queries.
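A sketch of this example-selection step, reusing the Translation LM embeddings; the checkpoint name and the toy demonstration set are illustrative:

```python
import numpy as np
from sentence_transformers import SentenceTransformer

translation_lm = SentenceTransformer("stsb-roberta-large")   # placeholder checkpoint

demo_tasks = {   # toy demonstration set: task description -> annotated plan
    "Use computer": ["Walk to home office", "Sit on chair", "Switch on computer", "Type on keyboard"],
    "Relax on sofa": ["Walk to home office", "Walk to couch", "Sit on couch", "Lie on couch"],
}
demo_names = list(demo_tasks)
demo_emb = translation_lm.encode(demo_names, normalize_embeddings=True)

def select_example(query_task):
    """Pick the demonstration whose high-level description is closest to the query task."""
    q = translation_lm.encode([query_task], normalize_embeddings=True)[0]
    best = int(np.argmax(demo_emb @ q))
    return demo_names[best], demo_tasks[demo_names[best]]

print(select_example("Browse internet"))   # likely matches "Use computer"
```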
Combining the various improvements discussed above, we refer to the final approach as Translated [LM], where [LM] is replaced by the specific language model used, such as GPT-3 or Codex.
4 RESULTS
In this section, we first show that language models can generate sensible action plans for many high-level tasks, even without any additional training. Then we highlight their inadequacy when naively applied to embodied environments and demonstrate how this can be improved by again leveraging world knowledge learned by LLMs. Visualizations of generated programs are shown in Fig 3.
Sampling from LMs Pre-trained LMs are sensitive to sampling parameters and the specific example given in the prompt. For all evaluated methods, we perform hyper-parameter search over various sampling parameters, and for methods using fixed prompt example, we report metrics averaged across three randomly chosen examples. To select the best run for each method, we rank the runs by LCS + executability, each normalized by human-expert scores3. Further details can be found in Appendix 8.3.
Model Choices We find empirically that the combination of Codex-12B and Sentence-RoBERTa-355M works well in our setting. Codex and GPT-3 are accessed using the OpenAI API. The remaining models are accessed through open-source packages, Hugging Face Transformers (Wolf et al., 2019) and SentenceTransformers (Reimers & Gurevych, 2019), without additional modifications.
4.1 DO LLMS CONTAIN ACTIONABLE KNOWLEDGE FOR HIGH-LEVEL TASKS?
We first investigate whether LLMs can generate sensible action plans expressed in free-form language. We use the approach described in Section 3.1 to query pre-trained LLMs. To evaluate the correctness of generated action plans, we conduct human evaluations. For each model, we ask 10 human annotators to determine – by answering “Yes” or “No” – whether each task can be completed using provided action steps. To provide a reference of how humans might rate the action plans provided by other humans, we also ask annotators to rate the ground-truth action plans provided in the VirtualHome dataset for the same set of tasks. In contrast to the free-form text output by LLMs, the ground-truth action plans from VirtualHome are generated via a graphical programming interface that enforces strict syntax, although annotators were allowed to compose necessary actions.
We show the human evaluation results in Fig 1, where the y-axis shows correctness averaged across all tasks and all annotators. Surprisingly, when LLMs are large enough and without imposed syntactic constraints, they can generate highly realistic action plans whose correctness – as deemed by human annotators – even surpasses human-labeled ground-truth. Yet another interesting finding is that Codex outperforms GPT-3 significantly under the same number of model parameters. One hypothesis could be that by fine-tuning on structured data (docstrings and code), Codex specializes at decomposing a high-level objective into a number of basic operations, even with those operations described in natural language. We also observe some level of correctness for smaller models such as GPT-2. However, inspection of their produced output indicates that they often generate significantly shorter plans by ignoring common-sense actions or by simply rephrasing the given task (e.g. the task “Go to sleep” produces only a single step “Go to bed”). These failure modes sometimes mislead human annotators to mark them correct as the annotators may ignore common-sense actions in their judgment as well, resulting in a higher correctness rate than the quality of the output shows.
3See footnote 2.
4.2 HOW EXECUTABLE ARE THE LLM ACTION PLANS?
We analyze the executability of LLM plans by evaluating them in all 7 household scenes in VirtualHome. As shown in Table 1, we find action plans generated naively by LLMs are generally not very executable. Although smaller models seem to have higher executability, we find that the majority of these executable plans are produced by ignoring the queried task and repeating the given GT example of a different task. This is validated by the fact that smaller models have lower LCS than larger models despite having higher executability, showing that this failure mode is prevalent among smaller models. In contrast, larger models do not suffer severely from this failure mode. Yet as a result of being more expressive, their generated programs are substantially less executable.
4.3 CAN LLM ACTION PLANS BE MADE EXECUTABLE BY ACTION TRANSLATION?
In this section, we evaluate the effectiveness of our proposed procedure of action translation. We first create a bank of all 47,522 allowed action steps in the environment, including all possible combinations of atomic actions and allowed arguments/objects. Then we use an off-the-shelf Sentence-RoBERTa (Liu et al., 2019; Reimers & Gurevych, 2019) as Translation LM to create embeddings for actions and output text. For better computational efficiency, we pre-compute the embeddings for all allowed actions, leaving only minor computation overhead for our procedure over the baseline methods at inference time. As shown in Table 1, executability of generated programs is significantly improved. Furthermore, we also observe improved LCS because the translated action steps precisely follow the program syntax and thus are more similar to ground-truth annotations. One sample output is shown in Fig 1 and a larger random subset of generated samples can be found in Appendix 8.7.
To validate their correctness, we again perform human studies using the same procedure in Sec 4.1. Results are shown in Table 1. We find that despite being more similar to GT, the programs are deemed less correct by humans. By examining the generated output, we observe two main sources of errors. First, we find Translation LM is poor at mapping compounded instructions to a succinct
admissible action. This is partly because Translation LM is trained on a much smaller dataset and contains a much smaller number of parameters, so we expect further improvement by using a larger pre-trained model for translation. The second source of error comes from imperfect expressivity of the environment; we find that for many tasks we evaluate, certain necessary actions or objects are not implemented. This is also reflected by our human evaluation results of the GT programs, as only half of the programs are considered complete by the human annotators.
5 ANALYSIS AND DISCUSSIONS
5.1 ABLATION OF DESIGN DECISIONS
Table 2: Ablation of three proposed techniques.
Methods | Executability | LCS
Translated Codex 12B | 78.57% | 24.72%
- w/o Action Translation | 31.49% | 22.53%
- w/o Dynamic Example | 50.86% | 22.84%
- w/o Iterative | 55.19% | 24.43%
We perform ablation studies to show the effectiveness and necessity of the three components of our proposed procedure, each described in Sec 3.2, 3.3, and 3.4. As shown in Table 2, leaving out any of the three components leads to decreased performance in both executability and LCS. Notably, removing action translation leads to the most significant executability drop, showing the importance of action translation in extracting executable action plans from LLMs.
5.2 CAN LLMS GENERATE ACTIONABLE PROGRAMS BY FOLLOWING DETAILED INSTRUCTIONS?
Prior works often focus on translating step-by-step instructions into executable programs. We evaluate LLMs under this setting using a prompt format shown in Appendix 8.2. Although this setting is easier as it does not require rich actionable knowledge, detailed instructions can help resolve much ambiguity of exactly how to perform a high-level task when multiple solutions are possible. In this setting, Translated Codex 12B achieves executability of 78.57% and LCS of 32.87%, where LCS sees a considerable bump from the setting without detailed instructions. Surprisingly, the LCS result is very close to that of a supervised LSTM (Hochreiter & Schmidhuber, 1997) baseline from VirtualHome trained on human-annotated data, which is at 34.00%. Note that since the code to train the baseline and the specific train/test split are not publicly released, we only show results reported in Puig et al. (2018) as a reference. We also cannot compare executability as it is not reported.
5.3 IS ACTION TRANSLATION NECESSARY FOR ACTIONABLE KNOWLEDGE GROUNDING?
The investigations of this paper are two-fold: 1) Is actionable knowledge present in LLMs? 2) Can we ground this actionable knowledge in an interactive environment? In this section, we focus our attention on the second question by conditioning on the assumption that the first question is true. To do this, since successful execution of correct action plans directly measures grounding, we select only the correct plans generated by LLMs and measure how executable they are. We deem an action plan to be correct if 70% or more human annotators decide it is correct.
As shown by Table 3, when an LM is not large enough (e.g. GPT-2), not only does it contain little actionable knowledge, but this knowledge also cannot be grounded at all. GPT-3 and Codex, on the other hand, can generate highly correct action plans in free-form language. However, they do not have the capability to ground their actionable knowledge in interactive environments. More interestingly, comparing GPT-3 at both 12B and 175B parameters, the ratio of executable plans does not improve with the parameter count. This shows that simply training larger models does not necessarily lead to better knowledge grounding. In the meantime, action translation offers a promising way towards grounding actionable knowledge by producing highly executable plans. However, we again note that it comes at a trade-off of producing less correct plans as compared to its vanilla counterpart, and we hope to see future endeavors for bridging the gap.
6 RELATED WORKS
Large-scale natural language modeling has witnessed rapid advances since the inception of the Transformer architecture (Vaswani et al., 2017). It has been shown by recent works that large language models (LLMs) pre-trained on large unstructured text corpus not only can perform strongly on various down-stream NLP tasks (Devlin et al., 2018; Radford et al., 2019; Raffel et al., 2019; Brown et al., 2020) but also can internalize an implicit knowledge base containing rich information about the world (Petroni et al., 2019; Jiang et al., 2020; Davison et al., 2019; Talmor et al., 2020; Roberts et al., 2020). Furthermore, the learned representations can be used to model relations of entities (Li et al., 2021), retrieve matching visual features (Ilharco et al., 2020), and even as valuable priors when applied to diverse tasks from different modalities (Lu et al., 2021; Tsimpoukelli et al., 2021). Compared to prior works in knowledge extraction that extract single-step factual answers memorized by the models (e.g. “Dante was born in [PLACE]”), we aim to extract sequential action plans to complete an open-ended human activity (e.g. “make breakfast”). We further require these plans to only contain allowed actions and satisfy the pre/post-conditions of actions in order to be executed by an embodied agent.
At the same time, there has also been growing interest and development in grounding language in embodied environment. A series of prior works have investigated the possibility of parsing language instructions into formal logic to resolve various linguistic ambiguities for embodied agents (Artzi & Zettlemoyer, 2013; Misra et al., 2015; Tenorth et al., 2010). However, they often scale poorly to complex tasks and environments. Recently, more research efforts have been put into creating better and more realistic environments with the goal to further advances in this area (Puig et al., 2018; Shridhar et al., 2020a;b; Kolve et al., 2017; Savva et al., 2019). At the same time, by leveraging the better representation power of neural architectures, a number of works have looked into creating instruction-following agents that can perform manipulation (Lynch & Sermanet, 2020), navigation (Majumdar et al., 2020), or both (Suglia et al., 2021; Hill et al., 2020).
Notably, most of these prior works do not leverage full-blown pre-trained LLMs (Suglia et al., 2021) or do not scale to complex human activities (Hill et al., 2020; Lynch & Sermanet, 2020). Perhaps more importantly, few works have evaluated LLMs in an embodiment setting that realizes the full potential of the world knowledge these models contain: the tasks evaluated are often “pick”, “grab”, “open”, and etc, which do not resemble the highly diverse activities that humans perform in daily lives. The development of VirtualHome environment Puig et al. (2018) enables such possibility. However, relevant works (Puig et al., 2020; Liao et al., 2019) rely on human-annotated data and perform supervised training from scratch. Due to the lack of rich world knowledge, these models can only generate action plans given step-by-step instructions of how to act or video demonstrations. In this work, we take a step further by conditioning only on the high-level descriptions and by extracting executable action plans from LLMs without any additional training.
7 LIMITATIONS AND CONCLUSION
There are several notable limitations of this work. First, although our approach presents a viable way to ground world knowledge in embodied environments, it is still a trade-off rather than one best solution since we observe a considerable drop in correctness. Second, we focus on high-level to mid-level grounding, assuming there is a controller that can execute mid-level tasks (such as “grab cup”). Our work does not investigate the usefulness of LLMs for low-level sensorimotor behavior grounding. The third limitation is that we do not incorporate observation context or feedback into our models. To some extent, we approach LLMs in the same way as how VirtualHome asks human annotators to give action plans for a given human activity by imagination, in which case the human-generated action plans also do not incorporate observation context. However, we do see incorporating observation context for complex activities as an exciting future direction.
8 APPENDIX
8.1 EXAMPLE PROGRAM IN VIRTUALHOME
8.2 EXAMPLE PROMPT CONTAINING STEP-BY-STEP INSTRUCTIONS
8.3 HYPERPARAMETER SEARCH
For each evaluated method, we perform grid search over the following hyperparameters:
Name | Description | Search Values
epsilon (ε) | OOD cutoff threshold used in iterative action translation | {0, 0.4, 0.8}
temperature | sampling parameter adjusting relative probabilities across tokens | {0.1, 0.3, 0.6}
k | number of samples generated when querying LMs each time | {1, 10}
frequency penalty | OpenAI API specific; penalize new tokens based on their existing frequency in the text so far | {0.1, 0.3, 0.6, 0.9}
presence penalty | OpenAI API specific; penalize new tokens based on whether they appear in the text so far | {0.3, 0.5, 0.8}
repetition penalty | Hugging Face Transformers specific; penalize new tokens based on whether they are repeating existing text | {1.0, 1.2, 1.5, 1.8}
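The grid search itself is then straightforward to run; a hedged sketch is shown below, where evaluate_run is a stand-in for generating plans with one configuration and computing the two metrics:

```python
import itertools

# Subset of the search values from the table above.
SEARCH_SPACE = {
    "epsilon": [0.0, 0.4, 0.8],
    "temperature": [0.1, 0.3, 0.6],
    "k": [1, 10],
    "frequency_penalty": [0.1, 0.3, 0.6, 0.9],
}

def grid_search(evaluate_run):
    """evaluate_run(config) -> (executability, lcs), each normalized by human-expert scores."""
    best_cfg, best_score = None, float("-inf")
    keys = list(SEARCH_SPACE)
    for values in itertools.product(*(SEARCH_SPACE[key] for key in keys)):
        cfg = dict(zip(keys, values))
        executability, lcs = evaluate_run(cfg)
        score = executability + lcs             # runs are ranked by LCS + executability
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score
```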
For methods that use a fixed example across evaluated tasks, we search over the following three randomly chosen examples:
Example 1
Task: Use computer
Step 1: Walk to home office
Step 2: Walk to chair
Step 3: Find chair
Step 4: Sit on chair
Step 5: Find computer
Step 6: Switch on computer
Step 7: Turn to computer
Step 8: Look at computer
Step 9: Find keyboard
Step 10: Type on keyboard

Example 2
Task: Relax on sofa
Step 1: Walk to home office
Step 2: Walk to couch
Step 3: Find couch
Step 4: Sit on couch
Step 5: Find pillow
Step 6: Lie on couch

Example 3
Task: Read book
Step 1: Walk to home office
Step 2: Walk to novel
Step 3: Find novel
Step 4: Grab novel
Step 5: Find chair
Step 6: Sit on chair
Step 7: Read novel
8.4 ANALYSIS OF PROGRAM LENGTH
Shorter programs have a natural advantage of being more executable. Consider a task “wash hands” and a corresponding program that only commands an agent to “go to bathroom” without additional steps. The program is obviously incorrect yet trivially executable. To validate our approach does not simply generate very short program, we calculate the average program length across the 88 evaluated tasks. Results are shown in Table 6. In addition to the failure mode discussed in Section 4.2 that leads to incorrect yet executable programs, smaller LMs such as GPT-2 also generate programs significantly shorter than larger models, making them more executable. In contrast, larger models like Codex-12B generate more expressive program of high correctness, but they often suffer from executability. We show action translation can lead to benefits of both worlds, generating programs that are highly executable while maintaining similar expressiveness in terms of program length.
8.5 DETAILS OF HUMAN EVALUATIONS
Human evaluations are conducted on Amazon Mechanical Turk. For each method, we generate action plans for all 88 high-level tasks. To account for expressivity of the VirtualHome environment (Puig et al., 2018), we further include ground-truth action plans from the VirtualHome dataset in our human evaluations. The human evaluations are conducted in the form of questionnaires containing all action plans with unknown corresponding methods. The questionnaire contains the following instructions at the top:
For every question below, determine whether the task can be completed in any reasonable scenario using the provided steps. In other words, can the task be decomposed into these steps? Note that simply re-stating the task does not mean completing it.
Human annotators are required to answer all the questions in the questionnaire, where each question is an action plan generated by a method unknown to the annotator. The order of the questions is randomly permuted before presented to each annotator. For each question, the annotators need to answer either “Yes” or “No” indicating if they believe the action plan completes the task. For each method, we report correctness percentage averaged across 10 participated human annotators.
8.6 DETAILS OF VIRTUALHOME TASKS
VirtualHome ActivityPrograms is a knowledge base collected on Mechanical Turk by Puig et al. (2018). The knowledge base features real household activities of humans and correspondingly sequences of basic actions for robots to perform these activities. Each entry contains a high-level task name (i.e. “Watch TV”), a mid-level detailed description (i.e. “Sit on my couch directly opposite my TV, switch on my TV with the remote control and watch”), and a sequence of allowed actions (a.k.a. a program) to perform the task in VirtualHome. There are 292 distinct high-level tasks and a total of 2821 different entries in the knowledge base, where each high-level task may correspond one or several programs. The knowledge base covers a diverse set of real human activities and a wide range of different actions and objects. Puig et al. (2018) also conduct additional human evaluation on 100 randomly sampled programs and show that 64% of them are complete (i.e. contain all necessary steps). Readers are encouraged to refer to Puig et al. (2018) for more comprehensive analysis of the dataset.
Train/Test Split From all 292 distinct high-level tasks from the ActivityPrograms knowledge base, we randomly sample 88 high-level tasks for evaluation. The remaining 204 tasks are used as demonstration set from which we are allowed to select as example(s) for prompting language models, or in the case of supervised fine-tuning, they are used to fine-tune pre-trained language models.
8.7 RANDOM SAMPLES OF GT, VANILLA CODEX, AND TRANSLATED CODEX
Columns (left to right): Annotated Ground-Truth, Codex, Translated Codex (Ours).
Task: Go to sleep Step 1: Walk to bedroom Step 2: Walk to bed Step 3: Find bed Step 4: Turn to bed Step 5: Look at bed Step 6: Sit on bed Step 7: Sleep
Task: Go to sleep Step 1: Walk to bedroom Step 2: Walk to bed Step 3: Find bed Step 4: Sit on bed Step 5: Find pillow Step 6: Sleep on pillow
Task: Go to sleep Step 1: Walk to bedroom Step 2: Walk to bed Step 3: Lie on bed Step 4: Sleep
Task: Browse internet Step 1: Walk to home office Step 2: Walk to chair Step 3: Find chair Step 4: Sit on chair Step 5: Find computer Step 6: Switch on computer Step 7: Turn to computer Step 8: Look at computer Step 9: Find keyboard Step 10: Type on keyboard
Task: Browse internet Step 1: Walk to home office Step 2: Walk to chair Step 3: Find chair Step 4: Sit on chair Step 5: Find computer Step 6: Switch on computer Step 7: Turn to computer Step 8: Look at computer Step 9: Find keyboard Step 10: Type on keyboard
Task: Browse internet Step 1: Walk to home office Step 2: Walk to desk Step 3: Find chair Step 4: Sit on chair Step 5: Find computer Step 6: Switch on computer Step 7: Find mouse Step 8: Grab mouse Step 9: Turn to computer Step 10: Point at computer
Task: Listen to music Step 1: Walk to bedroom Step 2: Walk to computer Step 3: Find computer Step 4: Turn to computer Step 5: Look at computer Step 6: Walk to computer Step 7: Find chair Step 8: Pull chair Step 9: Sit on chair Step 10: Find headset Step 11: Grab headset Step 12: Put on headset Step 13: Switch on computer Step 14: Find keyboard Step 15: Type on keyboard
Task: Listen to music Step 1: Walk to home office Step 2: Walk to chair Step 3: Find chair Step 4: Sit on chair Step 5: Find computer Step 6: Switch on computer Step 7: Turn to computer Step 8: Look at computer Step 9: Find keyboard Step 10: Type on keyboard
Task: Listen to music Step 1: Walk to home office Step 2: Walk to computer Step 3: Sit on chair Step 4: Switch on computer Step 5: Find music stand
Task: Brush teeth Step 1: Walk to bathroom Step 2: Walk to tooth paste Step 3: Find tooth paste Step 4: Grab tooth paste Step 5: Find toothbrush Step 6: Grab toothbrush
Task: Brush teeth Step 1: Walk to bathroom Step 2: Walk to sink Step 3: Find toothbrush Step 4: Pick up toothbrush Step 5: Put toothbrush in mouth Step 6: Move brush around mouth for two minutes Step 7: Spit out toothpaste and brush into sink Step 8: Turn on water in sink and rinse brush for one minute Step 9: Turn off water in sink and return brush to cupboard
Task: Brush teeth Step 1: Walk to bathroom Step 2: Open door Step 3: Walk to sink Step 4: Put pot on sink Step 5: Put brush on toothbrush Step 6: Turn to toothpaste Step 7: Put toothpaste on toothbrush Step 8: Put teeth on toothbrush
Task: Dry soap bottles Step 1: Find towel Step 2: Find soap Step 3: Grab soap Step 4: Wipe soap Step 5: Put back soap Step 6: Find shampoo Step 7: Grab shampoo Step 8: Wipe shampoo Step 9: Put back shampoo Step 10: Find shampoo Step 11: Grab shampoo Step 12: Wipe shampoo Step 13: Put back shampoo
Task: Dry soap bottles Step 1: Walk to kitchen Step 2: Walk to sink Step 3: Find sink Step 4: Turn on water tap Step 5: Wait for water to be hot enough Step 6: Fill container with hot water Step 7: Find soap bottles Step 8: Pour soap into container with hot water Step 9: Wait for soap to dissolve in water Step 10: Turn off tap and empty container into sink
Task: Dry soap bottles Step 1: Walk to bathroom Step 2: Walk to shower Step 3: Find soap
Task: Vacuum Step 1: Walk to home office Step 2: Walk to computer Step 3: Find vacuum cleaner Step 4: Switch on vacuum cleaner Step 5: Switch off vacuum cleaner
Task: Vacuum Step 1: Walk to hallway Step 2: Walk to closet Step 3: Find closet Step 4: Open closet Step 5: Find vacuum cleaner Step 6: Turn on vacuum cleaner Step 7: Walk to living room Step 8: Vacuum floor in living room
Task: Vacuum Step 1: Walk to home office Step 2: Walk to dresser Step 3: Open dresser Step 4: Find vacuum cleaner Step 5: Grab vacuum cleaner Step 6: Pull vacuum cleaner Step 7: Plug in vacuum cleaner Step 8: Switch on vacuum cleaner Step 9: Pull vacuum cleaner Step 10: Push vacuum cleaner
Task: Find dictionary Step 1: Walk to home office Step 2: Walk to bookshelf Step 3: Turn to bookshelf Step 4: Look at bookshelf Step 5: Turn to bookshelf Step 6: Look at bookshelf Step 7: Turn to bookshelf Step 8: Look at bookshelf Step 9: Turn to bookshelf Step 10: Look at bookshelf Step 11: Turn to bookshelf Step 12: Look at bookshelf Step 13: Turn to bookshelf Step 14: Look at bookshelf Step 15: Find novel Step 16: Grab novel
Task: Find dictionary Step 1: Walk to home office Step 2: Walk to chair Step 3: Find chair Step 4: Sit on chair Step 5: Find dictionary Step 6: Open dictionary Step 7: Look at dictionary
Task: Find dictionary Step 1: Walk to home office Step 2: Walk to book Step 3: Find textbook
Task: Wash monitor Step 1: Walk to home office Step 2: Walk to computer Step 3: Find computer Step 4: Turn to computer Step 5: Look at computer Step 6: Switch off computer Step 7: Find brush Step 8: Turn to brush Step 9: Look at brush Step 10: Grab brush Step 11: Wipe computer Step 12: Drop brush Step 13: Find napkin Step 14: Grab napkin Step 15: Wash napkin Step 16: Wipe computer
Task: Wash monitor Step 1: Walk to home office Step 2: Walk to chair Step 3: Find chair Step 4: Sit on chair Step 5: Find computer Step 6: Switch on computer Step 7: Turn to computer Step 8: Look at computer Step 9: Find keyboard Step 10: Type on keyboard
Task: Wash monitor Step 1: Walk to check Step 2: Walk to desk
Task: Read newspaper Step 1: Walk to home office Step 2: Walk to chair Step 3: Find chair Step 4: Sit on chair Step 5: Find newspaper Step 6: Grab newspaper Step 7: Open newspaper Step 8: Read newspaper
Task: Read newspaper Step 1: Walk to home office Step 2: Walk to chair Step 3: Find chair Step 4: Sit on chair Step 5: Find newspaper Step 6: Open newspaper Step 7: Read newspaper
Task: Read newspaper Step 1: Walk to home office Step 2: Walk to chair Step 3: Find chair Step 4: Sit on chair Step 5: Find newspaper Step 6: Grab newspaper Step 7: Read newspaper
Task: Apply lotion Step 1: Find bed Step 2: Turn to bed Step 3: Point at bed Step 4: Wash bed Step 5: Turn to bed Step 6: Look at bed Step 7: Push bed Step 8: Sit on bed
Task: Apply lotion Step 1: Walk to bathroom Step 2: Walk to cabinet Step 3: Open cabinet Step 4: Find lotion Step 5: Apply lotion on hands Step 6: Rub hands together for 10 seconds Step 7: Apply lotion on face
Task: Apply lotion Step 1: Walk to bathroom Step 2: Turn to lotionbottle Step 3: Grab lotionbottle
Task: Draw picture Step 1: Walk to home office Step 2: Walk to chair Step 3: Find chair Step 4: Sit on chair Step 5: Find desk Step 6: Open desk Step 7: Find check Step 8: Grab check Step 9: Put check on desk Step 10: Find pencil Step 11: Grab pencil Step 12: Close desk
Task: Draw picture Step 1: Walk to home office Step 2: Walk to chair Step 3: Find chair Step 4: Sit on chair Step 5: Find computer Step 6: Switch on computer Step 7: Turn to computer Step 8: Look at computer Step 9: Find keyboard Step 10: Type on keyboard
Task: Draw picture Step 1: Walk to home office Step 2: Walk to wall Step 3: Walk to drawing Step 4: Find drawing Step 5: Turn to drawing Step 6: Look at drawing Step 7: Touch drawing Step 8: Pull drawing Step 9: Push drawing
Task: Prepare pot of boiling water Step 1: Walk to dining room Step 2: Walk to pot Step 3: Find pot Step 4: Grab pot Step 5: Walk to sink Step 6: Find faucet Step 7: Switch on faucet Step 8: Put pot on sink Step 9: Grab pot Step 10: Switch off faucet Step 11: Grab pot Step 12: Walk to oven Step 13: Put pot on oven Step 14: Switch on oven
Task: Prepare pot of boiling water Step 1: Walk to kitchen Step 2: Walk to sink Step 3: Find sink Step 4: Turn on tap Step 5: Fill kettle with water Step 6: Put kettle on stove Step 7: Turn on stove Step 8: Wait for water to boil
Task: Prepare pot of boiling water Step 1: Walk to kitchen Step 2: Walk to cabinet Step 3: Find pot Step 4: Grab pot Step 5: Walk to stove Step 6: Find stove Step 7: Switch on stove
Task: Look at mirror Step 1: Walk to bathroom Step 2: Walk to mirror Step 3: Find mirror Step 4: Turn to mirror Step 5: Look at mirror
Task: Look at mirror Step 1: Walk to bathroom Step 2: Walk to mirror Step 3: Look at mirror
Task: Look at mirror Step 1: Walk to bedroom Step 2: Walk to mirror Step 3: Find mirror Step 4: Turn to mirror Step 5: Look at mirror
Task: Wipe down sink Step 1: Walk to dining room Step 2: Walk to rag Step 3: Find rag Step 4: Grab rag Step 5: Walk to sink Step 6: Wipe sink Step 7: Put back rag
Task: Wipe down sink Step 1: Walk to kitchen Step 2: Walk to sink Step 3: Find sink Step 4: Wipe down sink Step 5: Return to kitchen
Task: Wipe down sink Step 1: Walk to bathroom Step 2: Walk to cleaning solution Step 3: Find cleaning solution Step 4: Grab cleaning solution Step 5: Find sink Step 6: Pour cleaning solution into glasses Step 7: Find faucet Step 8: Switch on faucet Step 9: Scrub sink Step 10: Wash sink
Task: Complete surveys on amazon turk Step 1: Walk to chair Step 2: Sit on chair Step 3: Find computer Step 4: Switch on computer Step 5: Turn to computer Step 6: Look at computer Step 7: Find mouse Step 8: Grab mouse Step 9: Turn to computer Step 10: Point at computer Step 11: Find check Step 12: Grab check Step 13: Read check Step 14: Put back mouse Step 15: Find keyboard Step 16: Type on keyboard
Task: Complete surveys on amazon turk Step 1: Walk to home office Step 2: Walk to chair Step 3: Find chair Step 4: Sit on chair Step 5: Find computer Step 6: Switch on computer Step 7: Turn to computer Step 8: Look at computer Step 9: Find keyboard Step 10: Type on keyboard
Task: Complete surveys on amazon turk Step 1: Walk to home office Step 2: Walk to desk Step 3: Find chair Step 4: Sit on chair Step 5: Find computer Step 6: Switch on computer
Task: Write book Step 1: Walk to home office Step 2: Walk to laptop Step 3: Find laptop Step 4: Grab laptop Step 5: Find electrical outlet Step 6: Plug in laptop Step 7: Walk to love seat Step 8: Sit on love seat Step 9: Switch on laptop Step 10: Find keyboard Step 11: Type on keyboard
Task: Write book Step 1: Walk to home office Step 2: Walk to chair Step 3: Find chair Step 4: Sit on chair Step 5: Find computer Step 6: Switch on computer Step 7: Turn to computer Step 8: Look at computer Step 9: Find keyboard Step 10: Type on keyboard
Task: Write book Step 1: Walk to home office Step 2: Walk to light Step 3: Find light Step 4: Switch on light Step 5: Find pen Step 6: Grab pen Step 7: Find chair Step 8: Sit on chair Step 9: Turn to novel
Task: Paint ceiling Step 1: Find drawing Step 2: Turn to drawing Step 3: Point at drawing Step 4: Watch drawing Step 5: Touch drawing Step 6: Walk to bedroom
Task: Paint ceiling Step 1: Walk to home office Step 2: Walk to chair Step 3: Find chair Step 4: Sit on chair Step 5: Find paint can Step 6: Open paint can Step 7: Pour paint into paint can Step 8: Close paint can Step 9: Find roller Step 10: Roll paint onto ceiling
Task: Paint ceiling Step 1: Walk to home office Step 2: Walk to ceiling Step 3: Walk to painting Step 4: Find painting Step 5: Turn to painting Step 6: Look at painting Step 7: Touch painting Step 8: Pull painting Step 9: Push painting
Task: Change clothes Step 1: Walk to bedroom Step 2: Walk to dresser Step 3: Open dresser Step 4: Find dress Step 5: Grab dress Step 6: Put on dress Step 7: Close dresser
Task: Change clothes Step 1: Walk to closet Step 2: Open closet Step 3: Find clothes Step 4: Put on clothes Step 5: Close closet
Task: Change clothes Step 1: Walk to bedroom Step 2: Walk to dresser Step 3: Find dresser Step 4: Open dresser Step 5: Find pants Step 6: Grab pants Step 7: Put on pants Step 8: Find shirt Step 9: Grab shirt Step 10: Put on shirt Step 11: Close dresser
Task: Make bed Step 1: Walk to bedroom Step 2: Walk to bed Step 3: Find sheets Step 4: Grab sheets Step 5: Pull sheets
Task: Make bed Step 1: Walk to bedroom Step 2: Walk to bed Step 3: Find bed Step 4: Sit on bed Step 5: Find sheet Step 6: Pick up sheet Step 7: Find pillow Step 8: Pick up pillow Step 9: Find blanket Step 10: Pick up blanket
Task: Make bed Step 1: Stand up Step 2: Walk to bedroom Step 3: Walk to bed Step 4: Turn to bed Step 5: Put sheets on bed
Task: Read yourself to sleep Step 1: Walk to bedroom Step 2: Walk to novel Step 3: Find novel Step 4: Grab novel Step 5: Find bed Step 6: Sit on bed Step 7: Find bookmark Step 8: Grab bookmark Step 9: Read novel Step 10: Put bookmark on novel Step 11: Sleep
Task: Read yourself to sleep Step 1: Walk to home office Step 2: Walk to chair Step 3: Find chair Step 4: Sit on chair Step 5: Find computer Step 6: Switch on computer Step 7: Turn to computer Step 8: Look at computer Step 9: Find keyboard Step 10: Type on keyboard
Task: Read yourself to sleep Step 1: Walk to bedroom Step 2: Walk to book Step 3: Sit on chair Step 4: Read check Step 5: Sleep
Task: Hang keys Step 1: Walk to bedroom Step 2: Walk to dresser Step 3: Find purse Step 4: Grab purse Step 5: Open purse Step 6: Find keys Step 7: Grab keys Step 8: Find bowl Step 9: Put keys on bowl Step 10: Close purse Step 11: Put back purse
Task: Hang keys Step 1: Walk to home office Step 2: Walk to chair Step 3: Find chair Step 4: Sit on chair Step 5: Find computer Step 6: Switch on computer Step 7: Turn to computer Step 8: Look at computer Step 9: Find keyboard Step 10: Type on keyboard
Task: Hang keys Step 1: Walk to home office Step 2: Walk to table Step 3: Find keys Step 4: Grab keys Step 5: Walk to wall Step 6: Find hanger Step 7: Put keys on hanger
Task: Eat cheese Step 1: Walk to dining room Step 2: Walk to freezer Step 3: Open freezer Step 4: Find cheese Step 5: Grab cheese Step 6: Eat cheese
Task: Eat cheese Step 1: Walk to fridge Step 2: Open fridge Step 3: Find cheese Step 4: Pick up cheese Step 5: Walk to table Step 6: Place cheese on table Step 7: Close fridge Step 8: Walk to chair Step 9: Sit on chair Step 10: Eat cheese
Task: Eat cheese Step 1: Walk to dining room Step 2: Walk to table Step 3: Find table Step 4: Turn to table Step 5: Find chair Step 6: Sit on chair Step 7: Find food Step 8: Grab food Step 9: Find plate Step 10: Put food on plate
| 1. What is the main contribution of the paper regarding left-to-right language models?
2. What are the strengths and weaknesses of the proposed approach, particularly in its experimental setup and evaluation method?
3. Do you have any questions or concerns regarding the paper's analysis of the generated programs?
4. How does the reviewer assess the novelty and significance of the paper's idea and results?
5. Are there any suggestions for improving the paper, such as providing more details on the human evaluation or measuring correctness differently? | Summary Of The Paper
Review | Summary Of The Paper
This paper studies left-to-right language models on the VirtualHome planning task. In this task, a model is given a high-level goal like "get glass of milk" and it must generate a sequence of lower-level steps to execute that goal ('walk to kitchen', 'open fridge', etc.). These steps correspond to the annotated scratch programs built by crowd workers as part of VirtualHome, so they are executable plans to achieve the given goal.
This paper uses a left-to-right LM to generate those lower-level steps, given one prompted example. The 1.5B-sized LMs do not seem to do very well at generating actions given this prompt. The larger 12B, etc. LMs do better, but frequently deviate from the set of valid actions, so the authors introduce a Translation LM that measures the similarity between any "action" that the LM generates and the set of valid actions. At each step, this translation LM is used in the loop to constrain the chosen action, and this improves executability significantly (e.g. 7% -> 73% for GPT-3 and 18% -> 78% for Codex). Helpful analysis breaks down this performance.
Review
Strengths:
The paper studies an idea that seems relatively novel to this reviewer -- how well can models generate action plans in an embodied environment (not just code, or language)?
The paper presents a simple idea -- using a translation LM to constrain the generated actions -- that greatly improves performance, by ensuring that the actions generated by the LM are valid within the environment.
The paper has results that consider LMs of a variety of different sizes, from 1.5B to 175B, including both GPT-3 and codex families. The results seem to suggest that size is important, at least in this setting, insofar as the smaller models just copy from the input prompt.
This paper presents analysis about the kinds of programs generated by LMs which I think might help future work build off of this direction.
Weaknesses:
The experimental setup proposed by this paper for measuring grounding might not be ideal. In the VirtualHome setup, the action space is composed of mid-level actions (like "walk to home office") versus low-level actions ("walk forward 1m", etc.), so the promise of LMs for embodied understanding might not generalize to more difficult environments (like ALFRED; Shridhar et al., 2020a). That said, I think this limitation is discussed well in the limitations section (Sec 7).
Though I appreciate the authors' work on making the evaluation robust, it is not clear to this reviewer that it is a good measure of correctness. To the best of this reviewer's understanding (of Section 2.2; Correctness), in the VirtualHome environment that the authors study, it is nontrivial to compare whether the final environment state of a generated program is similar to the "ground truth" environment state. The details of the human evaluation are a bit unclear, but it seems like humans are just given the list of actions rather than the world state after executing those actions. In this setup, it seems like there is a very strong bias towards language that 'looks like' real action plans, rather than language that is truly actionable. The result is that models outperform humans at the task. To this reviewer, the paper could be improved if the correctness measured something grounded, rather than just something that looks grounded.
Clarification:
I'd appreciate more details on the human evaluation somewhere (like the interface being shown to the turkers) |
ICLR | Title
Embedding Fourier for Ultra-High-Definition Low-Light Image Enhancement
Abstract
Ultra-High-Definition (UHD) photo has gradually become the standard configuration in advanced imaging devices. The new standard unveils many issues in existing approaches for low-light image enhancement (LLIE), especially in dealing with the intricate issue of joint luminance enhancement and noise removal while remaining efficient. Unlike existing methods that address the problem in the spatial domain, we propose a new solution, UHDFour, that embeds Fourier transform into a cascaded network. Our approach is motivated by a few unique characteristics in the Fourier domain: 1) most luminance information concentrates on amplitudes while noise is closely related to phases, and 2) a high-resolution image and its low-resolution version share similar amplitude patterns. Through embedding Fourier into our network, the amplitude and phase of a low-light image are separately processed to avoid amplifying noise when enhancing luminance. Besides, UHDFour is scalable to UHD images by implementing amplitude and phase enhancement under the low-resolution regime and then adjusting the high-resolution scale with few computations. We also contribute the first real UHD LLIE dataset, UHD-LL, that contains 2,150 low-noise/normal-clear 4K image pairs with diverse darkness and noise levels captured in different scenarios. With this dataset, we systematically analyze the performance of existing LLIE methods for processing UHD images and demonstrate the advantage of our solution. We believe our new framework, coupled with the dataset, would push the frontier of LLIE towards UHD. The code and dataset are available at https://lichongyi.github.io/UHDFour/.
1 INTRODUCTION
With the advent of advanced imaging sensors and displays, Ultra-High-Definition (UHD) imaging has witnessed rapid development in recent years. While UHD imaging offers broad applications and makes a significant difference in picture quality, the extra pixels also challenge the efficiency of existing image processing algorithms.
In this study, we focus on one of the most challenging tasks in image restoration, namely low-light image enhancement (LLIE), where one needs to jointly enhance the luminance and remove inherent noises caused by sensors and dim environments. Further to these challenges, we lift the difficulty by demanding efficient processing in the UHD regime.
Despite the remarkable progress in low-light image enhancement (LLIE) (Li et al., 2021a), existing methods (Zhao et al., 2021; Wu et al., 2022; Xu et al., 2022), as shown in Figure 1, show apparent drawbacks when they are used to process real-world UHD low-light images. This is because (1) most methods (Guo et al., 2020; Liu et al., 2021b; Ma et al., 2022) only focus on luminance enhancement and fail to remove noise; (2) some approaches (Wu et al., 2022; Xu et al., 2022) simultaneously enhance luminance and remove noise in the spatial domain, resulting in suboptimal enhancement; (3) existing methods (Wei et al., 2018; Zhao et al., 2021; Wu et al., 2022; Xu et al., 2022) are mainly trained on low-resolution (LR) data, leading to incompatibility with high-resolution (HR) inputs; and (4) some studies (Xu et al., 2022; Zamir et al., 2022) adopt heavy structures, thus being inefficient for processing UHD images. More discussion on related work is provided in the Appendix.
To overcome the challenges aforementioned, we present a new idea for performing LLIE in the Fourier Domain. Our approach differs significantly from existing solutions that process images in the spatial domain. In particular, our method, named as UHDFour, is motivated by our observation of two interesting phenomena in the Fourier domain of low-light noisy images: i) luminance and noise can be decomposed to a certain extent in the Fourier domain. Specifically, luminance would manifest as amplitude while noise is closely related to phase, and ii) the amplitude patterns of images of different resolutions are similar. These observations inspire the design of our network, which handles luminance and noise separately in the Fourier domain. This design is advantageous as it avoids amplifying noise when enhancing luminance, a common issue encountered in existing spatial domain-based methods. In addition, the fact that amplitude patterns of images of different resolutions are similar motivates us to save computation by first processing in the low-resolution regime and performing essential adjustments only in the high-resolution scale.
We also contribute the first benchmark for UHD LLIE. The dataset, named UHD-LL, contains 2,150 low-noise/normal-clear 4K UHD image pairs with diverse darkness and noise levels captured in different scenarios. Unlike existing datasets (Wei et al., 2018; Lv et al., 2021; Bychkovsky et al., 2011) that either synthesize or retouch low-light images to obtain the paired input and target sets, we capture real image pairs. During data acquisition, special care is implemented to minimize geometric and photometric misalignment due to camera shake and dynamic environments. With the new UHD-LL dataset, we design a series of qualitative and quantitative benchmarks to analyze the performance of existing LLIE methods and demonstrate the effectiveness of our method.
Our contributions are summarized as follows: (1) We propose a new solution for UHD LLIE that is inspired by unique characteristics observed in the Fourier domain. In comparison to existing LLIE methods, the proposed framework shows exceptional effectiveness and efficiency in addressing the joint task of luminance enhancement and noise removal in the UHD regime. (2) We contribute the first UHD LLIE dataset, which contains 2,150 pairs of 4K UHD low-noise/normal-clear data, covering diverse noise and darkness levels and scenes. (3) We conduct a systematical analysis of existing LLIE methods on UHD data.
2 OUR APPROACH
In this section, we first discuss our observations in analyzing low-light images in the Fourier domain, and then present the proposed solution.
2.1 OBSERVATIONS IN FOURIER DOMAIN
Here we provide more details to supplement the observations we highlighted in Sec. 1. We analyze real UHD low-light images in the Fourier domain and provide a concise illustration in Figure 2. Specifically, (a) Swapping the amplitude of a low-light and noisy (low-noise) image with that of its corresponding normal-light and clear (normal-clear) image produces a normal-light and noisy (normal-noise) image and a low-light and clear (low-clear) image. We show more examples in the Appendix. The result suggests that the luminance and noise can be decomposed to a certain extent in the Fourier domain. In particular, most luminance information is expressed as amplitudes, and
noises are revealed in phases. This inspires us to process luminance and noise separately in the Fourier domain. (b) The amplitude patterns of an HR normal-clear image and its LR versions are similar and are different from the corresponding HR low-noise counterpart. Such a characteristic offers us the possibility to first enhance the amplitude of an LR scale with more computations and then only make minor adjustments in the HR scale. In this way, most computations can be conducted in the LR space, reducing the computational complexity.
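The amplitude-swap experiment behind observation (a) is easy to reproduce. The sketch below is our own minimal illustration (not the authors' code), assuming the two images are float tensors in [0, 1] with shape (C, H, W); loading and display are left out.

```python
import torch

def swap_amplitude(low_noise: torch.Tensor, normal_clear: torch.Tensor):
    """Exchange Fourier amplitudes between a low-noise image and its normal-clear
    counterpart while keeping each image's own phase."""
    f_low = torch.fft.fft2(low_noise, dim=(-2, -1))
    f_normal = torch.fft.fft2(normal_clear, dim=(-2, -1))

    amp_low, pha_low = torch.abs(f_low), torch.angle(f_low)
    amp_normal, pha_normal = torch.abs(f_normal), torch.angle(f_normal)

    # normal-clear amplitude + low-noise phase -> "normal-noise" composite
    normal_noise = torch.fft.ifft2(amp_normal * torch.exp(1j * pha_low), dim=(-2, -1)).real
    # low-noise amplitude + normal-clear phase -> "low-clear" composite
    low_clear = torch.fft.ifft2(amp_low * torch.exp(1j * pha_normal), dim=(-2, -1)).real
    return normal_noise.clamp(0, 1), low_clear.clamp(0, 1)
```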
2.2 THE UHDFOUR NETWORK
[Figure 4: Structures of (a) the FouSpa Block and (b) the Adjustment Block. Each contains a Fourier branch (FFT → Conv → IFFT, operating on the amplitude A and phase P) and a spatial branch (with instance normalization); the Adjustment Block additionally applies amplitude modulation and phase guidance from the LRNet outputs.]
Overview. UHDFour aims to map a UHD low-noise input image x ∈ R^{H×W×C} to its corresponding normal-clear version y ∈ R^{H×W×C}, where H, W, and C represent height, width, and channel, respectively. Figure 3 shows the overview of UHDFour. It consists of an LRNet and an HRNet.
Motivated by the observation in Sec. 2.1, LRNet takes most of the computation of the whole network. Its input is first embedded into the feature domain by a Conv layer. To reduce computational complexity, we downsample the features to 1/8 of the original resolution by bilinear interpolation. Then, the LR features go through an encoder-decoder network, which contains four FouSpa Blocks with two 2× downsample and two 2× upsample operations, obtaining output features. The output features are fed to an FFT to obtain the refined amplitude Ar and phase Pr features, and to a Conv layer to estimate the LR normal-clear image ŷ8 ∈ R^{H/8×W/8×C}. The outputs of LRNet, coupled with the input, are fed to the HRNet. Specifically, the input x is first reshaped to xpu ∈ R^{H/8×W/8×64C} via PixelUnshuffle (8× ↓) to preserve the original information, and then fed to an Adjustment Block. With the refined amplitude Ar and phase Pr features, the Adjustment Block produces adjusted features that are reshaped to the original height and width of the input x via PixelShuffle (8× ↑). Finally, we resize the estimated LR normal-clear image ŷ8 to the original size of the input x via bilinear interpolation and combine it with the upsampled features to estimate the final HR normal-clear image ŷ.
FouSpa Block. In Sec. 2.1, we observe that luminance and noise can be decomposed in the Fourier domain. Hence, we design the FouSpa Block to implement, in parallel, amplitude and phase enhancement in the Fourier domain and feature enhancement in the spatial domain. As shown in Figure 4(a), the input features are forked into the Fourier and Spatial branches. In the Fourier branch, FFT is first used to obtain the amplitude component (A) and phase component (P). The two components are separately fed to two Conv layers with 1×1 kernels. Note that when processing amplitude and phase, we only use 1×1 kernels to avoid damaging the structure information. Then, we transform them back to the spatial domain via IFFT and concatenate them with the spatial features enhanced by a Half Instance Normalization (HIN) unit (Chen et al., 2021a). We adopt the HIN unit based on its efficiency. The concatenated features are further fed to a Conv layer and then combined with the input features in a residual manner. Although our main motivations are in the Fourier domain, the use of the spatial branch is necessary. This is because the spatial branch and Fourier branch are complementary. The spatial branch adopts convolution operations that can model the structure dependency well in the spatial domain. The Fourier branch can attend to global information and benefits the disentanglement of energy and degradation.
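For concreteness, a rough PyTorch sketch of a FouSpa-style block, as we read the description, is given below. The channel count, the simplified HIN unit, and the fusion details are our assumptions and may differ from the authors' implementation.

```python
import torch
import torch.nn as nn

class SimpleHIN(nn.Module):
    """Simplified half-instance-normalization unit (assumption: a 3x3 conv whose
    output is instance-normalized on half of the channels)."""
    def __init__(self, ch):
        super().__init__()
        self.conv = nn.Conv2d(ch, ch, 3, padding=1)
        self.norm = nn.InstanceNorm2d(ch // 2, affine=True)
        self.act = nn.LeakyReLU(0.2, inplace=True)

    def forward(self, x):
        x = self.conv(x)
        a, b = torch.chunk(x, 2, dim=1)
        return self.act(torch.cat([self.norm(a), b], dim=1))

class FouSpaBlock(nn.Module):
    def __init__(self, ch=16):
        super().__init__()
        self.amp_conv = nn.Conv2d(ch, ch, 1)   # 1x1 conv on the amplitude
        self.pha_conv = nn.Conv2d(ch, ch, 1)   # 1x1 conv on the phase
        self.spatial = SimpleHIN(ch)
        self.fuse = nn.Conv2d(2 * ch, ch, 3, padding=1)

    def forward(self, x):
        # Fourier branch: enhance amplitude and phase separately, then return to the spatial domain
        f = torch.fft.rfft2(x, norm='backward')
        amp = self.amp_conv(torch.abs(f))
        pha = self.pha_conv(torch.angle(f))
        fourier = torch.fft.irfft2(amp * torch.exp(1j * pha), s=x.shape[-2:], norm='backward')
        # Spatial branch and residual fusion
        spatial = self.spatial(x)
        return x + self.fuse(torch.cat([fourier, spatial], dim=1))
```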
Adjustment Block. The Adjustment Block is the main structure of the HRNet, and it is lightweight. As shown in Figure 4(b), the Adjustment Block shares a similar structure with the FouSpa Block. Differently, in the Fourier branch, with the refined amplitude Ar features obtained from the LRNet, we use Spatial Feature Transform (SFT) (Wang et al., 2018) to modulate the amplitude features of the input xpu via simple affine transformation. Such a transformation or adjustment is possible because the luminance, as global information, manifests as amplitude components, and the amplitude patterns of an HR scale and its LR scales are similar (as discussed in Sec. 2.1). Note that we cannot modulate the phase because of its periodicity. Besides, we do not find an explicit relationship between the HR scale’s phase and its LR scales. However, we empirically find that concatenating the refined phase Pr features achieved from the LRNet with the phase features of the input xpu improves the final performance. We thus apply such concatenation in our solution.
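The amplitude modulation and phase guidance can be sketched as follows; this covers only the Fourier branch of the Adjustment Block, and the channel counts and fusion choices are our assumptions rather than the authors' code.

```python
import torch
import torch.nn as nn

class AdjustmentFourierBranch(nn.Module):
    def __init__(self, ch=16):
        super().__init__()
        # SFT-style affine parameters predicted from the LR-refined amplitude Ar
        self.to_scale = nn.Conv2d(ch, ch, 1)
        self.to_shift = nn.Conv2d(ch, ch, 1)
        # phase guidance: concatenate the LR-refined phase Pr and fuse with a 1x1 conv
        self.pha_fuse = nn.Conv2d(2 * ch, ch, 1)

    def forward(self, x, amp_ref, pha_ref):
        # amp_ref / pha_ref are the LR-refined Fourier features Ar, Pr; they are assumed
        # to share the spatial size of the HR-branch spectrum (true after 8x pixel-unshuffle)
        f = torch.fft.rfft2(x, norm='backward')
        amp, pha = torch.abs(f), torch.angle(f)
        amp = amp * self.to_scale(amp_ref) + self.to_shift(amp_ref)   # amplitude modulation
        pha = self.pha_fuse(torch.cat([pha, pha_ref], dim=1))         # phase guidance
        return torch.fft.irfft2(amp * torch.exp(1j * pha), s=x.shape[-2:], norm='backward')
```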
Losses. We use an l1 loss to supervise ŷ8 and ŷ. We also add a perceptual loss to supervise ŷ8, while using a perceptual loss on ŷ is impractical because of its high resolution. Instead, we add an SSIM loss Lssim on ŷ. The final loss L is the combination of these losses:
L = ∥ŷ − y∥₁ + 0.0004 × Lssim(ŷ, y) + 0.1 × ∥ŷ8 − y8∥₁ + 0.0002 × ∥VGG(ŷ8) − VGG(y8)∥₂, (1)
where y is the ground truth, y8 is the 8× downsampled version of y, and VGG is the pre-trained VGG19 network, from which we use features at four scales to supervise training (Zhou et al., 2022).
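A sketch of Eq. (1) in PyTorch is given below; ssim_loss and vgg_features are placeholders for an SSIM loss module and a pre-trained VGG19 multi-scale feature extractor, and the mean-reduced L1/L2 terms are our assumption about how the norms are implemented.

```python
import torch.nn.functional as F

def uhdfour_loss(y_hat, y, y_hat8, y8, ssim_loss, vgg_features):
    """Combined loss of Eq. (1): HR L1 + SSIM terms, LR L1 term, and LR multi-scale
    VGG perceptual term."""
    loss = F.l1_loss(y_hat, y)
    loss = loss + 0.0004 * ssim_loss(y_hat, y)
    loss = loss + 0.1 * F.l1_loss(y_hat8, y8)
    loss = loss + 0.0002 * sum(
        F.mse_loss(a, b) for a, b in zip(vgg_features(y_hat8), vgg_features(y8))
    )
    return loss
```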
3 UHD-LL DATASET
We collect a real low-noise/normal-clear paired image dataset that contains 2,150 pairs of 4K UHD data saved in 8bit sRGB format. Several samples are shown in Figure 5.
Images are collected from a camera mounted on a tripod to ensure stability. Two cameras, i.e., a Sony α7 III camera and a Sony Alpha a6300 camera, are used to offer diversity. The ground truth (or normal-clear) image is captured with a small ISO ∈ [100, 800] in a bright scene (indoor or outdoor). The corresponding low-noise image is acquired by increasing the ISO ∈ [1000, 20000] and reducing the exposure time. Due to the constraints of exposure gears in the cameras, shooting in the large ISO range may produce bright images, which opposes the purpose of capturing low-light and noisy
images. Thus, in some cases, we put a neutral-density (ND) filter with different ratios on the camera lens to capture low-noise images. In this way, we can increase the ISO to generate heavier noise and simultaneously obtain extremely dark images, enriching the diversity of darkness and noise levels.
The main challenge of collecting paired data is to reduce misalignment caused by camera shakes and dynamic objects. We take several measures to ameliorate the issue. Apart from using a tripod, we also use remote control software (Imaging Edge) to adjust the exposure time and ISO value to avoid any physical contact with the camera. To further reduce subtle misalignments, we adopt an image alignment algorithm (Evangelidis & Psarakis, 2008) to estimate the affine matrix and align the low-light image and its ground truth. We improve the alignment method by applying AdaIN (Huang & Belongie, 2017) before the affine matrix estimation to reduce the intensity gap. Finally, we hire annotators to check all paired images carefully and discard those that still exhibit misalignments.
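The alignment step can be approximated with OpenCV's ECC affine estimation (OpenCV ≥ 4.1 for this call signature); the sketch below is our interpretation of the pipeline, with AdaIN approximated by simple mean/std matching of the grayscale images, not the authors' exact procedure. On 4K inputs, the ECC estimation may be run on downscaled copies for speed.

```python
import cv2
import numpy as np

def align_pair(low: np.ndarray, gt: np.ndarray) -> np.ndarray:
    """Warp a low-noise image onto its normal-clear ground truth.
    Inputs are uint8 RGB arrays of the same size; returns the warped low-noise image."""
    low_gray = cv2.cvtColor(low, cv2.COLOR_RGB2GRAY).astype(np.float32)
    gt_gray = cv2.cvtColor(gt, cv2.COLOR_RGB2GRAY).astype(np.float32)

    # AdaIN-style intensity matching to reduce the brightness gap before ECC estimation
    matched = (low_gray - low_gray.mean()) / (low_gray.std() + 1e-6)
    matched = matched * gt_gray.std() + gt_gray.mean()

    warp = np.eye(2, 3, dtype=np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 200, 1e-6)
    _, warp = cv2.findTransformECC(gt_gray, matched, warp, cv2.MOTION_AFFINE, criteria, None, 5)

    h, w = gt.shape[:2]
    return cv2.warpAffine(low, warp, (w, h), flags=cv2.INTER_LINEAR)
```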
We split the UHD-LL dataset into two parts: 2,000 pairs for training and 115 pairs for testing. The training and test partitions are exclusive in their scenes and data. We also ensure consistency in pixel intensity distribution between the training and test splits. More analysis of this data, e.g., the pixel intensity and Signal-to-Noise Ratio (SNR) distributions, can be found in the Appendix.
A comparison between our UHD-LL dataset and existing paired low-light image datasets is presented in Table 1. The LOL dataset (two versions: LOL-v1: 500 images; LOL-v2: 789 images) is most related to our UHD-LL dataset as both focus on real low-light images with noise. The LOL-v2 contains all images of the LOL-v1. In contrast to the LOL dataset, our dataset features a more extensive collection, where diverse darkness and noise levels from rich types of scenes are considered. Moreover, the images of our dataset have higher resolutions than those from the LOL dataset. As shown in Figure 1, the models pre-trained on the LOL dataset cannot handle the cases in our UHD-LL dataset due to its insufficient training data, which are low-resolution and contain mostly mild noise. Different from the SID dataset that focuses on RAW data, our dataset only studies data in sRGB format. The images in the SID dataset are captured in extremely dark scenes. Its diversity of darkness levels and scenes is limited. When these RAW data with extremely low intensity are transformed into sRGB images, some information would be truncated due to the bit depth constraints of 8-bit sRGB images. In this case, it is challenging to train a network for effectively mapping noisy and low-light images to clear and normal-light images using these sRGB images as training data.
4 EXPERIMENTS
Implementation. We implement our method with PyTorch and train it on six NVIDIA Tesla V100 GPUs. We use an Adam optimizer for network optimization. The learning rate is set to 0.0001. A batch size of 6 is applied. We fix the channels of each Conv layer to 16, except for the Conv layers associated with outputs. We use the Conv layer with stride = 2 and 4×4 kernels to implement the 2× downsample operation in the encoder and interpolation to implement the 2× upsample operation in the decoder in the LRNet. Unless otherwise stated, the Conv layer uses stride = 1 and 3×3 kernels. We use the training data in the UHD-LL dataset to train our model. Images are randomly cropped into patches of size 512 × 512 for training. Compared Methods. We include 14 state-of-the-art methods (21 models in total) for our benchmarking study and performance comparison. These methods include 12 light enhancement methods: NPE (TIP’13) (Wang et al., 2013), SRIE (CVPR’16) (Fu et al., 2016), DRBN (CVPR’20) (Yang et al., 2020a), Zero-DCE (CVPR’20) (Guo et al., 2020), Zero-DCE++ (TPAMI’21) (Li et al., 2021b), RUAS (CVPR’21) (Liu et al., 2021b), Zhao et al. (ICCV’21) (Zhao et al., 2021), Enlighten-
GAN (TIP’21) (Jiang et al., 2021), Afifi et al. (CVPR’21) (Afifi et al., 2021), SCI (CVPR’22) (Ma et al., 2022), SNR-Aware (CVPR’22) (Xu et al., 2022), URetinex-Net (CVPR’22) (Wu et al., 2022) and 2 Transformers: Uformer (CVPR’22) (Wang et al., 2022) and Restormer (CVPR’22) (Zamir et al., 2022). We use their released models and also retrain them using the same training data as our method. Note that some methods provide different models trained using different datasets. Due to the heavy models used in Restormer (Zamir et al., 2022) and SNR-Aware (Xu et al., 2022), we cannot infer the full-resolution results of both methods on UHD images, despite using a GPU with 48G memory. Following previous UHD study (Zheng et al., 2021), we resort to two strategies for this situation: (1) We downsample the input to the largest size that the model can handle and then resize the result to the original resolution, denoted by the subscript ‘resize’. (2) We split the input into four patches without overlapping and then stitch the result, denoted by the subscript ‘stitch’.
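The 'stitch' strategy is straightforward to implement; a minimal sketch (our own, assuming even H and W and a model that preserves resolution) is:

```python
import torch

@torch.no_grad()
def infer_stitch(model, x: torch.Tensor) -> torch.Tensor:
    """Split a (1, C, H, W) UHD input into four non-overlapping patches, enhance each,
    and stitch the outputs back together."""
    _, _, h, w = x.shape
    hh, hw = h // 2, w // 2
    out = torch.zeros_like(x)
    for i in (0, 1):
        for j in (0, 1):
            ys, xs = i * hh, j * hw
            out[..., ys:ys + hh, xs:xs + hw] = model(x[..., ys:ys + hh, xs:xs + hw])
    return out
```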
Evaluation Metrics. We employ full-reference image quality assessment metrics PSNR, SSIM (Wang et al., 2004), and LPIPS (Alex version) (Zhang et al., 2018) to quantify the performance of different methods. We also adopt the non-reference image quality evaluator (NIQE) (Mittal et al., 2013) and the multi-scale image quality Transformer (MUSIQ) (trained on KonIQ-10k dataset) (Ke et al., 2021) for assessing the restoration quality. We notice that the quantitative results reported by different papers diverge. For a fair comparison, we adopt the commonly-used IQA PyTorch Toolbox (https://github.com/chaofengc/IQA-PyTorch) to compute the quantitative results of all compared methods. We also test the trainable parameters and running time for processing UHD 4K data.
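With the IQA PyTorch Toolbox, the evaluation can be scripted roughly as below; the exact metric names and call signatures reflect our understanding of the toolbox and may vary across versions.

```python
import pyiqa
import torch

device = 'cuda' if torch.cuda.is_available() else 'cpu'
psnr = pyiqa.create_metric('psnr', device=device)
ssim = pyiqa.create_metric('ssim', device=device)
lpips = pyiqa.create_metric('lpips', device=device)
niqe = pyiqa.create_metric('niqe', device=device)
musiq = pyiqa.create_metric('musiq', device=device)

def evaluate(pred: torch.Tensor, gt: torch.Tensor) -> dict:
    """pred/gt: (1, 3, H, W) tensors in [0, 1]; NIQE and MUSIQ are no-reference."""
    return {
        'PSNR': psnr(pred, gt).item(),
        'SSIM': ssim(pred, gt).item(),
        'LPIPS': lpips(pred, gt).item(),
        'NIQE': niqe(pred).item(),
        'MUSIQ': musiq(pred).item(),
    }
```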
4.1 BENCHMARKING EXISTING MODELS
To validate the performance of existing LLIE methods that were trained using their original training data, we directly use the released models for evaluation on the UHD low-light images. These original training datasets include LOL (Wei et al., 2018), MIT-Adobe-FiveK (Bychkovsky et al., 2011), Exposure-Errors (Afifi et al., 2021), SICE (Cai et al., 2018), LSRW (Hai et al., 2024), and DarkFace (Yang et al., 2020b). EnlightenGAN uses assembled training data from existing datasets (Wei et al., 2018; Dang-Nguyen et al., 2015; Kalantari & Ramamoorthi, 2017b; Cai et al., 2018). In addition, the LOL-v1 and LOL-v2 contain real low-light images while LOL-syn is a synthetic dataset. Due to limited space, we only show relatively good results. As shown in Figure 6, all methods can improve the luminance of the input image. However, they fail to produce visually pleasing results. DRBN and EnlightenGAN introduce artifacts. RUAS-LOL and RUAS-DarkFace yield over-exposed results. Color deviation is observed in the results of EnlightenGAN and Afifi et al. All methods cannot handle the noise well and even amplify noise.
We also summarize the quantitative performance of different methods and verify the effectiveness of commonly used non-reference metrics for UHD low-light images in Table 2. URetinex-Net achieves the highest PSNR score while SNR-Aware-LOLv1 is the best performer in terms of SSIM and
LPIPS. For non-reference metrics, SCI-difficult, Zhao et al.-LOL, and RUAS-LOL are the winners under MUSIQ, NIQE, and NIMA, respectively. From Figure 6 and Table 2, we found the non-reference metrics designed for generic image quality assessment cannot accurately assess the subjective quality of the enhanced UHD low-light images. For example, RUAS-LOL suffers from obvious over-exposure in the result while it is the best performer under the NIMA metric.
In summary, the performance of existing released models is unsatisfactory when they are used to enhance the UHD low-light images. The darkness, noise, and artifacts still exist in the results. Compared with luminance enhancement, noise is the more significant challenge for these methods. No method can handle the noise issue well. The joint task of luminance enhancement and noise removal raises a new challenge for LLIE, especially under limited computational resources. We also observe a gap between visual results and the scores of non-reference metrics for UHD LLIE. The gap calls for more specialized non-reference metrics for UHD LLIE.
4.2 COMPARING RETRAINED MODELS
Besides the released models, we also retrain existing methods on our UHD-LL training data and compare their performance with our method. Due to the limited space, we only compare our method with several good performers. More results can be found in the Appendix. As shown in Figure 7, our UHDFour produces a clear and normal-light result close to the ground truth. In comparison, ZeroDCE++, RUAS, Afifi et al., SCI, and Restormer experience color deviations. Zero-DCE, ZeroDCE++, RUAS, Zhao et al., Afifi et al., and SCI cannot remove the noise due to the limitations of their network designs. These methods mainly focus on luminance enhancement. SNR-Aware, Uformer, and Restormer have strong modeling capability because of the use of Transformer structures. However, the three methods still leave noise on the results and introduce artifacts.
The quantitative comparison is presented in Table 3. Our UHDFour achieves state-of-the-art performance in terms of PSNR, SSIM, and LPIPS scores and outperforms the compared methods by a
large margin. The Transformer-based SNR-Aware and Restormer rank the second best. Our method has the fastest processing speed for UHD images as most computation is conducted in the LR space.
To further verify the effectiveness of our network, we compare our approach with several methods, including Retinex-Net Wei et al. (2018), Zero-DCE (Guo et al., 2020), AGLLNet (Lv et al., 2021), Zhao et al. (Zhao et al., 2021), RUAS (Liu et al., 2021b), SCI (Ma et al., 2022), and URetinex-Net (Wu et al., 2022), that were pre-trained or fine-tuned on the LOL-v1 and LOL-v2 datasets (Wei et al., 2018). Due to the mild noise and low-resolution images in
the LOL-v1 and LOL-v2 datasets, we change the 8× downsample and upsample operations to 2× and retrain our network. Such characteristics of the LOL-v1 and LOL-v2 datasets prevent us from showing the full potential of our method in removing noise and processing high-resolution images. Even though our goal is not to pursue state-of-the-art performance on the LOL-v1 and LOL-v2 datasets, our method achieves satisfactory performance as presented in Table 4. The visual results are provided in the Appendix.
4.3 ABLATION STUDY
We present ablation studies to demonstrate the effectiveness of the main components in our design. For the FouSpa Block, we remove the Fourier branch (FB) (#1), remove the Spatial branch (SB) (#2), and replace the FouSpa Block (i.e., without FB and SB) with the Residual Block of comparable parameters (#3). We also replace the FB with the SB (i.e., using two SB) (#4). For the Adjustment Block, we remove the Amplitude Modulation (AM) (#5), remove the Phase Guidance (PG) (#6), remove the SB (#7), and remove both AM and PG (#8). We also replace the AM and PG with two SB (#9), and
replace the Adjustment Block with the Residual Block of comparable parameters (#10). For the final output, we remove the concatenation of the LR normal-clear result (ŷ8), indicated as #11. We also replace all FB with SB, indicated as #12. Unless otherwise stated, all training settings remain the same as in the implementation of the full model, denoted as #13.
The quantitative comparison of the ablated models on the UHD-LL testing set is presented in Table 5. We also show the visual comparison of some ablated models in the Appendix. As shown, all the key designs contribute to the best performance of the full model. Without the Fourier branch (#1), the quantitative scores significantly drop. The result suggests that processing amplitude and phase separately improves the performance of luminance enhancement and noise removal. From the results of #2, the Spatial branch also boosts the performance. However, replacing the FouSpa Block with the Residual Block (#3) cannot achieve comparable performance with the full model (#13), indicating the effectiveness of the FouSpa Block. For the Adjustment Block, the Amplitude Modulation (#5), Phase Guidance (#6), and Spatial branch (#7) jointly verify its effectiveness. Such a block cannot be replaced by a Residual Block (#10). From the results of #11, we can see that it is necessary to estimate the LR result. In addition, replacing the Fourier branch with spatial branch (#4,#9,#12) cannot achieve comparable performance with the full model (#13), showing the efficacy of Fourier branch.
5 CONCLUSION
The success of our method is inspired by the characteristics of real low-light and noisy images in the Fourier domain. Thanks to the unique design of our network that handles luminance and noise in the Fourier domain, it outperforms state-of-the-art methods in UHD LLIE with appealing efficiency. With the contribution of the first real UHD LLIE dataset, it becomes possible to compare existing methods on real UHD low-light images. Our experiments are limited to image enhancement; we have not provided data and benchmarks in the video domain. Our exploration has not considered adversarial losses due to memory constraints. Moreover, as our data is saved in sRGB format, the models trained on our data may fail in processing extreme cases, in which information is lost due to the limited bit depth. HDR data may be suitable for these cases. Nevertheless, we believe our method and the dataset can bring new opportunities and challenges to the community. The usefulness of Fourier operations may go beyond our work and see potential in areas like image decomposition and disentanglement. With improved efficiency, it may be adopted for applications that demand real-time response, e.g., enhancing the perception of autonomous vehicles in the dark.
6 ETHICS STATEMENT
This study focuses on low-light image enhancement and does not involve any ethics issues. The dataset proposed in this paper also does not involve privacy issues.
7 ACKNOWLEDGEMENT
This study is supported under the RIE2020 Industry Alignment Fund Industry Collaboration Projects (IAF-ICP) Funding Initiative, as well as cash and in-kind contribution from the industry partner(s). It is also partially supported by the NTU NAP grant. Chun-Le Guo is also supported by MindSpore, CANN, and Ascend AI Processor.
A RELATED WORK
Low-Light Image Enhancement Methods. The focus of our work is on deep learning-based LLIE methods (Li et al., 2021a; Liu et al., 2021a). Wang et al. (2019) proposed a network to enhance underexposed photos by estimating an image-to-illumination mapping. EnlightenGAN (Jiang et al., 2021) proposed an attention-guided network to make the generated results indistinguishable from real normal-light images. By formulating light enhancement as a task of image-specific curve estimation that can be trained with non-reference losses, Zero-DCE (Guo et al., 2020) and Zero-DCE++ (Li et al., 2021b) obtain good brightness enhancement. Zhao et al. (2021) treated underexposed image enhancement as image feature transformation between the underexposed image and its paired enhanced version. Liu et al. (2021b) proposed a Retinex model-inspired unrolling method, in which the network structure is obtained by neural architecture search. Afifi et al. (2021) proposed a coarse-to-fine network for exposure correction. Ma et al. (2022) proposed a self-calibrated illumination learning framework using unsupervised losses. Wu et al. (2022) combined the Retinex model with a deep unfolding network, which unfolds an optimization problem into a learnable network. Xu et al. (2022) proposed to exploit the Signal-to-Noise Ratio (SNR)-aware Transformer and convolutional models for LLIE. In this method, long-range attention is used for the low SNR regions while short-range attention (convolutional layers) is used for other regions.
Different from these works, our network takes the challenging joint luminance enhancement, noise removal, and high-resolution constraint of UHD low-light images into account in the Fourier domain, offering new insights into UHD LLIE and achieving better performance.
Low-Light Image Enhancement Datasets. The LOL dataset (Wei et al., 2018) contains pairs of low/normal-light images saved in RGB format, in which the low-light images are collected by changing the exposure time and ISO. Due to the small size, it only covers a small fraction of the noise and darkness levels. The MIT-Adobe FiveK dataset (Bychkovsky et al., 2011) includes paired low-/high-quality images, where the high-quality images are retouched by five experts. The low-quality images are treated as low-light images in some LLIE methods. However, this dataset was originally collected for global tone adjustment, and thus, it ignores noise in its collection. Based on the MIT-Adobe FiveK dataset, a multi-exposure dataset, Exposure-Errors, is rendered to emulate a wide range of exposure errors (Afifi et al., 2021). Similar to the MIT-Adobe FiveK dataset, the Exposure-Errors dataset also neglects the noise issue. SID (Chen et al., 2018) is a dataset of RAW data.
The images of the SID dataset have two different sensor patterns (i.e., Bayer pattern and APS-C X-Trans pattern). Due to the specific data pattern, the deep models trained on this dataset are not versatile as they require RAW data with the same pattern as input. Besides, a long-exposure reference image corresponds to multiple short-exposure images, leading to limited scene diversity.
Unlike existing LLIE datasets that either omit noise, capture limited numbers of images, require specific sensor patterns, or exclude UHD images, we propose a real UHD LLIE dataset that contains low-noise/normal-clear image pairs with diverse darkness and noise levels captured in different scenarios. A comprehensive comparison is presented in Sec. 3.
Image Decomposition-based Enhancement. There are some image decomposition-based enhancement methods. For LLIE, Xu et al. (2020) proposed a frequency-based decomposition-and-enhancement model, which suppresses noise in the low-frequency layer and enhances the details in the high-frequency layer. Yang et al. (2020a) proposed a band representation-based semi-supervised model. This method consists of two stages: recursive band learning and band recomposition. Wei et al. (2018) also decomposed a low-light image into an illumination component and a reflectance component according to the Retinex model and then separately enhanced the components. Image decomposition was also used in shadow removal, in which two networks are used to predict shadow parameters and a matte layer (Le & Samaras, 2019).
In addition, assuming that phase preserves high-level semantics while the amplitude contains low-level features, Guo et al. (2022) proposed an FPNet that consists of two stages for image de-raining. The first stage is to restore the amplitude of rainy images. The second stage then refines the phase of
the restored rainy images. To address the limitations of the ResBlock, which may overlook low-frequency information and fail to model long-distance information, Mao et al. (2021) proposed a Residual Fast Fourier Transform with Convolution Block for image deblurring. The Block contains a spatial residual stream and an FFT stream. Pham et al. (2021) proposed a complex-valued neural network with Fourier transform for image denoising. The complex-valued network first converts the noisy image to complex values via Fourier transform, then estimates a complex filter which is applied to the converted noisy image to approximate the complex values of the ground truth image.
Although these methods decompose an image or use Fourier transform in the networks, our design has different motivations. Our motivations are inspired by the uniqueness of the Fourier domain for UHD low-light image enhancement as presented in Figure 2, i.e., luminance and noise can be ‘decomposed’ to a certain extent in the Fourier domain, and an HR image and its LR versions share similar amplitude patterns. We also design specific blocks to use these characteristics and solve the high-resolution constraint issue, which have not been explored in previous works.
HDR Imaging. There are some high dynamic range (HDR) reconstruction works (Salih et al., 2012; Wang & Yoon, 2021) that are related to low-light image enhancement. HDR reconstruction can also be grouped into multi-image HDR reconstruction and single-image reconstruction. The multi-image methods require fusing multiple bracketed-exposure low dynamic range (LDR) images. To mitigate artifacts caused by image fusion, several technologies (Srikantha & Sidibe, 2012) have been proposed. For single-image methods, deep learning has achieved impressive performance. In addition to learning end-to-end LDR-to-HDR networks (Kalantari & Ramamoorthi, 2017a; Wu et al., 2018; Yang et al., 2018; Zhang & Lalonde, 2017; Hu et al., 2022b; Eilertsen et al., 2017), some methods either synthesize multiple LDR images with different exposures (Endo et al., 2017) or model the inverse process of the image formation pipeline (Liu et al., 2020). Besides, some works also focus on HDR reconstruction and denoising. For example, Hu et al. (2022a) propose a joint multi-scale denoising and tone-mapping framework, which prevents the tone-mapping operator from overwhelming the denoising operator. Chen et al. (2021b) take noise and quantization into consideration when designing the HDR reconstruction network.
B ANALYSIS OF UHD-LL DATASET
We first show the intensity histograms and the SNR distribution of our UHD-LL dataset in Figure 8. The SNR is computed using the same algorithm as the recent LLIE method (Xu et al., 2022). As shown in Figure 8(a) and Figure 8(b), when splitting the training and testing sets, we make the pixel intensity distributions of the training and testing sets consistent to guarantee the rationality of the dataset split. We also plot the SNR distribution of the dataset to show the noise levels in Figure 8(c). The SNR distribution suggests the wide and challenging SNR ranges of our dataset. We show more samples and amplify the resolution of our UHD-LL data in Figure 9.
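For reference, a rough per-pixel SNR estimate in the spirit of the computation used by SNR-Aware (Xu et al., 2022) can be written as below; the exact kernel size and denoiser used in the original computation are assumptions on our part.

```python
import torch
import torch.nn.functional as F

def snr_map(img: torch.Tensor, k: int = 5) -> torch.Tensor:
    """Estimate a per-pixel SNR map for a (N, 3, H, W) image in [0, 1]: a box-blurred
    grayscale serves as the rough clean signal, the absolute residual as noise."""
    gray = img.mean(dim=1, keepdim=True)
    blur = F.avg_pool2d(gray, kernel_size=k, stride=1, padding=k // 2)
    noise = (gray - blur).abs()
    return blur / (noise + 1e-6)
```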
C FURTHER ANALYSIS OF MOTIVATION
Recall that in Sec. 2.1, we discussed two observations that serve as the motivation to design our network. In particular, (a) Swapping the amplitude of a low-noise image with that of its corresponding normal-clear image produces a normal-noise image and a low-clear image, and (b) The amplitude patterns of an HR normal-clear image and its LR versions are similar and are different from the corresponding HR low-noise counterpart.
We first show more motivation cases in Figures 10, 11, and 12. These visual results suggest the same tendency as the motivations shown in the main paper.
[Figure residue from the motivation examples (Figures 10–12): panel (a) shows the amplitude and phase of real low-noise and normal-clear images and the compositional normal-noise and low-clear images obtained by swapping amplitudes; panel (b) shows the amplitudes of the normal-clear image at full resolution, 4× downsampling, and 8× downsampling, compared with the low-noise counterpart at full resolution.]
To further analyze our first motivation, we compare the luminance and noise of real normal-clear and low-noise images and compositional low-clear and normal-noise images. To compare the luminance similarity, we compute the average luminance. For real noise level measurement, there is no corresponding metric. Thus, we use the recent multi-scale Transformer MUSIQ (Ke et al., 2021) for image quality assessment. MUSIQ is not sensitive to luminance changes. Moreover, it can be used to measure the noise level as its training dataset contains noisy data and it shows state-of-the-art performance for assessing the quality of natural images. A large MUSIQ value reflects better image quality with less noise and artifacts. We select 50 images from the UHD-LL dataset and compare the average scores. We present the quantitative results in Table 6.
As presented in Table 6, the real normal-clear images have luminance values similar to those of the compositional normal-noise images, while they have high MUSIQ values similar to those of the compositional low-clear images. Similarly, the real low-noise images have luminance values similar to those of the compositional low-clear images, while they have low MUSIQ values similar to those of the compositional normal-noise images. The results further suggest that luminance and noise can be decomposed to a certain extent in the Fourier domain. Specifically, luminance mainly manifests in the amplitude, while noise is closely related to the phase.
For the second motivation, it is difficult to quantify the similarity of amplitude spectra of different sizes, as they cannot be directly interpolated, so full-reference metrics cannot be used in this situation. Hence, we show more visual examples in Figure 10(b), Figure 11(b), and Figure 12(b). The extra examples support our motivation.
D VISUALIZATION IN THE NETWORK
We show the changes of amplitude and phase in our proposed UHDFour network in Figure 13. As shown, the amplitude and phase of our final result are similar to those of the ground truth. Moreover, the amplitude and phase of the low-resolution output ŷ8 are also similar to those of its corresponding ground truth y8. We wish to emphasize that although noise is related to phase, it cannot be explicitly visualized as a phase image, since phase only encodes the initial position of each frequency component; only the combination of amplitude and phase expresses a complete image. Moreover, in the feature domain, such relevance is even more difficult to present in an imagery format. Thus, we suggest focusing on the similarity between the final result and the ground truth, rather than on the intermediate features and phase.
E MORE RESULTS ON RELEASED MODELS
We present more comparisons among state-of-the-art methods for restoring UHD low-light images in Figures 14, 15, and 16. This is similar to Figure 6 of the main paper, where we compare methods using their original released models. As shown, all existing models cannot handle UHD low-light images well. Since we cannot infer the full-resolution results of SNR-Aware (Xu et al., 2022) on UHD images, despite using a GPU with 48G memory, some obvious borders appear in its results due to the stitching strategy. This phenomenon also indicates that the stitching strategy commonly used in previous UHD data processing methods is inapplicable to the challenging UHD low-light image enhancement task.
F MORE RESULTS ON RETRAINED MODELS ON UHD-LL
We provide more visual comparisons of our method with retrained state-of-the-art methods on the UHD-LL dataset in Figures 17 and 18.
As the results show, the models retrained on the UHD-LL dataset still cannot achieve satisfactory results for UHD low-light image enhancement. Noise and artifacts can still be found in their results. The results suggest that joint luminance enhancement and noise removal in the spatial domain is difficult. Our solution effectively handles this challenging problem by embedding the Fourier transform in a cascaded network, in which luminance and noise can be decomposed to a certain extent and are processed separately.
G MORE RESULTS ON RETRAINED MODELS ON LOL-V1 AND LOL-V2 DATASETS
We also provide more visual comparisons of our method with the models that were pre-trained or fine-tuned on the LOL-v1 and LOL-v2 datasets in Figures 19, 20, and 21.
For the low-light images in the LOL-v1 and LOL-v2 datasets, even though the mild noise and low-resolution images prevent us from showing the full potential of our method in removing noise and processing high-resolution images, our method still achieves satisfactory performance. The results suggest the potential of our solution in different circumstances.
H ABLATION STUDY
We show some visual results of the ablated models in Figure 22. Without the Fourier branch (#1), the ablated model cannot effectively enhance luminance and remove noise. Although the result of #2 looks better than that of #1, removing the Spatial branch (#2) also degrades the final result. Directly replacing the FouSpa Block with a Residual Block of comparable parameters (#3) cannot obtain a satisfactory result, suggesting that the good performance of the FouSpa Block is not due to the use of more parameters. Removing the Amplitude Modulation (#5) yields a visually unpleasant result. The Phase Guidance (#6) and Spatial branch (#7) in the Adjustment Block also contribute to the good performance of the full model. Directly replacing the Adjustment Block with a Residual Block of comparable parameters (#10) still cannot obtain a satisfactory result. Introducing the estimation of the low-resolution result (removed in #11) also leads to a clearer result. The visual comparisons further show the significance of the proposed FouSpa Block and Adjustment Block in dealing with the intricate issue of joint luminance enhancement and noise removal while remaining efficient.
Additionally, we list the computational and time costs of the FFT/IFFT operations for processing different scales of features. In our network, we fix the feature channels to 16 and use three different scales: 8×, 16×, and 32× downsampled from the original resolution (i.e., 3840 × 2160). The results are presented in Table 7. The FFT and IFFT operations have the same computational cost. The difference in running time may be due to the different optimization strategies used in PyTorch. | 1. What is the main contribution of the paper regarding joint luminance enhancement and denoising in low-light imaging?
2. What are the strengths and weaknesses of the proposed network architecture, UHDFour?
3. What are the limitations of the paper regarding its claims, experiments, and comparisons with other works?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any questions or suggestions regarding the paper that the reviewer has? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
To tackle the problem of joint luminance enhancement and denoising in low-light imaging, the author proposed 1) a new network architecture, UHDFour, that utilizes authors' observations on noise and signal luminance's relation in the amplitude and phase images in the frequency domain, and 2) a new dataset, UHD-LL, that contains high-resolution 4K low-light / normal light pairs.
Strengths And Weaknesses
Strength
A reasonably sized, high-resolution new dataset for joint luminance enhancement and denoising is provided.
The proposed network architecture cleverly mixes and matches the amplitude and phase of the low light input and normal light GT to constrain the denoising network.
Weakness
The importance of the Fourier branch (FB) compared to a more conventional spatial branch (SB) in FouSpa is less than expected and should be further validated. Swapping FB with SB brings about a 0.6 dB difference in PSNR, as ablation studies #1 and #2 show, which made me wonder if 2xSB (i.e., increasing the size of the SB) would have similar performance as FB+SB in FouSpa. Moreover, for the adj. block, it is necessary to show the performance of SB only and of SB increased to a size comparable to SB+AM+PG. Lastly, there should be a Fourier-free architecture where there are only SBs (of the equivalent size of SB + FB + AM + PG + SB, e.g., #9) in both FouSpa and the Adj. Block. The authors should consider adding those in the final version.
What are the computational and time costs of FFT/IFFT-related operations? It will be useful if the author can add this to the ablation study table.
Even for consumer audio/video market, "HDR" and "4K" are oftentimes mentioned together. Moreover, HDR is common for low-light scenes as light sources can often be seen in night-time photography. However, HDR scenes are not considered in this paper. I suggest the author mention such drawbacks of the proposed dataset in discussion and/or indication of the dynamic range of the proposed datasets (i.e. the specs of the cameras used for the data capture).
Also, since an important aspect of this paper is luminance enhancement, it's natural to discuss HDR tone-mapping. A simple search on joint HDR and deoising leads to many recent papers on the topics. I suggest the authors consider including such papers in the related work section.
Denoising research is at a stage where one has to "pixel peep" to tell the difference. Please provide zoom-in insets of the denoising performance in the result figures.
The proposed UHDFour method is only tested on one external dataset, LOL. It's reasonable to include at least another state-of-the-art dataset
For the dataset comparison table, the authors should add the SID dataset. The argument that SID only works for a particular Bayer pattern is not valid: SID data's Bayer- or Fuji-pattern RAW can be demosaiced to RGB, and all the authors' data also undergoes some demosaicing.
Clarity, Quality, Novelty And Reproducibility
The paper is well-written and easy to follow. The quality is up to ICLR's bar. The method, to my knowledge, is novel, and the datasets would be a nice addition to the research community in low-light imaging. Detailed network layer description or table, or source code instead, should be provided, otherwise, reproducing the paper's results can be difficult given the complexity of the proposed network architecture and various comparison and ablation studies performed. |
ICLR | Title
Embedding Fourier for Ultra-High-Definition Low-Light Image Enhancement
Abstract
Ultra-High-Definition (UHD) photo has gradually become the standard configuration in advanced imaging devices. The new standard unveils many issues in existing approaches for low-light image enhancement (LLIE), especially in dealing with the intricate issue of joint luminance enhancement and noise removal while remaining efficient. Unlike existing methods that address the problem in the spatial domain, we propose a new solution, UHDFour, that embeds Fourier transform into a cascaded network. Our approach is motivated by a few unique characteristics in the Fourier domain: 1) most luminance information concentrates on amplitudes while noise is closely related to phases, and 2) a high-resolution image and its low-resolution version share similar amplitude patterns. Through embedding Fourier into our network, the amplitude and phase of a low-light image are separately processed to avoid amplifying noise when enhancing luminance. Besides, UHDFour is scalable to UHD images by implementing amplitude and phase enhancement under the low-resolution regime and then adjusting the high-resolution scale with few computations. We also contribute the first real UHD LLIE dataset, UHD-LL, that contains 2,150 low-noise/normal-clear 4K image pairs with diverse darkness and noise levels captured in different scenarios. With this dataset, we systematically analyze the performance of existing LLIE methods for processing UHD images and demonstrate the advantage of our solution. We believe our new framework, coupled with the dataset, would push the frontier of LLIE towards UHD. The code and dataset are available at https://lichongyi.github.io/UHDFour/.
1 INTRODUCTION
With the advent of advanced imaging sensors and displays, Ultra-High-Definition (UHD) imaging has witnessed rapid development in recent years. While UHD imaging offers broad applications and makes a significant difference in picture quality, the extra pixels also challenge the efficiency of existing image processing algorithms.
In this study, we focus on one of the most challenging tasks in image restoration, namely low-light image enhancement (LLIE), where one needs to jointly enhance the luminance and remove inherent noises caused by sensors and dim environments. Further to these challenges, we lift the difficulty by demanding efficient processing in the UHD regime.
Despite the remarkable progress in low-light image enhancement (LLIE) (Li et al., 2021a), existing methods (Zhao et al., 2021; Wu et al., 2022; Xu et al., 2022), as shown in Figure 1, show apparent drawbacks when they are used to process real-world UHD low-light images. This is because (1) most methods (Guo et al., 2020; Liu et al., 2021b; Ma et al., 2022) only focus on luminance enhancement and fail in removing noise; (2) some approaches (Wu et al., 2022; Xu et al., 2022) simultaneously enhance luminance and remove noise in the spatial domain, resulting in suboptimal enhancement; (3) existing methods (Wei et al., 2018; Zhao et al., 2021; Wu et al., 2022; Xu et al., 2022) are mainly trained on low-resolution (LR) data, leading to incompatibility with high-resolution (HR) inputs; and (4) some studies (Xu et al., 2022; Zamir et al., 2022) adopt heavy structures and are thus inefficient for processing UHD images. More discussion on related work is provided in the Appendix.
To overcome the challenges aforementioned, we present a new idea for performing LLIE in the Fourier Domain. Our approach differs significantly from existing solutions that process images in the spatial domain. In particular, our method, named as UHDFour, is motivated by our observation of two interesting phenomena in the Fourier domain of low-light noisy images: i) luminance and noise can be decomposed to a certain extent in the Fourier domain. Specifically, luminance would manifest as amplitude while noise is closely related to phase, and ii) the amplitude patterns of images of different resolutions are similar. These observations inspire the design of our network, which handles luminance and noise separately in the Fourier domain. This design is advantageous as it avoids amplifying noise when enhancing luminance, a common issue encountered in existing spatial domain-based methods. In addition, the fact that amplitude patterns of images of different resolutions are similar motivates us to save computation by first processing in the low-resolution regime and performing essential adjustments only in the high-resolution scale.
We also contribute the first benchmark for UHD LLIE. The dataset, named UHD-LL, contains 2,150 low-noise/normal-clear 4K UHD image pairs with diverse darkness and noise levels captured in different scenarios. Unlike existing datasets (Wei et al., 2018; Lv et al., 2021; Bychkovsky et al., 2011) that either synthesize or retouch low-light images to obtain the paired input and target sets, we capture real image pairs. During data acquisition, special care is taken to minimize geometric and photometric misalignment caused by camera shake and dynamic environments. With the new UHD-LL dataset, we design a series of qualitative and quantitative benchmarks to analyze the performance of existing LLIE methods and demonstrate the effectiveness of our method.
Our contributions are summarized as follows: (1) We propose a new solution for UHD LLIE that is inspired by unique characteristics observed in the Fourier domain. In comparison to existing LLIE methods, the proposed framework shows exceptional effectiveness and efficiency in addressing the joint task of luminance enhancement and noise removal in the UHD regime. (2) We contribute the first UHD LLIE dataset, which contains 2,150 pairs of 4K UHD low-noise/normal-clear data, covering diverse noise and darkness levels and scenes. (3) We conduct a systematical analysis of existing LLIE methods on UHD data.
2 OUR APPROACH
In this section, we first discuss our observations in analyzing low-light images in the Fourier domain, and then present the proposed solution.
2.1 OBSERVATIONS IN FOURIER DOMAIN
Here we provide more details to supplement the observations we highlighted in Sec. 1. We analyze real UHD low-light images in the Fourier domain and provide a concise illustration in Figure 2. Specifically, (a) Swapping the amplitude of a low-light and noisy (low-noise) image with that of its corresponding normal-light and clear (normal-clear) image produces a normal-light and noisy (normal-noise) image and a low-light and clear (low-clear) image. We show more examples in the Appendix. The result suggests that the luminance and noise can be decomposed to a certain extent in the Fourier domain. In particular, most luminance information is expressed as amplitudes, and
noises are revealed in phases. This inspires us to process luminance and noise separately in the Fourier domain. (b) The amplitude patterns of an HR normal-clear image and its LR versions are similar and are different from the corresponding HR low-noise counterpart. Such a characteristic offers us the possibility to first enhance the amplitude of an LR scale with more computations and then only make minor adjustments in the HR scale. In this way, most computations can be conducted in the LR space, reducing the computational complexity.
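As an illustrative sketch of observation (ii) (not part of the training pipeline; bilinear downsampling and log-scaled spectra are choices made only for visualization), the amplitude patterns at different scales can be inspected as follows.

import torch
import torch.nn.functional as F

def log_amplitude(img):
    # centered log-amplitude spectrum of a (1, 3, H, W) image tensor
    amp = torch.fft.fftshift(torch.fft.fft2(img), dim=(-2, -1)).abs()
    return torch.log1p(amp)

x_full = torch.rand(1, 3, 1080, 1920)   # placeholder; load a real UHD image in practice
x_down8 = F.interpolate(x_full, scale_factor=1 / 8, mode='bilinear', align_corners=False)
spec_full, spec_down8 = log_amplitude(x_full), log_amplitude(x_down8)
# Plotting spec_full and spec_down8 (e.g., with matplotlib) shows similar global
# amplitude patterns despite the 8x resolution gap.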
2.2 THE UHDFOUR NETWORK
[Figure 4 residue: (a) the FouSpa Block, with a Fourier branch (FFT, 1x1 Conv on amplitude A and phase P, IFFT) in parallel with a Spatial branch (HIN unit), followed by concatenation and a Conv layer; (b) the Adjustment Block, with amplitude modulation, phase guidance, and a Spatial branch.]
Overview. UHDFour aims to map a UHD low-noise input image x ∈ R^{H×W×C} to its corresponding normal-clear version y ∈ R^{H×W×C}, where H, W, and C represent the height, width, and number of channels, respectively. Figure 3 shows the overview of UHDFour. It consists of an LRNet and an HRNet.
Motivated by the observation in Sec. 2.1, LRNet carries most of the computation of the whole network. Its input is first embedded into the feature domain by a Conv layer. To reduce computational complexity, we downsample the features to 1/8 of the original resolution by bilinear interpolation. Then, the LR features go through an encoder-decoder network, which contains four FouSpa Blocks with two 2× downsample and two 2× upsample operations, producing output features. The output features are fed both to an FFT, which yields the refined amplitude Ar and phase Pr features, and to a Conv layer, which estimates the LR normal-clear image ŷ8 ∈ R^{H/8×W/8×C}. The outputs of LRNet, coupled with the input, are fed to the HRNet. Specifically, the input x is first reshaped to xpu ∈ R^{H/8×W/8×64C} via PixelUnshuffle (8× ↓) to preserve the original information, and then fed to an Adjustment Block. With the refined amplitude Ar and phase Pr features, the Adjustment Block produces adjusted features that are reshaped to the original height and width of the input x via PixelShuffle (8× ↑). Finally, we resize the estimated LR normal-clear image ŷ8 to the original size of the input x via bilinear interpolation and combine it with the upsampled features to estimate the final HR normal-clear image ŷ.
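The following is a simplified skeleton of this pipeline (a sketch only: the LRNet body and the Adjustment Block are replaced by single-convolution stand-ins, and the exact layer configurations are assumptions rather than the released implementation).

import torch
import torch.nn as nn
import torch.nn.functional as F

class UHDFourSketch(nn.Module):
    # Simplified skeleton of UHDFour; H and W must be divisible by 8.
    def __init__(self, ch=16):
        super().__init__()
        self.embed = nn.Conv2d(3, ch, 3, padding=1)
        self.lr_body = nn.Conv2d(ch, ch, 3, padding=1)        # stand-in for the FouSpa encoder-decoder
        self.to_lr_rgb = nn.Conv2d(ch, 3, 3, padding=1)       # predicts the LR normal-clear image
        self.adjust = nn.Conv2d(3 * 64, 3 * 64, 3, padding=1) # stand-in for the Adjustment Block

    def forward(self, x):
        feat = F.interpolate(self.embed(x), scale_factor=1 / 8,
                             mode='bilinear', align_corners=False)  # LRNet works at 1/8 resolution
        feat = self.lr_body(feat)
        spec = torch.fft.fft2(feat)
        amp_r, pha_r = spec.abs(), spec.angle()               # refined amplitude / phase for the HRNet
        y8 = self.to_lr_rgb(feat)                             # LR normal-clear estimate
        x_pu = F.pixel_unshuffle(x, 8)                        # (B, 192, H/8, W/8), keeps original pixels
        adj = self.adjust(x_pu)                               # the real model also consumes amp_r / pha_r here
        hr = F.pixel_shuffle(adj, 8)                          # back to full resolution
        y = hr + F.interpolate(y8, scale_factor=8, mode='bilinear', align_corners=False)
        return y, y8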
FouSpa Block. In Sec. 2.1, we observe that luminance and noise can be decomposed in the Fourier domain. Hence, we design the FouSpa Block to implement, in parallel, amplitude and phase enhancement in the Fourier domain and feature enhancement in the spatial domain. As shown in Figure 4(a), the input features are forked into the Fourier and Spatial branches. In the Fourier branch, an FFT is first used to obtain the amplitude component (A) and phase component (P). The two components are separately fed to two Conv layers with 1×1 kernels. Note that when processing amplitude and phase, we only use 1×1 kernels to avoid damaging the structure information. Then, we transform them back to the spatial domain via an IFFT and concatenate them with the spatial features enhanced by a Half Instance Normalization (HIN) unit (Chen et al., 2021a). We adopt the HIN unit because of its efficiency. The concatenated features are further fed to a Conv layer and then combined with the input features in a residual manner. Although our main motivations are in the Fourier domain, the spatial branch is necessary because the spatial and Fourier branches are complementary: the spatial branch adopts convolution operations that model structural dependencies well in the spatial domain, while the Fourier branch can attend to global information and benefits the disentanglement of energy and degradation.
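A sketch of this block is given below (the HIN unit is replaced by a plain convolution stand-in and the channel sizes are assumptions; it illustrates the data flow rather than the released code).

import torch
import torch.nn as nn

class FouSpaBlockSketch(nn.Module):
    def __init__(self, ch=16):
        super().__init__()
        self.amp_conv = nn.Conv2d(ch, ch, 1)     # 1x1 kernels so structure information is not damaged
        self.pha_conv = nn.Conv2d(ch, ch, 1)
        self.spatial = nn.Sequential(            # simplified stand-in for the HIN unit
            nn.Conv2d(ch, ch, 3, padding=1), nn.LeakyReLU(0.2, inplace=True))
        self.fuse = nn.Conv2d(2 * ch, ch, 3, padding=1)

    def forward(self, x):
        spec = torch.fft.fft2(x)
        amp, pha = self.amp_conv(spec.abs()), self.pha_conv(spec.angle())
        fourier = torch.fft.ifft2(amp * torch.exp(1j * pha)).real   # back to the spatial domain
        return x + self.fuse(torch.cat([fourier, self.spatial(x)], dim=1))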
Adjustment Block. The Adjustment Block is the main structure of the HRNet, and it is lightweight. As shown in Figure 4(b), the Adjustment Block shares a similar structure with the FouSpa Block. Differently, in the Fourier branch, with the refined amplitude Ar features obtained from the LRNet, we use Spatial Feature Transform (SFT) (Wang et al., 2018) to modulate the amplitude features of the input xpu via simple affine transformation. Such a transformation or adjustment is possible because the luminance, as global information, manifests as amplitude components, and the amplitude patterns of an HR scale and its LR scales are similar (as discussed in Sec. 2.1). Note that we cannot modulate the phase because of its periodicity. Besides, we do not find an explicit relationship between the HR scale’s phase and its LR scales. However, we empirically find that concatenating the refined phase Pr features achieved from the LRNet with the phase features of the input xpu improves the final performance. We thus apply such concatenation in our solution.
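A sketch of the amplitude modulation is shown below (the 1x1 layers that predict the scale and shift are assumptions; only the SFT-style affine form is taken from the description above).

import torch.nn as nn

class AmplitudeSFTSketch(nn.Module):
    def __init__(self, ch=16):
        super().__init__()
        self.to_scale = nn.Conv2d(ch, ch, 1)
        self.to_shift = nn.Conv2d(ch, ch, 1)

    def forward(self, amp_hr, amp_refined):
        # amp_hr: amplitude features of the pixel-unshuffled HR input
        # amp_refined: refined amplitude features A_r from the LRNet, same spatial size
        scale, shift = self.to_scale(amp_refined), self.to_shift(amp_refined)
        return amp_hr * (1 + scale) + shift    # simple affine (SFT-style) modulation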
Losses. We use the l1 loss to supervise ŷ8 and ŷ. We also add a perceptual loss to supervise ŷ8, while using a perceptual loss on ŷ is impractical because of its high resolution. Instead, we add an SSIM loss Lssim on ŷ. The final loss L is the combination of these losses:
L = ∥ŷ−y∥1+0.0004×Lssim(ŷ, y)+0.1×∥ŷ8−y8∥1+0.0002×∥VGG(ŷ8)−VGG(y8)∥2, (1)
where y is the ground truth, y8 is the 8× downsampled version of y, and VGG is the pre-trained VGG19 network, in which we use four scales to supervise training (Zhou et al., 2022).
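A sketch of this training loss is given below (the SSIM loss is assumed to be supplied externally, e.g., 1 - SSIM from kornia or pytorch-msssim, and only a single VGG scale is used here instead of the four scales described above).

import torch.nn.functional as F
from torchvision.models import vgg19

_vgg = vgg19().features[:16].eval()           # load ImageNet weights in practice
for p in _vgg.parameters():
    p.requires_grad_(False)

def uhdfour_loss(y_hat, y, y8_hat, y8, ssim_loss):
    l1_hr = F.l1_loss(y_hat, y)
    l1_lr = F.l1_loss(y8_hat, y8)
    perc = F.mse_loss(_vgg(y8_hat), _vgg(y8))  # single-scale VGG perceptual term
    return l1_hr + 0.0004 * ssim_loss(y_hat, y) + 0.1 * l1_lr + 0.0002 * perc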
3 UHD-LL DATASET
We collect a real low-noise/normal-clear paired image dataset that contains 2,150 pairs of 4K UHD data saved in 8bit sRGB format. Several samples are shown in Figure 5.
Images are collected from a camera mounted on a tripod to ensure stability. Two cameras, i.e., a Sony α7 III camera and a Sony Alpha a6300 camera, are used to offer diversity. The ground truth (or normal-clear) image is captured with a small ISO ∈ [100, 800] in a bright scene (indoor or outdoor). The corresponding low-noise image is acquired by increasing the ISO ∈ [1000, 20000] and reducing the exposure time. Due to the constraints of the exposure settings in the cameras, shooting in the large ISO range may produce bright images, which defeats the purpose of capturing low-light and noisy
images. Thus, in some cases, we put a neutral-density (ND) filter with different ratios on the camera lens to capture low-noise images. In this way, we can increase the ISO to generate heavier noises and simultaneously obtain extremely dark images, enriching the diversity of darkness and noise levels.
The main challenge of collecting paired data is to reduce misalignment caused by camera shakes and dynamic objects. We take several measures to ameliorate the issue. Apart from using a tripod, we also use remote control software (Imaging Edge) to adjust the exposure time and ISO value to avoid any physical contact with the camera. To further reduce subtle misalignments, we adopt an image alignment algorithm (Evangelidis & Psarakis, 2008) to estimate the affine matrix and align the low-light image and its ground truth. We improve the alignment method by applying AdaIN (Huang & Belongie, 2017) before the affine matrix estimation to reduce the intensity gap. Finally, we hire annotators to check all paired images carefully and discard those that still exhibit misalignments.
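A sketch of this alignment step is given below (the grayscale ECC formulation and the mean/std matching are simplifications of the actual procedure; newer OpenCV releases may additionally require the inputMask and gaussFiltSize arguments of findTransformECC).

import cv2
import numpy as np

def align_pair(low, gt):
    # low, gt: uint8 RGB arrays of the same size; returns the warped low-light image.
    low_g = cv2.cvtColor(low, cv2.COLOR_RGB2GRAY).astype(np.float32)
    gt_g = cv2.cvtColor(gt, cv2.COLOR_RGB2GRAY).astype(np.float32)
    # AdaIN-style normalization: match the low-light statistics to the reference
    low_g = (low_g - low_g.mean()) / (low_g.std() + 1e-6) * gt_g.std() + gt_g.mean()
    warp = np.eye(2, 3, dtype=np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 200, 1e-6)
    _, warp = cv2.findTransformECC(gt_g, low_g, warp, cv2.MOTION_AFFINE, criteria)
    h, w = gt.shape[:2]
    return cv2.warpAffine(low, warp, (w, h),
                          flags=cv2.INTER_LINEAR + cv2.WARP_INVERSE_MAP)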
We split the UHD-LL dataset into two parts: 2,000 pairs for training and 150 pairs for testing. The training and test partitions are exclusive in their scenes and data. We also ensure consistency in pixel intensity distribution between the training and test splits. More analysis of this data, e.g., the pixel intensity and Signal-to-Noise Ratio (SNR) distributions, can be found in the Appendix.
A comparison between our UHD-LL dataset and existing paired low-light image datasets is presented in Table 1. The LOL dataset (two versions: LOL-v1: 500 images; LOL-v2: 789 images) is the most related to our UHD-LL dataset, as both focus on real low-light images with noise. LOL-v2 contains all images of LOL-v1. In contrast to the LOL dataset, our dataset features a more extensive collection, covering diverse darkness and noise levels across a rich variety of scenes. Moreover, the images in our dataset have higher resolutions than those in the LOL dataset. As shown in Figure 1, the models pre-trained on the LOL dataset cannot handle the cases in our UHD-LL dataset due to its insufficient training data, which are low-resolution and contain mostly mild noise. Different from the SID dataset, which focuses on RAW data, our dataset only studies data in the sRGB format. The images in the SID dataset are captured in extremely dark scenes, so its diversity of darkness levels and scenes is limited. When such RAW data with extremely low intensity are transformed into sRGB images, some information is truncated due to the bit-depth constraints of 8-bit sRGB images. In this case, it is challenging to train a network for effectively mapping noisy and low-light images to clear and normal-light images using these sRGB images as training data.
4 EXPERIMENTS
Implementation. We implement our method with PyTorch and train it on six NVIDIA Tesla V100 GPUs. We use the ADAM optimizer for network optimization. The learning rate is set to 0.0001, and a batch size of 6 is applied. We fix the channels of each Conv layer to 16, except for the Conv layers associated with outputs. We use a Conv layer with stride = 2 and 4×4 kernels to implement the 2× downsample operation in the encoder, and interpolation to implement the 2× upsample operation in the decoder of the LRNet. Unless otherwise stated, Conv layers use stride = 1 and 3×3 kernels. We use the training data in the UHD-LL dataset to train our model. Images are randomly cropped into patches of size 512 × 512 for training. Compared Methods. We include 14 state-of-the-art methods (21 models in total) for our benchmarking study and performance comparison. These methods include 12 light enhancement methods: NPE (TIP’13) (Wang et al., 2013), SRIE (CVPR’16) (Fu et al., 2016), DRBN (CVPR’20) (Yang et al., 2020a), Zero-DCE (CVPR’20) (Guo et al., 2020), Zero-DCE++ (TPAMI’21) (Li et al., 2021b), RUAS (CVPR’21) (Liu et al., 2021b), Zhao et al. (ICCV’21) (Zhao et al., 2021), Enlighten-
GAN (TIP’21) (Jiang et al., 2021), Afifi et al. (CVPR’21) (Afifi et al., 2021), SCI (CVPR’22) (Ma et al., 2022), SNR-Aware (CVPR’22) (Xu et al., 2022), URetinex-Net (CVPR’22) (Wu et al., 2022) and 2 Transformers: Uformer (CVPR’22) (Wang et al., 2022) and Restormer (CVPR’22) (Zamir et al., 2022). We use their released models and also retrain them using the same training data as our method. Note that some methods provide different models trained using different datasets. Due to the heavy models used in Restormer (Zamir et al., 2022) and SNR-Aware (Xu et al., 2022), we cannot infer the full-resolution results of both methods on UHD images, despite using a GPU with 48G memory. Following previous UHD study (Zheng et al., 2021), we resort to two strategies for this situation: (1) We downsample the input to the largest size that the model can handle and then resize the result to the original resolution, denoted by the subscript ‘resize’. (2) We split the input into four patches without overlapping and then stitch the result, denoted by the subscript ‘stitch’.
Evaluation Metrics. We employ the full-reference image quality assessment metrics PSNR, SSIM (Wang et al., 2004), and LPIPS (Alex version) (Zhang et al., 2018) to quantify the performance of different methods. We also adopt the non-reference image quality evaluator (NIQE) (Mittal et al., 2013) and the multi-scale image quality Transformer (MUSIQ) (trained on the KonIQ-10k dataset) (Ke et al., 2021) for assessing the restoration quality. We notice that the quantitative results reported by different papers diverge. For a fair comparison, we adopt the commonly-used IQA PyTorch Toolbox1 to compute the quantitative results of all compared methods. We also report the number of trainable parameters and the running time for processing UHD 4K data.
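For reference, the full-reference scores can be computed with this toolbox roughly as follows (assuming its create_metric registry and the metric names 'psnr', 'ssim', and 'lpips'; see the toolbox documentation for the exact interface).

import pyiqa
import torch

psnr, ssim, lpips = (pyiqa.create_metric(m) for m in ('psnr', 'ssim', 'lpips'))
pred = torch.rand(1, 3, 256, 256)    # enhanced result in [0, 1]
target = torch.rand(1, 3, 256, 256)  # ground truth in [0, 1]
print(psnr(pred, target).item(), ssim(pred, target).item(), lpips(pred, target).item())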
4.1 BENCHMARKING EXISTING MODELS
To validate the performance of existing LLIE methods that were trained using their original training data, we directly use the released models for evaluation on the UHD low-light images. These original training datasets include LOL (Wei et al., 2018), MIT-Adobe-FiveK (Bychkovsky et al., 2011), Exposure-Errors (Afifi et al., 2021), SICE (Cai et al., 2018), LSRW (Hai et al., 2024), and DarkFace (Yang et al., 2020b). EnlightenGAN uses the assemble training data from existing datasets (Wei et al., 2018; Dang-Nguyen et al., 2015; Kalantari & Ramamoorthi, 2017b; Cai et al., 2018). In addition, the LOL-v1 and LOL-v2 contain real low-light images while LOL-syn is a synthetic dataset. Due to the limited space, we only show relatively good results. As shown in Figure 6, all methods can improve the luminance of the input image. However, they fail to produce visually pleasing results. DRBN and EnlightenGAN introduce artifacts. RUAS-LOL and RUAS-DarkFace yield over-exposed results. Color deviation is observed in the results of EnlightenGAN and Afifi et al. All methods cannot handle the noise well and even amplify noise.
We also summarize the quantitative performance of different methods and verify the effectiveness of commonly used non-reference metrics for UHD low-light images in Table 2. URetinex-Net achieves the highest PSNR score, while SNR-Aware-LOLv1 is the best performer in terms of SSIM and LPIPS. For non-reference metrics, SCI-difficult, Zhao et al.-LOL, and RUAS-LOL are the winners under MUSIQ, NIQE, and NIMA, respectively. From Figure 6 and Table 2, we find that non-reference metrics designed for generic image quality assessment cannot accurately assess the subjective quality of the enhanced UHD low-light images. For example, RUAS-LOL suffers from obvious over-exposure in its result while being the best performer under the NIMA metric.
1https://github.com/chaofengc/IQA-PyTorch
In summary, the performance of existing released models is unsatisfactory when they are used to enhance the UHD low-light images. The darkness, noise, and artifacts still exist in the results. Compared with luminance enhancement, noise is the more significant challenge for these methods. No method can handle the noise issue well. The joint task of luminance enhancement and noise removal raises a new challenge for LLIE, especially under limited computational resources. We also observe a gap between visual results and the scores of non-reference metrics for UHD LLIE. The gap calls for more specialized non-reference metrics for UHD LLIE.
4.2 COMPARING RETRAINED MODELS
Besides the released models, we also retrain existing methods on our UHD-LL training data and compare their performance with our method. Due to the limited space, we only compare our method with several good performers. More results can be found in the Appendix. As shown in Figure 7, our UHDFour produces a clear and normal-light result close to the ground truth. In comparison, ZeroDCE++, RUAS, Afifi et al., SCI, and Restormer experience color deviations. Zero-DCE, ZeroDCE++, RUAS, Zhao et al., Afifi et al., and SCI cannot remove the noise due to the limitations of their network designs. These methods mainly focus on luminance enhancement. SNR-Aware, Uformer, and Restormer have strong modeling capability because of the use of Transformer structures. However, the three methods still leave noise on the results and introduce artifacts.
The quantitative comparison is presented in Table 3. Our UHDFour achieves state-of-the-art performance in terms of PSNR, SSIM, and LPIPS scores and outperforms the compared methods with a
large margin. The Transformer-based SNR-Aware and Restormer rank the second best. Our method has the fastest processing speed for UHD images as most computation is conducted in the LR space.
To further verify the effectiveness of our network, we compare our approach with several methods, including Retinex-Net Wei et al. (2018), Zero-DCE (Guo et al., 2020), AGLLNet (Lv et al., 2021), Zhao et al. (Zhao et al., 2021), RUAS (Liu et al., 2021b), SCI (Ma et al., 2022), and URetinex-Net (Wu et al., 2022), that were pre-trained or fine-tuned on the LOL-v1 and LOL-v2 datasets (Wei et al., 2018). Due to the mild noise and low-resolution images in
the LOL-v1 and LOL-v2 datasets, we change the 8× downsample and upsample operations to 2× and retrain our network. And such characteristics of LOL-v1 and LOL-v2 datasets prohibit us from showing the full potential of our method in removing noise and processing high-resolution images. Even though our goal is not to pursue state-of-the-art performance on the LOL-v1 and LOL-v2 datasets, our method achieves satisfactory performance as presented in Table 4. The visual results are provided in the Appendix.
4.3 ABLATION STUDY
We present ablation studies to demonstrate the effectiveness of the main components in our design. For the FouSpa Block, we remove the Fourier branch (FB) (#1), remove the Spatial branch (SB) (#2), and replace the FouSpa Block (i.e., without FB and SB) with a Residual Block of comparable parameters (#3). We also replace the FB with the SB (i.e., using two SBs) (#4). For the Adjustment Block, we remove the Amplitude Modulation (AM) (#5), remove the Phase Guidance (PG) (#6), remove the SB (#7), and remove both AM and PG (#8). We also replace the AM and PG with two SBs (#9), and
replace the Adjustment Block with the Residual Block of comparable parameters (#10). For the final output, we remove the concatenation of the LR normal-clear result (ŷ8), indicated as #11. We also replace all FB with SB, indicated as #12. Unless otherwise stated, all training settings remain unchanged as the implementation of full model, denoted as #13.
The quantitative comparison of the ablated models on the UHD-LL testing set is presented in Table 5. We also show the visual comparison of some ablated models in the Appendix. As shown, all the key designs contribute to the best performance of the full model. Without the Fourier branch (#1), the quantitative scores significantly drop. The result suggests that processing amplitude and phase separately improves the performance of luminance enhancement and noise removal. From the results of #2, the Spatial branch also boosts the performance. However, replacing the FouSpa Block with the Residual Block (#3) cannot achieve comparable performance with the full model (#13), indicating the effectiveness of the FouSpa Block. For the Adjustment Block, the Amplitude Modulation (#5), Phase Guidance (#6), and Spatial branch (#7) jointly verify its effectiveness. Such a block cannot be replaced by a Residual Block (#10). From the results of #11, we can see that it is necessary to estimate the LR result. In addition, replacing the Fourier branch with spatial branch (#4,#9,#12) cannot achieve comparable performance with the full model (#13), showing the efficacy of Fourier branch.
5 CONCLUSION
The success of our method is inspired by the characteristics of real low-light and noisy images in the Fourier domain. Thanks to the unique design of our network, which handles luminance and noise in the Fourier domain, it outperforms state-of-the-art methods in UHD LLIE with appealing efficiency. With the contribution of the first real UHD LLIE dataset, it becomes possible to compare existing methods on real UHD low-light images. Our experiments are limited to image enhancement; we have not provided data and benchmarks in the video domain. Our exploration has not considered adversarial losses due to memory constraints. Moreover, as our data is saved in sRGB format, the models trained on our data may fail in processing extreme cases, in which information is lost due to the limited bit depth. HDR data may be suitable for these cases. Nevertheless, we believe our method and the dataset can bring new opportunities and challenges to the community. The usefulness of Fourier operations may go beyond our work and see potential in areas like image decomposition and disentanglement. With improved efficiency, it may be adopted for applications that demand real-time response, e.g., enhancing the perception of autonomous vehicles in the dark.
6 ETHICS STATEMENT
This study focuses on low-light image enhancement and does not involve any ethics issues. The dataset proposed in this paper also does not involve privacy issues.
7 ACKNOWLEDGEMENT
This study is supported under the RIE2020 Industry Alignment Fund Industry Collaboration Projects (IAF-ICP) Funding Initiative, as well as cash and in-kind contribution from the industry partner(s). It is also partially supported by the NTU NAP grant. Chun-Le Guo is also supported by MindSpore, CANN, and Ascend AI Processor.
A RELATED WORK
Low-Light Image Enhancement Methods. The focus of our work is on deep learning-based LLIE methods (Li et al., 2021a; Liu et al., 2021a). Wang et al. (2019) proposed a network to enhance the underexposed photos by estimating an image-to-illumination mapping. EnlightenGAN (Jiang et al., 2021) proposed an attention-guided network to make the generated results indistinguishable from real normal-light images. By formulating light enhancement as a task of image-specific curve estimation that can be trained with non-reference losses, Zero-DCE (Guo et al., 2020) and Zero-DCE++ (Li et al., 2021b) obtain good brightness enhancement. Zhao et al. (2021) treated underexposed image enhancement as image feature transformation between the underexposed image and its paired enhanced version. Liu et al. (2021b) proposed a Retinex model-inspired unrolling method, in which the network structure is obtained by neural architecture search. Afifi et al. (2021) proposed a coarseto-fine network for exposure correction. Ma et al. (2022) proposed a self-calibrated illumination learning framework using unsupervised losses. Wu et al. (2022) combined the Retinex model with a deep unfolding network, which unfolds an optimization problem into a learnable network. Xu et al. (2022) proposed to exploit the Signal-to-Noise Ratio (SNR)-aware Transformer and convolutional models for LLIE. In this method, long-range attention is used for the low SNR regions while the short-range attention (convolutional layers) is for other regions.
Different from these works, our network takes the challenging joint luminance enhancement, noise removal, and high resolution constraint of UHD low-light images into account in the Fourier domain, endowing new insights on UHD LLIE and achieving better performance.
Low-Light Image Enhancement Datasets. LOL (Wei et al., 2018) dataset contains pairs of low/normal-light images saved in RGB format, in which the low-light images are collected by changing the exposure time and ISO. Due to the small size, it only covers a small fraction of the noise and darkness levels. MIT-Adobe FiveK (Bychkovsky et al., 2011) dataset includes paired low-/highquality images, where the high-quality images are retouched by five experts. The low-quality images are treated as low-light images in some LLIE methods. However, this dataset is originally collected for global tone adjustment, and thus, it ignores noise in its collection. Based on the MIT-Adobe FiveK dataset, a multi-exposure dataset, Exposure-Errors, is rendered to emulate a wide range of exposure errors (Afifi et al., 2021). Similar to the MIT-Adobe FiveK dataset, the Exposure-Errors dataset also neglects the noise issue. SID (Chen et al., 2018) is a RAW data dataset.
The images of the SID dataset have two different sensor patterns (i.e., the Bayer pattern and the APS-C X-Trans pattern). Due to the specific data patterns, the deep models trained on this dataset are not versatile, as they require RAW data with the same pattern as input. Besides, a long-exposure reference image corresponds to multiple short-exposure images, leading to limited scene diversity.
Unlike existing LLIE datasets that either omit noise, capture limited numbers of images, require specific sensor patterns, or exclude UHD images, we propose a real UHD LLIE dataset that contains low-noise/normal-clear image pairs with diverse darkness and noise levels captured in different scenarios. A comprehensive comparison is presented in Sec. 3.
Image Decomposition-based Enhancement. There are some image decomposition-based enhancement methods. For LLIE, Xu et al. (2020) proposed a frequency-based decomposition-and-enhancement model, which suppresses noise in the low-frequency layer and enhances details in the high-frequency layer. Yang et al. (2020a) proposed a band representation-based semi-supervised model. This method consists of two stages: recursive band learning and band recomposition. Wei et al. (2018) also decomposed a low-light image into an illumination component and a reflectance component according to the Retinex model and then separately enhanced the components. Image decomposition was also used in shadow removal, in which two networks are used to predict shadow parameters and a matte layer (Le & Samaras, 2019).
In addition, assuming that phase preserves high-level semantics while the amplitude contains low-level features, Guo et al. (2022) proposed FPNet, which consists of two stages for image de-raining: the first stage restores the amplitude of rainy images, and the second stage then refines the phase of the restored rainy images. To address the limitations of the ResBlock, which may overlook low-frequency information and fails to model long-distance information, Mao et al. (2021) proposed a Residual Fast Fourier Transform with Convolution Block for image deblurring. The block contains a spatial residual stream and an FFT stream. Pham et al. (2021) proposed a complex-valued neural network with Fourier transform for image denoising. The network first converts the noisy image to complex values via the Fourier transform, then estimates a complex filter that is applied to the converted noisy image to approach the complex values of the ground-truth image.
Although these methods decompose an image or use Fourier transform in the networks, our design has different motivations. Our motivations are inspired by the uniqueness of the Fourier domain for UHD low-light image enhancement as presented in Figure 2, i.e., luminance and noise can be ‘decomposed’ to a certain extent in the Fourier domain and HR image and its LR versions share similar amplitude patterns. We also design the specific blocks to use these characteristics and solve the high-resolution constraint issue, which have not been explored in previous works.
HDR Imaging. Some high dynamic range (HDR) reconstruction works (Salih et al., 2012; Wang & Yoon, 2021) are related to our low-light image enhancement. HDR reconstruction can likewise be grouped into multi-image reconstruction and single-image reconstruction. Multi-image methods require fusing multiple bracketed-exposure low dynamic range (LDR) images. To mitigate artifacts caused by image fusion, several techniques (Srikantha & Sidibe, 2012) have been proposed. For single-image methods, deep learning has achieved impressive performance. In addition to learning end-to-end LDR-to-HDR networks (Kalantari & Ramamoorthi, 2017a; Wu et al., 2018; Yang et al., 2018; Zhang & Lalonde, 2017; Hu et al., 2022b; Eilertsen et al., 2017), some methods either synthesize multiple LDR images with different exposures (Endo et al., 2017) or model the inverse process of the image formation pipeline (Liu et al., 2020). Besides, some works also focus on joint HDR reconstruction and denoising. For example, Hu et al. (2022a) propose a joint multi-scale denoising and tone-mapping framework, which prevents the tone-mapping operator from overwhelming the denoising operator. Chen et al. (2021b) take noise and quantization into consideration when designing the HDR reconstruction network.
B ANALYSIS OF UHD-LL DATASET
We first show the intensity histograms and the SNR distribution of our UHD-LL dataset in Figure 8. The SNR is computed using the same algorithm as the recent LLIE method (Xu et al., 2022). As shown in Figure 8(a) and Figure 8(b), when splitting the training and testing sets, we keep the pixel intensity distributions of the two sets consistent to ensure a reasonable dataset split. We also plot the SNR distribution of the dataset to show the noise levels in Figure 8(c). The SNR distribution indicates the wide and challenging range of SNRs in our dataset. We show more samples of our UHD-LL data, displayed at a larger resolution, in Figure 9.
C FURTHER ANALYSIS OF MOTIVATION
Recall that in Sec. 2.1, we discussed two observations that serve as the motivation to design our network. In particular, (a) Swapping the amplitude of a low-noise image with that of its corresponding normal-clear image produces a normal-noise image and a low-clear image, and (b) The amplitude patterns of an HR normal-clear image and its LR versions are similar and are different from the corresponding HR low-noise counterpart.
We first show more motivation cases in Figures 10, 11, and 12. These visual results suggest the same tendency as the motivations shown in the main paper.
[Figure panels (cf. Figures 10-12): (a) amplitude and phase of the real low-noise and normal-clear images and of the compositional normal-noise and low-clear images obtained by swapping amplitudes; (b) amplitude of the normal-clear image at full resolution and at 4X / 8X downsampled scales, compared with the low-noise counterpart at full resolution.]
To further analyze our first motivation, we compare the luminance and noise of the real normal-clear and low-noise images and the compositional low-clear and normal-noise images. To compare luminance similarity, we compute the average luminance. For real noise level measurement, there is no dedicated metric. Thus, we use the recent multi-scale Transformer MUSIQ (Ke et al., 2021) for image quality assessment. MUSIQ is not sensitive to luminance changes. Moreover, it can be used to measure the noise level, as its training dataset contains noisy data and it shows state-of-the-art performance for assessing the quality of natural images. A large MUSIQ value reflects better image quality with less noise and fewer artifacts. We select 50 images from the UHD-LL dataset and compare the average scores. We present the quantitative results in Table 6.
As presented in Table 6, the real normal-clear images have luminance values similar to those of the compositional normal-noise images, while they have high MUSIQ values similar to those of the compositional low-clear images. Similarly, the real low-noise images have luminance values similar to those of the compositional low-clear images, while they have low MUSIQ values similar to those of the compositional normal-noise images. The results further suggest that luminance and noise can be decomposed to a certain extent in the Fourier domain. Specifically, luminance mainly manifests in the amplitude, while noise is closely related to the phase.
For the second motivation, it is difficult to quantify the similarity of amplitude spectra of different sizes, as they cannot be directly interpolated, so full-reference metrics cannot be used in this situation. Hence, we show more visual examples in Figure 10(b), Figure 11(b), and Figure 12(b). The extra examples support our motivation.
D VISUALIZATION IN THE NETWORK
We show the changes of amplitude and phase in our proposed UHDFour network in Figure 13. As shown, the amplitude and phase of our final result are similar to those of the ground truth. Moreover, the amplitude and phase of the low-resolution output ŷ8 are also similar to those of its corresponding ground truth y8. We wish to emphasize that although noise is related to phase, it cannot be explicitly visualized as a phase image, since phase only encodes the initial position of each frequency component; only the combination of amplitude and phase expresses a complete image. Moreover, in the feature domain, such relevance is even more difficult to present in an imagery format. Thus, we suggest focusing on the similarity between the final result and the ground truth, rather than on the intermediate features and phase.
E MORE RESULTS ON RELEASED MODELS
We present more comparisons among state-of-the-art methods for restoring UHD low-light images in Figures 14, 15, and 16. This is similar to Figure 6 of the main paper, where we compare methods using their original released models. As shown, all existing models cannot handle UHD low-light images well. Since we cannot infer the full-resolution results of SNR-Aware (Xu et al., 2022) on UHD images, despite using a GPU with 48G memory, some obvious borders appear in its results due to the stitching strategy. This phenomenon also indicates that the stitching strategy commonly used in previous UHD data processing methods is inapplicable to the challenging UHD low-light image enhancement task.
F MORE RESULTS ON RETRAINED MODELS ON UHD-LL
We provide more visual comparisons of our method with retrained state-of-the-art methods on the UHD-LL dataset in Figures 17 and 18.
As the results show, the models retrained on the UHD-LL dataset still cannot achieve satisfactory results for UHD low-light image enhancement. Noise and artifacts can still be found in their results. The results suggest that joint luminance enhancement and noise removal in the spatial domain is difficult. Our solution effectively handles this challenging problem by embedding the Fourier transform in a cascaded network, in which luminance and noise can be decomposed to a certain extent and are processed separately.
G MORE RESULTS ON RETRAINED MODELS ON LOL-V1 AND LOL-V2 DATASETS
We also provide more visual comparisons of our method with the models that were pre-trained or fine-tuned on the LOL-v1 and LOL-v2 datasets in Figures 19, 20, and 21.
For the low-light images in the LOL-v1 and LOL-v2 datasets, even though the mild noise and low-resolution images prevent us from showing the full potential of our method in removing noise and processing high-resolution images, our method still achieves satisfactory performance. The results suggest the potential of our solution in different circumstances.
H ABLATION STUDY
We show some visual results of the ablated models in Figure 22. Without the Fourier branch (#1), the ablated model cannot effectively enhance luminance and remove noise. Although the result of #2 looks better than that of #1, removing the Spatial branch (#2) also degrades the final result. Directly replacing the FouSpa Block with a Residual Block of comparable parameters (#3) cannot obtain a satisfactory result, suggesting that the good performance of the FouSpa Block is not due to the use of more parameters. Removing the Amplitude Modulation (#5) yields a visually unpleasant result. The Phase Guidance (#6) and Spatial branch (#7) in the Adjustment Block also contribute to the good performance of the full model. Directly replacing the Adjustment Block with a Residual Block of comparable parameters (#10) still cannot obtain a satisfactory result. Introducing the estimation of the low-resolution result (removed in #11) also leads to a clearer result. The visual comparisons further show the significance of the proposed FouSpa Block and Adjustment Block in dealing with the intricate issue of joint luminance enhancement and noise removal while remaining efficient.
Additionally, we list the computational and time costs of the FFT/IFFT operations for processing different scales of features. In our network, we fix the feature channels to 16 and use three different scales: 8×, 16×, and 32× downsampled from the original resolution (i.e., 3840 × 2160). The results are presented in Table 7. The FFT and IFFT operations have the same computational cost. The difference in running time may be due to the different optimization strategies used in PyTorch. | 1. What is the focus of the paper regarding UHD enhancement?
2. What are the strengths of the proposed approach, particularly in terms of feature modeling and dataset creation?
3. What are the weaknesses of the paper, especially regarding the confusion with Table 2 and the absence of amplitude and phase illustrations?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper proposes the UHDFour solution for UHD enhancement. The paper introduces a new dataset of 4K UHD images, called UHD-LL. It also designs a module that utilizes the frequency domain.
Strengths And Weaknesses
Strength:
This paper is well written and easy to understand.
The performance is great.
The novelty is sufficient for ICLR: the feature modeling in the frequency domain and the dataset.
Weakness.
I'm a little confused about Table 2. Why is the performance of the proposed method not shown in Table 2?
I suggest the authors also show the amplitude and phase in the network, so people can see how the network improves the amplitude and phase.
Clarity, Quality, Novelty And Reproducibility
I think the novelty of this paper is enough and the paper is complete and easy to understand. |
ICLR | Title
Embedding Fourier for Ultra-High-Definition Low-Light Image Enhancement
Abstract
Ultra-High-Definition (UHD) photo has gradually become the standard configuration in advanced imaging devices. The new standard unveils many issues in existing approaches for low-light image enhancement (LLIE), especially in dealing with the intricate issue of joint luminance enhancement and noise removal while remaining efficient. Unlike existing methods that address the problem in the spatial domain, we propose a new solution, UHDFour, that embeds Fourier transform into a cascaded network. Our approach is motivated by a few unique characteristics in the Fourier domain: 1) most luminance information concentrates on amplitudes while noise is closely related to phases, and 2) a high-resolution image and its low-resolution version share similar amplitude patterns. Through embedding Fourier into our network, the amplitude and phase of a low-light image are separately processed to avoid amplifying noise when enhancing luminance. Besides, UHDFour is scalable to UHD images by implementing amplitude and phase enhancement under the low-resolution regime and then adjusting the high-resolution scale with few computations. We also contribute the first real UHD LLIE dataset, UHD-LL, that contains 2,150 low-noise/normal-clear 4K image pairs with diverse darkness and noise levels captured in different scenarios. With this dataset, we systematically analyze the performance of existing LLIE methods for processing UHD images and demonstrate the advantage of our solution. We believe our new framework, coupled with the dataset, would push the frontier of LLIE towards UHD. The code and dataset are available at https://lichongyi.github.io/UHDFour/.
1 INTRODUCTION
With the advent of advanced imaging sensors and displays, Ultra-High-Definition (UHD) imaging has witnessed rapid development in recent years. While UHD imaging offers broad applications and makes a significant difference in picture quality, the extra pixels also challenge the efficiency of existing image processing algorithms.
In this study, we focus on one of the most challenging tasks in image restoration, namely low-light image enhancement (LLIE), where one needs to jointly enhance the luminance and remove inherent noises caused by sensors and dim environments. Further to these challenges, we lift the difficulty by demanding efficient processing in the UHD regime.
Despite the remarkable progress in low-light image enhancement (LLIE) (Li et al., 2021a), existing methods (Zhao et al., 2021; Wu et al., 2022; Xu et al., 2022), as shown in Figure 1, show apparent drawbacks when they are used to process real-world UHD low-light images. This is because (1) most methods (Guo et al., 2020; Liu et al., 2021b; Ma et al., 2022) only focus on luminance enhancement and fail in removing noise; (2) some approaches (Wu et al., 2022; Xu et al., 2022) simultaneously enhance luminance and remove noise in the spatial domain, resulting in the suboptimal enhancement; and (3) existing methods (Wei et al., 2018; Zhao et al., 2021; Wu et al., 2022; Xu et al., 2022) are mainly trained on low-resolution (LR) data, leading to the incompatibility with high-resolution (HR) inputs; and (4) some studies (Xu et al., 2022; Zamir et al., 2022) adopt heavy structures, thus being inefficient for processing UHD images. More discussion on related work is provided in the Appendix.
To overcome the challenges aforementioned, we present a new idea for performing LLIE in the Fourier Domain. Our approach differs significantly from existing solutions that process images in the spatial domain. In particular, our method, named as UHDFour, is motivated by our observation of two interesting phenomena in the Fourier domain of low-light noisy images: i) luminance and noise can be decomposed to a certain extent in the Fourier domain. Specifically, luminance would manifest as amplitude while noise is closely related to phase, and ii) the amplitude patterns of images of different resolutions are similar. These observations inspire the design of our network, which handles luminance and noise separately in the Fourier domain. This design is advantageous as it avoids amplifying noise when enhancing luminance, a common issue encountered in existing spatial domain-based methods. In addition, the fact that amplitude patterns of images of different resolutions are similar motivates us to save computation by first processing in the low-resolution regime and performing essential adjustments only in the high-resolution scale.
We also contribute the first benchmark for UHD LLIE. The dataset, named UHD-LL, contains 2,150 low-noise/normal-clear 4K UHD image pairs with diverse darkness and noise levels captured in different scenarios. Unlike existing datasets (Wei et al., 2018; Lv et al., 2021; Bychkovsky et al., 2011) that either synthesize or retouch low-light images to obtain the paired input and target sets, we capture real image pairs. During data acquisition, special care is taken to minimize geometric and photometric misalignment due to camera shake and dynamic environments. With the new UHD-LL dataset, we design a series of qualitative and quantitative benchmarks to analyze the performance of existing LLIE methods and demonstrate the effectiveness of our method.
Our contributions are summarized as follows: (1) We propose a new solution for UHD LLIE that is inspired by unique characteristics observed in the Fourier domain. In comparison to existing LLIE methods, the proposed framework shows exceptional effectiveness and efficiency in addressing the joint task of luminance enhancement and noise removal in the UHD regime. (2) We contribute the first UHD LLIE dataset, which contains 2,150 pairs of 4K UHD low-noise/normal-clear data, covering diverse noise and darkness levels and scenes. (3) We conduct a systematical analysis of existing LLIE methods on UHD data.
2 OUR APPROACH
In this section, we first discuss our observations in analyzing low-light images in the Fourier domain, and then present the proposed solution.
2.1 OBSERVATIONS IN FOURIER DOMAIN
Here we provide more details to supplement the observations we highlighted in Sec. 1. We analyze real UHD low-light images in the Fourier domain and provide a concise illustration in Figure 2. Specifically, (a) Swapping the amplitude of a low-light and noisy (low-noise) image with that of its corresponding normal-light and clear (normal-clear) image produces a normal-light and noisy (normal-noise) image and a low-light and clear (low-clear) image. We show more examples in the Appendix. The result suggests that the luminance and noise can be decomposed to a certain extent in the Fourier domain. In particular, most luminance information is expressed as amplitudes, and
noises are revealed in phases. This inspires us to process luminance and noise separately in the Fourier domain. (b) The amplitude patterns of an HR normal-clear image and its LR versions are similar and are different from the corresponding HR low-noise counterpart. Such a characteristic offers us the possibility to first enhance the amplitude of an LR scale with more computations and then only make minor adjustments in the HR scale. In this way, most computations can be conducted in the LR space, reducing the computational complexity.
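For illustration, the amplitude-swap experiment described above can be reproduced with a few lines of PyTorch. This is a minimal sketch, not the authors' code: it assumes both images are float tensors of shape (C, H, W) with values in [0, 1].

```python
import torch

def swap_amplitude(low_noise: torch.Tensor, normal_clear: torch.Tensor):
    """Swap Fourier amplitudes between two same-sized images.

    Returns the two compositional images (normal-noise, low-clear)
    discussed in Sec. 2.1.
    """
    fft_low = torch.fft.fft2(low_noise)
    fft_normal = torch.fft.fft2(normal_clear)

    amp_low, pha_low = torch.abs(fft_low), torch.angle(fft_low)
    amp_normal, pha_normal = torch.abs(fft_normal), torch.angle(fft_normal)

    # Normal-light amplitude + low-light phase -> normal-light but noisy image.
    normal_noise = torch.fft.ifft2(amp_normal * torch.exp(1j * pha_low)).real
    # Low-light amplitude + normal-light phase -> dark but clean image.
    low_clear = torch.fft.ifft2(amp_low * torch.exp(1j * pha_normal)).real
    return normal_noise.clamp(0, 1), low_clear.clamp(0, 1)
```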
2.2 THE UHDFOUR NETWORK
[Figure: structures of the FouSpa Block (a) and the Adjustment Block (b). Each contains a Fourier branch (FFT, Conv, IFFT operating on the amplitude A and phase P) in parallel with a spatial branch (IN unit); the Adjustment Block additionally uses amplitude modulation and phase guidance.]
Overview. UHDFour aims to map an UHD low-noise input image x ∈ RH×W×C to its corresponding normal-clear version y ∈ RH×W×C , where H , W , and C represent height, width, and channel, respectively. Figure 3 shows the overview of UHDFour. It consists of an LRNet and an HRNet.
Motivated by the observation in Sec. 2.1, LRNet takes the most computation of the whole network. Its input is first embedded into the feature domain by a Conv layer. To reduce computational complexity, we downsample the features to 1/8 of the original resolution by bilinear interpolation. Then, the LR features go through an encoder-decoder network, which contains four FouSpa Blocks with two 2× downsample and two 2× upsample operations, obtaining output features. The output features are respectively fed to FFT to obtain the refined amplitude Ar and phase Pr features and a Conv layer to estimate the LR normal-clear image ŷ8 ∈ RH/8×W/8×C. The outputs of LRNet coupled with the input are fed to the HRNet. Specifically, the input x is first reshaped to xpu ∈ RH/8×W/8×64C via PixelUnshuffle (8× ↓) to preserve original information, and then fed to an Adjustment Block. With the refined amplitude Ar and phase Pr features, the Adjustment Block produces adjusted features that are reshaped to the original height and width of input x via PixelShuffle (8× ↑). Finally, we resize the estimated LR normal-clear image ŷ8 to the original size of input x via bilinear interpolation and combine it with the upsampled features to estimate the final HR normal-clear image ŷ. We detail the key components as follows.
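The reshaping between the HR and LR scales can be made concrete with a small shape check. The snippet below is only a sketch of the tensor bookkeeping implied by the description; the 8× factor and bilinear downsampling follow the text, while the dummy input size is illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

x = torch.rand(1, 3, 2160, 3840)   # dummy UHD RGB input

# HRNet path: PixelUnshuffle keeps every pixel by folding 8x8 neighbourhoods
# into channels, giving x_pu of shape (1, 3*64, 270, 480).
x_pu = nn.PixelUnshuffle(8)(x)

# LRNet path: a bilinearly downsampled copy at 1/8 resolution (the paper
# downsamples features after a Conv layer; the raw image is used here
# only to keep the sketch short).
x_lr = F.interpolate(x, scale_factor=1 / 8, mode='bilinear', align_corners=False)

# After the Adjustment Block, PixelShuffle restores the original HR grid.
x_ps = nn.PixelShuffle(8)(x_pu)

print(x_pu.shape, x_lr.shape, x_ps.shape)
# torch.Size([1, 192, 270, 480]) torch.Size([1, 3, 270, 480]) torch.Size([1, 3, 2160, 3840])
```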
FouSpa Block. In Sec. 2.1, we observe that luminance and noise can be decomposed in the Fourier domain. Hence, we design the FouSpa Block to parallelly implement amplitude and phase enhancement in the Fourier domain and feature enhancement in the spatial domain. As shown in Figure 4(a), the input features are forked into the Fourier and Spatial branches. In the Fourier branch, FFT is first used to obtain the amplitude component (A) and phase component (P ). The two components are separately fed to two Conv layers with 1×1 kernel. Note that when processing amplitude and phase, we only use 1×1 kernel to avoid damaging the structure information. Then, we transform them back to the spatial domain via IFFT and concatenate them with the spatial features enhanced by a Half Instance Normalization (HIN) unit (Chen et al., 2021a). We adopt the HIN unit based on its efficiency. The concatenated features are further fed to a Conv layer and then combined with the input features in a residual manner. Although our main motivations are in the Fourier domain, the use of the spatial branch is necessary. This is because the spatial branch and Fourier branch are complementary. The spatial branch adopts convolution operations that can model the structure dependency well in spatial domain. The Fourier branch can attend global information and benefit the disentanglement of energy and degradation.
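As a rough sketch of how such a block might look in PyTorch, the module below follows the description above: a Fourier branch applying 1×1 convolutions to amplitude and phase, a spatial branch, and a fused residual output. The channel count, the simplified stand-in for the HIN unit, and the exact fusion wiring are assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn

class FouSpaBlock(nn.Module):
    """FouSpa-style block: parallel Fourier and spatial branches with a residual."""

    def __init__(self, channels: int = 16):
        super().__init__()
        # 1x1 convolutions on amplitude and phase, as described in the text.
        self.amp_conv = nn.Conv2d(channels, channels, kernel_size=1)
        self.pha_conv = nn.Conv2d(channels, channels, kernel_size=1)
        # Simplified stand-in for the Half Instance Normalization (HIN) unit.
        self.spatial = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.InstanceNorm2d(channels),
            nn.LeakyReLU(0.2, inplace=True),
        )
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Fourier branch: enhance amplitude and phase separately, then recombine.
        freq = torch.fft.fft2(x, norm='ortho')
        amp = self.amp_conv(torch.abs(freq))
        pha = self.pha_conv(torch.angle(freq))
        fourier = torch.fft.ifft2(amp * torch.exp(1j * pha), norm='ortho').real
        # Spatial branch: ordinary convolutional enhancement.
        spatial = self.spatial(x)
        # Concatenate both branches, fuse with a 1x1 conv, and add the residual.
        return self.fuse(torch.cat([fourier, spatial], dim=1)) + x
```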
Adjustment Block. The Adjustment Block is the main structure of the HRNet, and it is lightweight. As shown in Figure 4(b), the Adjustment Block shares a similar structure with the FouSpa Block. Differently, in the Fourier branch, with the refined amplitude Ar features obtained from the LRNet, we use Spatial Feature Transform (SFT) (Wang et al., 2018) to modulate the amplitude features of the input xpu via simple affine transformation. Such a transformation or adjustment is possible because the luminance, as global information, manifests as amplitude components, and the amplitude patterns of an HR scale and its LR scales are similar (as discussed in Sec. 2.1). Note that we cannot modulate the phase because of its periodicity. Besides, we do not find an explicit relationship between the HR scale’s phase and its LR scales. However, we empirically find that concatenating the refined phase Pr features achieved from the LRNet with the phase features of the input xpu improves the final performance. We thus apply such concatenation in our solution.
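A minimal sketch of the SFT-style amplitude modulation used here is given below; the scale/shift generators and the assumption that the refined amplitude features A_r have been brought to the same spatial size as the HR amplitude features are ours, not details from the paper.

```python
import torch
import torch.nn as nn

class AmplitudeSFT(nn.Module):
    """SFT-style modulation: refined LR amplitude features generate a per-pixel
    scale and shift that adjust the amplitude features of the HR input."""

    def __init__(self, channels: int = 16):
        super().__init__()
        self.to_scale = nn.Conv2d(channels, channels, kernel_size=1)
        self.to_shift = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, amp_hr: torch.Tensor, amp_refined: torch.Tensor) -> torch.Tensor:
        # amp_hr: amplitude features of the PixelUnshuffled HR input x_pu.
        # amp_refined: refined amplitude features A_r from LRNet (same spatial size).
        scale = self.to_scale(amp_refined)
        shift = self.to_shift(amp_refined)
        return amp_hr * scale + shift   # simple affine transformation
```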
Losses. We use l1 to supervise ŷ8 and ŷ. We also add perceptual loss to supervise ŷ8 while the use of perceptual loss on ŷ is impracticable because of its high resolution. Instead, we add SSIM loss Lssim on ŷ. The final loss L is the combination of these losses:
L = \|\hat{y} - y\|_{1} + 0.0004 \times L_{ssim}(\hat{y}, y) + 0.1 \times \|\hat{y}_{8} - y_{8}\|_{1} + 0.0002 \times \|\mathrm{VGG}(\hat{y}_{8}) - \mathrm{VGG}(y_{8})\|_{2}, \quad (1)
where y is the ground truth, y8 is the 8× downsampled version of y, and VGG is the pre-trained VGG19 network, in which we use four scales to supervise training (Zhou et al., 2022).
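A sketch of Eq. (1) as a training loss is shown below. It assumes images in [0, 1], uses the pytorch_msssim package for the SSIM term (taken as 1 − SSIM, an assumption), a single VGG19 feature scale instead of the four scales used in the paper, and mean squared error for the perceptual term; none of these choices are claimed to match the authors' exact implementation.

```python
import torch
import torch.nn.functional as F
from pytorch_msssim import ssim              # assumed SSIM implementation
from torchvision.models import vgg19

vgg = vgg19(weights="DEFAULT").features.eval()
for p in vgg.parameters():
    p.requires_grad_(False)

def vgg_feat(x: torch.Tensor) -> torch.Tensor:
    # Single-scale VGG19 features as a simplified perceptual representation.
    return vgg[:16](x)

def total_loss(y_hat, y, y_hat8, y8):
    l1_hr = F.l1_loss(y_hat, y)                          # HR l1 term
    ssim_term = 1.0 - ssim(y_hat, y, data_range=1.0)     # SSIM used as a loss
    l1_lr = F.l1_loss(y_hat8, y8)                        # LR l1 term
    perceptual = F.mse_loss(vgg_feat(y_hat8), vgg_feat(y8))
    return l1_hr + 0.0004 * ssim_term + 0.1 * l1_lr + 0.0002 * perceptual
```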
3 UHD-LL DATASET
We collect a real low-noise/normal-clear paired image dataset that contains 2,150 pairs of 4K UHD data saved in 8bit sRGB format. Several samples are shown in Figure 5.
Images are collected from a camera mounted on a tripod to ensure stability. Two cameras, i.e., a Sony α7 III camera and a Sony Alpha a6300 camera, are used to offer diversity. The ground truth (or normal-clear) image is captured with a small ISO ∈ [100, 800] in a bright scene (indoor or outdoor). The corresponding low-noise image is acquired by increasing the ISO ∈ [1000, 20000] and reducing the exposure time. Due to the constraints of exposure gears in the cameras, shooting in the large ISO range may produce bright images, which opposes the purpose of capturing low-light and noisy
images. Thus, in some cases, we put a neutral-density (ND) filter with different ratios on the camera lens to capture low-noise images. In this way, we can increase the ISO to generate heavier noises and simultaneously obtain extremely dark images, enriching the diversity of darkness and noise levels.
The main challenge of collecting paired data is to reduce misalignment caused by camera shakes and dynamic objects. We take several measures to ameliorate the issue. Apart from using a tripod, we also use remote control software (Imaging Edge) to adjust the exposure time and ISO value to avoid any physical contact with the camera. To further reduce subtle misalignments, we adopt an image alignment algorithm (Evangelidis & Psarakis, 2008) to estimate the affine matrix and align the low-light image and its ground truth. We improve the alignment method by applying AdaIN (Huang & Belongie, 2017) before the affine matrix estimation to reduce the intensity gap. Finally, we hire annotators to check all paired images carefully and discard those that still exhibit misalignments.
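For reference, a rough sketch of this alignment step is given below. It assumes OpenCV's ECC implementation (cv2.findTransformECC) as a stand-in for Evangelidis & Psarakis (2008) and a simple channel-wise mean/std transfer as the AdaIN step; iteration counts, the grayscale conversion, and other settings are illustrative choices rather than the authors' configuration.

```python
import cv2
import numpy as np

def adain(source: np.ndarray, target: np.ndarray) -> np.ndarray:
    """Match channel-wise mean/std of `source` to `target` to reduce the
    intensity gap before affine estimation (AdaIN-style transfer)."""
    s_mean, s_std = source.mean(axis=(0, 1)), source.std(axis=(0, 1)) + 1e-6
    t_mean, t_std = target.mean(axis=(0, 1)), target.std(axis=(0, 1)) + 1e-6
    return (source - s_mean) / s_std * t_std + t_mean

def align_pair(low: np.ndarray, gt: np.ndarray) -> np.ndarray:
    """Estimate an affine warp with ECC and align the low-light image to the GT.
    In practice the estimation could be run on downsampled copies for speed."""
    low_n = adain(low.astype(np.float32), gt.astype(np.float32))
    low_gray = cv2.cvtColor(low_n, cv2.COLOR_BGR2GRAY)
    gt_gray = cv2.cvtColor(gt.astype(np.float32), cv2.COLOR_BGR2GRAY)

    warp = np.eye(2, 3, dtype=np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 200, 1e-6)
    _, warp = cv2.findTransformECC(gt_gray, low_gray, warp,
                                   cv2.MOTION_AFFINE, criteria, None, 5)
    h, w = gt.shape[:2]
    return cv2.warpAffine(low, warp, (w, h), flags=cv2.INTER_LINEAR)
```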
We split the UHD-LL dataset into two parts: 2,000 pairs for training and 115 pairs for testing. The training and test partitions are exclusive in their scenes and data. We also ensure consistency in pixel intensity distribution between the training and test splits. More analysis of this data, e.g., the pixel intensity and Signal-to-Noise Ratio (SNR) distributions, can be found in the Appendix.
A comparison between our UHD-LL dataset and existing paired low-light image datasets is presented in Table 1. The LOL dataset (two versions: LOL-v1: 500 images; LOL-v2: 789 images) is most related to our UHD-LL dataset as both focus on real low-light images with noise. The LOL-v2 contains all images of the LOL-v1. In contrast to the LOL dataset, our dataset features a more extensive collection, where diverse darkness and noise levels from rich types of scenes are considered. Moreover, the images of our dataset have higher resolutions than those from the LOL dataset. As shown in Figure 1, the models pre-trained on the LOL dataset cannot handle the cases in our UHD-LL dataset due to its insufficient training data, which is low-resolution and contains mostly mild noise. Different from the SID dataset that focuses on RAW data, our dataset only studies data in RGB format. The images in the SID dataset are captured in extremely dark scenes. Its diversity of darkness levels and scenes is limited. When these RAW data with extremely low intensity are transformed into sRGB images, some information would be truncated due to the bit depth constraints of 8-bit sRGB images. In this case, it is challenging to train a network for effectively mapping noisy and low-light images to clear and normal-light images using these sRGB images as training data.
4 EXPERIMENTS
Implementation. We implement our method with PyTorch and train it on six NVIDIA Tesla V100 GPUs. We use an ADAM optimizer for network optimization. The learning rate is set to 0.0001. A batch size of 6 is applied. We fix the channels of each Conv layer to 16, except for the Conv layers associated with outputs. We use the Conv layer with stride = 2 and 4×4 kernels to implement the 2× downsample operation in the encoder and interpolation to implement the 2× upsample operation in the decoder in the LRNet. Unless otherwise stated, the Conv layer uses stride = 1 and 3×3 kernels. We use the training data in the UHD-LL dataset to train our model. Images are randomly cropped into patches of size 512 × 512 for training. Compared Methods. We include 14 state-of-the-art methods (21 models in total) for our benchmarking study and performance comparison. These methods include 12 light enhancement methods: NPE (TIP’13) (Wang et al., 2013), SRIE (CVPR’16) (Fu et al., 2016), DRBN (CVPR’20) (Yang et al., 2020a), Zero-DCE (CVPR’20) (Guo et al., 2020), Zero-DCE++ (TPAMI’21) (Li et al., 2021b), RUAS (CVPR’21) (Liu et al., 2021b), Zhao et al. (ICCV’21) (Zhao et al., 2021), EnlightenGAN (TIP’21) (Jiang et al., 2021), Afifi et al. (CVPR’21) (Afifi et al., 2021), SCI (CVPR’22) (Ma et al., 2022), SNR-Aware (CVPR’22) (Xu et al., 2022), URetinex-Net (CVPR’22) (Wu et al., 2022), and 2 Transformers: Uformer (CVPR’22) (Wang et al., 2022) and Restormer (CVPR’22) (Zamir et al., 2022). We use their released models and also retrain them using the same training data as our method. Note that some methods provide different models trained using different datasets. Due to the heavy models used in Restormer (Zamir et al., 2022) and SNR-Aware (Xu et al., 2022), we cannot infer the full-resolution results of both methods on UHD images, despite using a GPU with 48G memory. Following a previous UHD study (Zheng et al., 2021), we resort to two strategies for this situation: (1) We downsample the input to the largest size that the model can handle and then resize the result to the original resolution, denoted by the subscript ‘resize’. (2) We split the input into four patches without overlapping and then stitch the result, denoted by the subscript ‘stitch’.
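The two fallback strategies can be summarized with the following sketch; the maximum side length is an illustrative parameter, not a value from the paper.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def infer_resize(model, x: torch.Tensor, max_side: int = 1024) -> torch.Tensor:
    """'resize' strategy: downsample to the largest size the model can handle,
    enhance, then resize the output back to the original resolution."""
    _, _, h, w = x.shape
    scale = max_side / max(h, w)
    x_small = F.interpolate(x, scale_factor=scale, mode='bilinear', align_corners=False)
    y_small = model(x_small)
    return F.interpolate(y_small, size=(h, w), mode='bilinear', align_corners=False)

@torch.no_grad()
def infer_stitch(model, x: torch.Tensor) -> torch.Tensor:
    """'stitch' strategy: split the input into four non-overlapping patches,
    enhance each patch, and stitch the outputs back together."""
    _, _, h, w = x.shape
    hh, hw = h // 2, w // 2
    out = torch.zeros_like(x)
    for i in (0, 1):
        for j in (0, 1):
            patch = x[:, :, i * hh:(i + 1) * hh, j * hw:(j + 1) * hw]
            out[:, :, i * hh:(i + 1) * hh, j * hw:(j + 1) * hw] = model(patch)
    return out
```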
Evaluation Metrics. We employ full-reference image quality assessment metrics PSNR, SSIM (Wang et al., 2004), and LPIPS (Alex version) (Zhang et al., 2018) to quantify the performance of different methods. We also adopt the non-reference image quality evaluator (NIQE) (Mittal et al., 2013) and the multi-scale image quality Transformer (MUSIQ) (trained on KonIQ-10k dataset) (Ke et al., 2021) for assessing the restoration quality. We notice that the quantitative results reported by different papers diverge. For a fair comparison, we adopt the commonly-used IQA PyTorch Toolbox1 to compute the quantitative results of all compared methods. We also test the trainable parameters and running time for processing UHD 4K data.
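For reproducibility, the full-reference scores can be computed along the lines of the sketch below. It uses scikit-image and the lpips package rather than the IQA PyTorch Toolbox adopted in the paper, so exact values may differ slightly; the channel_axis argument assumes a recent scikit-image version.

```python
import numpy as np
import torch
import lpips                                  # pip install lpips
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

lpips_alex = lpips.LPIPS(net='alex')          # AlexNet-based LPIPS

def evaluate_pair(pred: np.ndarray, gt: np.ndarray) -> dict:
    """Full-reference scores for one enhanced/ground-truth pair.
    `pred` and `gt` are HxWx3 uint8 arrays in [0, 255]."""
    psnr = peak_signal_noise_ratio(gt, pred, data_range=255)
    ssim = structural_similarity(gt, pred, channel_axis=2, data_range=255)

    # LPIPS expects NCHW float tensors scaled to [-1, 1].
    def to_tensor(a):
        return torch.from_numpy(a).permute(2, 0, 1)[None].float() / 127.5 - 1.0

    with torch.no_grad():
        lp = lpips_alex(to_tensor(pred), to_tensor(gt)).item()
    return {'PSNR': psnr, 'SSIM': ssim, 'LPIPS': lp}
```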
4.1 BENCHMARKING EXISTING MODELS
To validate the performance of existing LLIE methods that were trained using their original training data, we directly use the released models for evaluation on the UHD low-light images. These original training datasets include LOL (Wei et al., 2018), MIT-Adobe-FiveK (Bychkovsky et al., 2011), Exposure-Errors (Afifi et al., 2021), SICE (Cai et al., 2018), LSRW (Hai et al., 2024), and DarkFace (Yang et al., 2020b). EnlightenGAN uses the assemble training data from existing datasets (Wei et al., 2018; Dang-Nguyen et al., 2015; Kalantari & Ramamoorthi, 2017b; Cai et al., 2018). In addition, the LOL-v1 and LOL-v2 contain real low-light images while LOL-syn is a synthetic dataset. Due to the limited space, we only show relatively good results. As shown in Figure 6, all methods can improve the luminance of the input image. However, they fail to produce visually pleasing results. DRBN and EnlightenGAN introduce artifacts. RUAS-LOL and RUAS-DarkFace yield over-exposed results. Color deviation is observed in the results of EnlightenGAN and Afifi et al. All methods cannot handle the noise well and even amplify noise.
We also summarize the quantitative performance of different methods and verify the effectiveness of commonly used non-reference metrics for UHD low-light images in Table 2. URetinex-Net achieves the highest PSNR score while SNR-Aware-LOLv1 is the best performer in terms of SSIM and
1https://github.com/chaofengc/IQA-PyTorch
LPIPS. For non-reference metrics, SCI-difficult, Zhao et al.-LOL, and RUAS-LOL are the winners under MUSIQ, NIQE, and NIMA, respectively. From Figure 6 and Table 2, we found the non-reference metrics designed for generic image quality assessment cannot accurately assess the subjective quality of the enhanced UHD low-light images. For example, RUAS-LOL suffers from obvious over-exposure in the result while it is the best performer under the NIMA metric.
In summary, the performance of existing released models is unsatisfactory when they are used to enhance the UHD low-light images. The darkness, noise, and artifacts still exist in the results. Compared with luminance enhancement, noise is the more significant challenge for these methods. No method can handle the noise issue well. The joint task of luminance enhancement and noise removal raises a new challenge for LLIE, especially under limited computational resources. We also observe a gap between visual results and the scores of non-reference metrics for UHD LLIE. The gap calls for more specialized non-reference metrics for UHD LLIE.
4.2 COMPARING RETRAINED MODELS
Besides the released models, we also retrain existing methods on our UHD-LL training data and compare their performance with our method. Due to the limited space, we only compare our method with several good performers. More results can be found in the Appendix. As shown in Figure 7, our UHDFour produces a clear and normal-light result close to the ground truth. In comparison, ZeroDCE++, RUAS, Afifi et al., SCI, and Restormer experience color deviations. Zero-DCE, ZeroDCE++, RUAS, Zhao et al., Afifi et al., and SCI cannot remove the noise due to the limitations of their network designs. These methods mainly focus on luminance enhancement. SNR-Aware, Uformer, and Restormer have strong modeling capability because of the use of Transformer structures. However, the three methods still leave noise on the results and introduce artifacts.
The quantitative comparison is presented in Table 3. Our UHDFour achieves state-of-the-art performance in terms of PSNR, SSIM, and LPIPS scores and outperforms the compared methods with a
large margin. The Transformer-based SNR-Aware and Restormer rank the second best. Our method has the fastest processing speed for UHD images as most computation is conducted in the LR space.
To further verify the effectiveness of our network, we compare our approach with several methods, including Retinex-Net Wei et al. (2018), Zero-DCE (Guo et al., 2020), AGLLNet (Lv et al., 2021), Zhao et al. (Zhao et al., 2021), RUAS (Liu et al., 2021b), SCI (Ma et al., 2022), and URetinex-Net (Wu et al., 2022), that were pre-trained or fine-tuned on the LOL-v1 and LOL-v2 datasets (Wei et al., 2018). Due to the mild noise and low-resolution images in
the LOL-v1 and LOL-v2 datasets, we change the 8× downsample and upsample operations to 2× and retrain our network. Such characteristics of the LOL-v1 and LOL-v2 datasets prohibit us from showing the full potential of our method in removing noise and processing high-resolution images. Even though our goal is not to pursue state-of-the-art performance on the LOL-v1 and LOL-v2 datasets, our method achieves satisfactory performance as presented in Table 4. The visual results are provided in the Appendix.
4.3 ABLATION STUDY
We present ablation studies to demonstrate the effectiveness of the main components in our design. For the FouSpa Block, we remove the Fourier branch (FB) (#1), remove the Spatial branch (SB) (#2), and replace the FouSpa Block (i.e., without FB and SB) with the Residual Block of comparable parameters (#3). We also replace the FB with the SB (i.e., using two SB) (#4). For the Adjustment Block, we remove the Amplitude Modulation (AM) (#5), remove the Phase Guidance (PG) (#6), remove the SB (#7), and remove both AM and PG (#8). We also replace the AM and PG with two SB (#9), and
replace the Adjustment Block with the Residual Block of comparable parameters (#10). For the final output, we remove the concatenation of the LR normal-clear result (ŷ8), indicated as #11. We also replace all FB with SB, indicated as #12. Unless otherwise stated, all training settings remain unchanged as the implementation of full model, denoted as #13.
The quantitative comparison of the ablated models on the UHD-LL testing set is presented in Table 5. We also show the visual comparison of some ablated models in the Appendix. As shown, all the key designs contribute to the best performance of the full model. Without the Fourier branch (#1), the quantitative scores significantly drop. The result suggests that processing amplitude and phase separately improves the performance of luminance enhancement and noise removal. From the results of #2, the Spatial branch also boosts the performance. However, replacing the FouSpa Block with the Residual Block (#3) cannot achieve comparable performance with the full model (#13), indicating the effectiveness of the FouSpa Block. For the Adjustment Block, the Amplitude Modulation (#5), Phase Guidance (#6), and Spatial branch (#7) jointly verify its effectiveness. Such a block cannot be replaced by a Residual Block (#10). From the results of #11, we can see that it is necessary to estimate the LR result. In addition, replacing the Fourier branch with spatial branch (#4,#9,#12) cannot achieve comparable performance with the full model (#13), showing the efficacy of Fourier branch.
5 CONCLUSION
The success of our method is inspired by the characteristics of real low-light and noisy images in the Fourier domain. Thanks to the unique design of our network that handles luminance and noise in the Fourier domain, it outperforms state-of-the-art methods in UHD LLIE with appealing efficiency. With the contribution of the first real UHD LLIE dataset, it becomes possible to compare existing methods with real UHD low-light images. Our experiments are limited to image enhancement; we have not provided data and benchmarks in the video domain. Our exploration has not considered adversarial losses due to memory constraints. Moreover, as our data is saved in sRGB format, the models trained on our data may fail in processing extreme cases, in which the information is lost due to the limited bit depth. HDR data may be suitable for these cases. Nevertheless, we believe our method and the dataset can bring new opportunities and challenges to the community. The usefulness of Fourier operations may go beyond our work and see potential in areas like image decomposition and disentanglement. With improved efficiency, it may be adopted for applications that demand real-time response, e.g., enhancing the perception of autonomous vehicles in the dark.
6 ETHICS STATEMENT
This study focuses on low-light image enhancement and does not involve any ethics issues. The dataset proposed in this paper also does not involve privacy issues.
7 ACKNOWLEDGEMENT
This study is supported under the RIE2020 Industry Alignment Fund Industry Collaboration Projects (IAF-ICP) Funding Initiative, as well as cash and in-kind contribution from the industry partner(s). It is also partially supported by the NTU NAP grant. Chun-Le Guo is also supported by MindSpore, CANN, and Ascend AI Processor.
A RELATED WORK
Low-Light Image Enhancement Methods. The focus of our work is on deep learning-based LLIE methods (Li et al., 2021a; Liu et al., 2021a). Wang et al. (2019) proposed a network to enhance underexposed photos by estimating an image-to-illumination mapping. EnlightenGAN (Jiang et al., 2021) proposed an attention-guided network to make the generated results indistinguishable from real normal-light images. By formulating light enhancement as a task of image-specific curve estimation that can be trained with non-reference losses, Zero-DCE (Guo et al., 2020) and Zero-DCE++ (Li et al., 2021b) obtain good brightness enhancement. Zhao et al. (2021) treated underexposed image enhancement as image feature transformation between the underexposed image and its paired enhanced version. Liu et al. (2021b) proposed a Retinex model-inspired unrolling method, in which the network structure is obtained by neural architecture search. Afifi et al. (2021) proposed a coarse-to-fine network for exposure correction. Ma et al. (2022) proposed a self-calibrated illumination learning framework using unsupervised losses. Wu et al. (2022) combined the Retinex model with a deep unfolding network, which unfolds an optimization problem into a learnable network. Xu et al. (2022) proposed to exploit the Signal-to-Noise Ratio (SNR)-aware Transformer and convolutional models for LLIE. In this method, long-range attention is used for the low SNR regions while the short-range attention (convolutional layers) is for other regions.
Different from these works, our network takes the challenging joint luminance enhancement, noise removal, and high resolution constraint of UHD low-light images into account in the Fourier domain, endowing new insights on UHD LLIE and achieving better performance.
Low-Light Image Enhancement Datasets. The LOL (Wei et al., 2018) dataset contains pairs of low/normal-light images saved in RGB format, in which the low-light images are collected by changing the exposure time and ISO. Due to the small size, it only covers a small fraction of the noise and darkness levels. The MIT-Adobe FiveK (Bychkovsky et al., 2011) dataset includes paired low-/high-quality images, where the high-quality images are retouched by five experts. The low-quality images are treated as low-light images in some LLIE methods. However, this dataset is originally collected for global tone adjustment, and thus, it ignores noise in its collection. Based on the MIT-Adobe FiveK dataset, a multi-exposure dataset, Exposure-Errors, is rendered to emulate a wide range of exposure errors (Afifi et al., 2021). Similar to the MIT-Adobe FiveK dataset, the Exposure-Errors dataset also neglects the noise issue. SID (Chen et al., 2018) is a RAW data dataset.
The images of the SID dataset have two different sensor patterns (i.e., Bayer pattern and APS-C X-Trans pattern). Due to the specific data pattern, the deep models trained on this dataset are not versatile as they require RAW data with the same pattern as input. Besides, a long-exposure reference image corresponds to multiple short-exposure images, leading to limited scene diversity.
Unlike existing LLIE datasets that either omit noise, capture limited numbers of images, require specific sensor patterns, or exclude UHD images, we propose a real UHD LLIE dataset that contains low-noise/normal-clear image pairs with diverse darkness and noise levels captured in different scenarios. A comprehensive comparison is presented in Sec. 3.
Image Decomposition-based Enhancement. There are some image decomposition-based enhancement methods. For LLIE, Xu et al. (2020) proposed a frequency-based decomposition-and-enhancement model, which suppresses noise in the low-frequency layer and enhances the details in the high-frequency layer. Yang et al. (2020a) proposed a band representation-based semi-supervised model. This method consists of two stages: recursive band learning and band recomposition. Wei et al. (2018) also decomposed a low-light image into an illumination component and a reflectance component according to the Retinex model and then separately enhanced the components. Image decomposition was also used in shadow removal, in which two networks are used to predict shadow parameters and a shadow matte layer (Le & Samaras, 2019).
In addition, assuming that phase preserves high-level semantics while the amplitude contains low-level features, Guo et al. (2022) proposed FPNet, which consists of two stages for image de-raining. The first stage restores the amplitude of rainy images. The second stage then refines the phase of the restored rainy images. To address the limitations of the ResBlock, which may overlook low-frequency information and fails to model long-distance information, Mao et al. (2021) proposed a Residual Fast Fourier Transform with Convolution Block for image deblurring. The Block contains a spatial residual stream and an FFT stream. Pham et al. (2021) proposed a complex-valued neural network with Fourier transform for image denoising. The complex-valued network first converts the noisy image to complex values via Fourier transform, then estimates a complex filter which is applied to the converted noisy image for approaching the complex value of the ground truth image.
Although these methods decompose an image or use Fourier transform in the networks, our design has different motivations. Our motivations are inspired by the uniqueness of the Fourier domain for UHD low-light image enhancement as presented in Figure 2, i.e., luminance and noise can be ‘decomposed’ to a certain extent in the Fourier domain and HR image and its LR versions share similar amplitude patterns. We also design the specific blocks to use these characteristics and solve the high-resolution constraint issue, which have not been explored in previous works.
HDR Imaging. There are some high dynamic range (HDR) reconstruction works (Salih et al., 2012; Wang & Yoon, 2021) that are related to our low-light image enhancement. The HDR reconstruction also can be grouped into multi-image HDR reconstruction and single-image reconstruction. The multi-image methods require to fuse the multiple bracketed exposure low dynamic range (LDR) images. To mitigate artifacts caused by image fusion, several technologies (Srikantha & Sidibe, 2012) have been proposed. For single-image methods, deep learning has achieved impressive performance. In addition to learning end-to-end LDR-to-HDR networks (Kalantari & Ramamoorthi, 2017a; Wu et al., 2018; Yang et al., 2018; Zhang & Lalonde, 2017; Hu et al., 2022b; Eilertsen et al., 2017) some methods either synthesize multiple LDR images with different exposures (Endo et al., 2017) or model the inverse process of the image formation pipeline (Liu et al., 2020). Besides, some works also focus on HDR reconstruction and denoising. For example, (Hu et al., 2022a) proposes a joint multi-scale denoising and tone-mapping framework, which prevents the tone-mapping operator from overwhelming the denoising operator. (Chen et al., 2021b) takes the noise and quantization into consideration for designing the HDR reconstruction network.
B ANALYSIS OF UHD-LL DATASET
We first show the intensity histograms and the SNR distribution of our UHD-LL dataset in Figure 8. The SNR is computed using the same algorithm as the recent LLIE method (Xu et al., 2022). As shown in Figure 8(a) and Figure 8(b), when splitting the training and testing sets, we make the pixel intensity distributions of the training and testing sets consistent to guarantee the rationality of the dataset split. We also plot the SNR distribution of the dataset to show the noise levels in Figure 8(c). The SNR distribution suggests the wide and challenging SNR ranges of our dataset. We show more samples and amplify the resolution of our UHD-LL data in Figure 9.
C FURTHER ANALYSIS OF MOTIVATION
Recall that in Sec. 2.1, we discussed two observations that serve as the motivation to design our network. In particular, (a) Swapping the amplitude of a low-noise image with that of its corresponding normal-clear image produces a normal-noise image and a low-clear image, and (b) The amplitude patterns of an HR normal-clear image and its LR versions are similar and are different from the corresponding HR low-noise counterpart.
We first show more motivation cases in Figures 10, 11, and 12. These visual results suggest the same tendency as the motivations shown in the main paper.
[Figures 10, 11, and 12: (a) swapping amplitude and phase between real low-noise and normal-clear images yields compositional normal-noise and low-clear images; (b) the amplitude patterns of the normal-clear image at full resolution, 4× and 8× downsampling, compared with its low-noise counterpart at full resolution.]
To further analyze our first motivation, we compare the luminance and noise of real normal-clear and low-noise images and compositional low-clear and normal-noise images. To compare the luminance similarity, we compute the average luminance. For real noise level measurement, there is no corresponding metric. Thus, we use the recent multi-scale Transformer MUSIQ (Ke et al., 2021) for image quality assessment. MUSIQ is not sensitive to luminance changes. Moreover, it can be used to measure the noise level as its training dataset contains noisy data and it shows state-of-the-art performance for assessing the quality of natural images. A large MUSIQ value reflects better image quality with less noise and artifacts. We select 50 images from the UHD-LL dataset and compare the average scores. We present the quantitative results in Table 6.
As presented in Table 6, the real normal-clear images have similar luminance values with the compositional normal-noise images while they have similar high MUSIQ values with the compositional low-clear images. Similarly, the real low-noise images have similar luminance values with the compositional low-clear images while they have similar low MUSIQ values with the compositional normal-noise images. The results further suggest that luminance and noise can be decomposed to a certain extent in the Fourier domain. Specifically, luminance would manifest as amplitude while noise is closely related to phase.
For the second motivation, it is difficult to quantify the similarity of amplitude spectrum of different sizes as it cannot be directly interpolated. The full-reference metrics cannot be used in this situation. Hence, we show more visual examples in Figure 10(b), Figure 11(b), and Figure 12(b). The extra examples support our motivation.
D VISUALIZATION IN THE NETWORK
We show the changes of amplitude and phase in our proposed UHDFour network in Figure 13. As shown, the amplitude and phase of our final result are similar to those of the ground truth. Moreover, the amplitude and phase of the low-resolution output ŷ8 are also similar to those of its corresponding ground truth y8. We wish to emphasize that although noise is related to phase, it cannot be explicitly represented in phase imagery format as phase represents the initial position of the wave. Only the combination of amplitude and phase can express a complete image. Moreover, in the feature domain, such relevance is more difficult to represent in an imagery format. Thus, we suggest looking at the similarity between the final result and ground truth, instead of the intermediate features and phase.
E MORE RESULTS ON RELEASED MODELS
We present more comparisons among state-of-the-art methods for restoring UHD low-light images in Figures 14, 15, and 16. This is similar to Figure 6 of the main paper where we compare methods using their original released models. As shown, all existing models cannot handle the UHD low-light images well. Since we cannot infer the full-resolution results of SNR-Aware (Xu et al., 2022) on UHD images, despite using a GPU with 48G memory, some obvious borders appear in its results due to the stitching strategy. The phenomenon also indicates that the commonly used stitching strategy in previous UHD data processing methods is inapplicable to the challenging UHD low-light image enhancement task.
F MORE RESULTS ON RETRAINED MODELS ON UHD-LL
We provide more visual comparisons of our method with retrained state-of-the-art methods on the UHD-LL dataset in Figures 17 and 18.
As the results show, for UHD low-light image enhancement, the retrained models on the UHD-LL dataset still cannot achieve satisfactory results. Noise and artifacts can still be found in their results. The results suggest that joint luminance enhancement and noise removal in the spatial domain is difficult. Our solution effectively handles this challenging problem by embedding Fourier transform in a cascaded network, in which luminance and noise can be decomposed to a certain extent and are processed separately.
G MORE RESULTS ON RETRAINED MODELS ON LOL-V1 AND LOL-V2 DATASETS
We also provide more visual comparisons of our method with the models that were pre-trained or fine-tuned on the LOL-v1 and LOL-v2 datasets in Figures 19, 20, and 21.
As for the low-light images in the LOL-v1 and LOL-v2 datasets, even though the mild noise and low-resolution images prohibit us from showing the full potential of our method in removing noise and processing high-resolution images, our method still achieves satisfactory performance. The results suggest the potential of our solution in different circumstances.
H ABLATION STUDY
We show some visual results of the ablated models in Figure 22. Without the Fourier branch (#1), the ablated model cannot effectively enhance luminance and remove noise. Although the result of
#2 looks better than #1, removing the Spatial branch (#2) also affects the final result. Directly replacing the FouSpa Block with the Residual Block of comparable parameters (#3) cannot obtain a satisfactory result, suggesting the good performance of the FouSpa Block is not because of the use of more parameters. Removing the Amplitude Modulation (#5) results in a visually unpleasing result. The Phase Guidance (#6) and Spatial branch (#7) in the Adjustment Block also contribute to the good performance of the full model. Directly replacing the Adjustment Block with the Residual Block of comparable parameters (#10) still cannot obtain a satisfactory result. Introducing the estimation of the low-resolution result (#11) also leads to a clearer result. The visual comparisons further show the significance of the proposed FouSpa Block and Adjustment Block in dealing with the intricate issue of joint luminance enhancement and noise removal while remaining efficient.
Additionally, we list the computational and time costs of FFT/IFFT operations for processing different scales of features. In our network, we fix the feature channels to 16 and use three different scales, i.e., 8×, 16×, and 32× downsampled versions of the original resolution (3840 × 2160). The results are presented in Table 7. The FFT and IFFT operations have the same computational cost. The difference in running time may be because of the different optimization strategies used in PyTorch. | 1. What is the focus of the paper regarding image enhancement?
2. What are the strengths of the proposed approach, particularly in its novel solution and dataset contribution?
3. What are the weaknesses of the paper, such as the exclusion of traditional methods?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any questions or suggestions regarding the extendibility of the proposed method to other UHD image-related low-level tasks? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper focuses on Ultra-High-Definition (UHD) low-light image enhancement, which is a new task. Different from previous low-light image enhancement, the task presented in the paper faces more challenging and practical issues such as dealing with the intricate issue of joint luminance enhancement and noise removal while remaining efficient. The paper first shows the limitations of previous methods and datasets for processing UHD low-light images. Then, the paper proposes to embed Fourier transform into a cascaded network to address these issues. The designs are interesting and insightful, and they are based on the unique characteristics of the Fourier domain. In comparison to the spatial domain, these characteristics of the Fourier domain make the task easier to deal with. Besides these designs, the paper also presents the first real UHD low-light image enhancement dataset with paired data. The paper also provides some techniques for capturing a dataset of this kind. The proposed solution achieves the best performance on UHD low-light image enhancement with fast inference speed. Also, the solution can be extended and used for processing existing low-light image datasets.
Strengths And Weaknesses
Strengths:
New task. UHD low-light image enhancement task is becoming more and more important, especially on mobile devices. The reviewer is happy to see there is a paper focusing on this issue and proposing a new solution and the first dataset.
Novel solution. Facing challenging issues such as joint luminance enhancement and noise removal under the ultra-high-resolution constraint, this paper proposes an interesting and insightful solution. Different from previous spatial methods, this method takes full advantage of the unique characteristics of the Fourier domain for this task. In the Fourier domain, the amplitude and phase of a low-light image can be separately processed to avoid amplifying noise when enhancing luminance. Moreover, the proposed method is scalable to UHD images by implementing amplitude and phase enhancement under the low-resolution regime and then adjusting the high-resolution scale with few computations. The solution is novel and interesting.
New dataset. This paper contributes the first UHD low-light image enhancement dataset with paired data. The dataset not only provides sufficient and diverse data but also presents some techniques which show how to minimize geometric and photometric misalignment when capturing the paired low-light/normal-light data. The data collection details provide some insights into the pair data collection.
Systematic analysis. This paper systematically analyzes the performance of existing low-light image enhancement methods for processing UHD images. The baselines cover almost all state-of-the-art methods, and the experiments include qualitative and quantitative results. From the results, it is easy to see the advantages and disadvantages of previous methods.
Outstanding performance. The proposed method in the paper achieves state-of-the-art performance for UHD low-light image enhancement with the fastest inference speed. Besides, the proposed method can be slightly modified and then achieves state-of-the-art performance on existing low-light image enhancement datasets even though it is not designed for data of this sort.
Sufficient experiment and analysis. The paper provides sufficient comparison experiments to show the advantages of the proposed method in the main paper and appendix. In addition, the ablation studies and motivation analysis are convincing.
Weaknesses:
The paper analyzes the performance of existing low-light image enhancement methods for processing UHD low-light images. The baselines are all deep learning-based methods. The reviewer wonders whether traditional methods also fail to solve this challenging issue. It would be good if some traditional methods could be included for comparison in the final version.
For Sec. 4.1 (benchmarking existing models), the quantitative scores in terms of non-reference metrics could be removed when the paired data is available. It is widely known that non-reference metrics are not very reliable, especially for low-light images.
In Figures 6-8 and Tables 2-4, for each method, the corresponding reference should be provided to improve the readability of the paper. It is the same for the figures in the appendix.
The reviewer wonders whether the main idea in the Fourier domain can be extended to other UHD image-related low-level tasks.
Clarity, Quality, Novelty And Reproducibility
The paper is of high quality and provides interesting and important insights into UHD low-light image enhancement. The paper is well-written and easy to follow. The proposed method in the paper is original and novel. Besides, the paper presents the first real UHD low-light image enhancement dataset. |
ICLR | Title
Embedding Fourier for Ultra-High-Definition Low-Light Image Enhancement
Abstract
Ultra-High-Definition (UHD) photo has gradually become the standard configuration in advanced imaging devices. The new standard unveils many issues in existing approaches for low-light image enhancement (LLIE), especially in dealing with the intricate issue of joint luminance enhancement and noise removal while remaining efficient. Unlike existing methods that address the problem in the spatial domain, we propose a new solution, UHDFour, that embeds Fourier transform into a cascaded network. Our approach is motivated by a few unique characteristics in the Fourier domain: 1) most luminance information concentrates on amplitudes while noise is closely related to phases, and 2) a high-resolution image and its low-resolution version share similar amplitude patterns. Through embedding Fourier into our network, the amplitude and phase of a low-light image are separately processed to avoid amplifying noise when enhancing luminance. Besides, UHDFour is scalable to UHD images by implementing amplitude and phase enhancement under the low-resolution regime and then adjusting the high-resolution scale with few computations. We also contribute the first real UHD LLIE dataset, UHD-LL, that contains 2,150 low-noise/normal-clear 4K image pairs with diverse darkness and noise levels captured in different scenarios. With this dataset, we systematically analyze the performance of existing LLIE methods for processing UHD images and demonstrate the advantage of our solution. We believe our new framework, coupled with the dataset, would push the frontier of LLIE towards UHD. The code and dataset are available at https://lichongyi.github.io/UHDFour/.
1 INTRODUCTION
With the advent of advanced imaging sensors and displays, Ultra-High-Definition (UHD) imaging has witnessed rapid development in recent years. While UHD imaging offers broad applications and makes a significant difference in picture quality, the extra pixels also challenge the efficiency of existing image processing algorithms.
In this study, we focus on one of the most challenging tasks in image restoration, namely low-light image enhancement (LLIE), where one needs to jointly enhance the luminance and remove inherent noises caused by sensors and dim environments. Further to these challenges, we lift the difficulty by demanding efficient processing in the UHD regime.
Despite the remarkable progress in low-light image enhancement (LLIE) (Li et al., 2021a), existing methods (Zhao et al., 2021; Wu et al., 2022; Xu et al., 2022), as shown in Figure 1, show apparent drawbacks when they are used to process real-world UHD low-light images. This is because (1) most methods (Guo et al., 2020; Liu et al., 2021b; Ma et al., 2022) only focus on luminance enhancement and fail in removing noise; (2) some approaches (Wu et al., 2022; Xu et al., 2022) simultaneously enhance luminance and remove noise in the spatial domain, resulting in the suboptimal enhancement; and (3) existing methods (Wei et al., 2018; Zhao et al., 2021; Wu et al., 2022; Xu et al., 2022) are mainly trained on low-resolution (LR) data, leading to the incompatibility with high-resolution (HR) inputs; and (4) some studies (Xu et al., 2022; Zamir et al., 2022) adopt heavy structures, thus being inefficient for processing UHD images. More discussion on related work is provided in the Appendix.
To overcome the challenges aforementioned, we present a new idea for performing LLIE in the Fourier Domain. Our approach differs significantly from existing solutions that process images in the spatial domain. In particular, our method, named as UHDFour, is motivated by our observation of two interesting phenomena in the Fourier domain of low-light noisy images: i) luminance and noise can be decomposed to a certain extent in the Fourier domain. Specifically, luminance would manifest as amplitude while noise is closely related to phase, and ii) the amplitude patterns of images of different resolutions are similar. These observations inspire the design of our network, which handles luminance and noise separately in the Fourier domain. This design is advantageous as it avoids amplifying noise when enhancing luminance, a common issue encountered in existing spatial domain-based methods. In addition, the fact that amplitude patterns of images of different resolutions are similar motivates us to save computation by first processing in the low-resolution regime and performing essential adjustments only in the high-resolution scale.
We also contribute the first benchmark for UHD LLIE. The dataset, named UHD-LL, contains 2,150 low-noise/normal-clear 4K UHD image pairs with diverse darkness and noise levels captured in different scenarios. Unlike existing datasets (Wei et al., 2018; Lv et al., 2021; Bychkovsky et al., 2011) that either synthesize or retouch low-light images to obtain the paired input and target sets, we capture real image pairs. During data acquisition, special care is taken to minimize geometric and photometric misalignment due to camera shake and dynamic environments. With the new UHD-LL dataset, we design a series of qualitative and quantitative benchmarks to analyze the performance of existing LLIE methods and demonstrate the effectiveness of our method.
Our contributions are summarized as follows: (1) We propose a new solution for UHD LLIE that is inspired by unique characteristics observed in the Fourier domain. In comparison to existing LLIE methods, the proposed framework shows exceptional effectiveness and efficiency in addressing the joint task of luminance enhancement and noise removal in the UHD regime. (2) We contribute the first UHD LLIE dataset, which contains 2,150 pairs of 4K UHD low-noise/normal-clear data, covering diverse noise and darkness levels and scenes. (3) We conduct a systematical analysis of existing LLIE methods on UHD data.
2 OUR APPROACH
In this section, we first discuss our observations in analyzing low-light images in the Fourier domain, and then present the proposed solution.
2.1 OBSERVATIONS IN FOURIER DOMAIN
Here we provide more details to supplement the observations we highlighted in Sec. 1. We analyze real UHD low-light images in the Fourier domain and provide a concise illustration in Figure 2. Specifically, (a) Swapping the amplitude of a low-light and noisy (low-noise) image with that of its corresponding normal-light and clear (normal-clear) image produces a normal-light and noisy (normal-noise) image and a low-light and clear (low-clear) image. We show more examples in the Appendix. The result suggests that the luminance and noise can be decomposed to a certain extent in the Fourier domain. In particular, most luminance information is expressed as amplitudes, and
noises are revealed in phases. This inspires us to process luminance and noise separately in the Fourier domain. (b) The amplitude patterns of an HR normal-clear image and its LR versions are similar and are different from the corresponding HR low-noise counterpart. Such a characteristic offers us the possibility to first enhance the amplitude of an LR scale with more computations and then only make minor adjustments in the HR scale. In this way, most computations can be conducted in the LR space, reducing the computational complexity.
2.2 THE UHDFOUR NETWORK
[Figure: structures of the FouSpa Block (a) and the Adjustment Block (b). Each contains a Fourier branch (FFT, Conv, IFFT operating on the amplitude A and phase P) in parallel with a spatial branch (IN unit); the Adjustment Block additionally uses amplitude modulation and phase guidance.]
Overview. UHDFour aims to map an UHD low-noise input image x ∈ RH×W×C to its corresponding normal-clear version y ∈ RH×W×C , where H , W , and C represent height, width, and channel, respectively. Figure 3 shows the overview of UHDFour. It consists of an LRNet and an HRNet.
Motivated by the observation in Sec. 2.1, LRNet takes the most computation of the whole network. Its input is first embedded into the feature domain by a Conv layer. To reduce computational complexity, we downsample the features to 1/8 of the original resolution by bilinear interpolation. Then, the LR features go through an encoder-decoder network, which contains four FouSpa Blocks with two 2× downsample and two 2× upsample operations, obtaining output features. The output features are respectively fed to FFT to obtain the refined amplitude Ar and phase Pr features and a Conv layer to estimate the LR normal-clear image ŷ8 ∈ RH/8×W/8×C. The outputs of LRNet coupled with the input are fed to the HRNet. Specifically, the input x is first reshaped to xpu ∈ RH/8×W/8×64C via PixelUnshuffle (8× ↓) to preserve original information, and then fed to an Adjustment Block. With the refined amplitude Ar and phase Pr features, the Adjustment Block produces adjusted features that are reshaped to the original height and width of input x via PixelShuffle (8× ↑). Finally, we resize the estimated LR normal-clear image ŷ8 to the original size of input x via bilinear interpolation and combine it with the upsampled features to estimate the final HR normal-clear image ŷ. We detail the key components as follows.
FouSpa Block. In Sec. 2.1, we observe that luminance and noise can be decomposed in the Fourier domain. Hence, we design the FouSpa Block to implement amplitude and phase enhancement in the Fourier domain and feature enhancement in the spatial domain in parallel. As shown in Figure 4(a), the input features are forked into the Fourier and Spatial branches. In the Fourier branch, FFT is first used to obtain the amplitude component (A) and phase component (P ). The two components are separately fed to two Conv layers with 1×1 kernels. Note that when processing amplitude and phase, we only use 1×1 kernels to avoid damaging the structure information. Then, we transform them back to the spatial domain via IFFT and concatenate them with the spatial features enhanced by a Half Instance Normalization (HIN) unit (Chen et al., 2021a). We adopt the HIN unit for its efficiency. The concatenated features are further fed to a Conv layer and then combined with the input features in a residual manner. Although our main motivations are in the Fourier domain, the use of the spatial branch is necessary because the spatial branch and the Fourier branch are complementary: the spatial branch adopts convolution operations that model structural dependencies well in the spatial domain, while the Fourier branch can attend to global information and benefit the disentanglement of energy and degradation.
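A hedged sketch of a FouSpa-style block follows; the exact layer widths are assumptions, and the HIN unit is approximated here by an InstanceNorm + Conv stand-in for brevity.

import torch
import torch.nn as nn

class FouSpaBlockSketch(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.amp_conv = nn.Sequential(nn.Conv2d(ch, ch, 1), nn.ReLU(), nn.Conv2d(ch, ch, 1))
        self.pha_conv = nn.Sequential(nn.Conv2d(ch, ch, 1), nn.ReLU(), nn.Conv2d(ch, ch, 1))
        # Stand-in for the Half Instance Normalization (HIN) unit of the spatial branch.
        self.spatial = nn.Sequential(nn.Conv2d(ch, ch, 3, padding=1),
                                     nn.InstanceNorm2d(ch), nn.ReLU())
        self.fuse = nn.Conv2d(2 * ch, ch, 3, padding=1)

    def forward(self, x):
        spec = torch.fft.fft2(x)
        amp = self.amp_conv(torch.abs(spec))           # 1x1 convs on the amplitude (luminance)
        pha = self.pha_conv(torch.angle(spec))         # 1x1 convs on the phase (noise-related)
        fourier = torch.fft.ifft2(torch.polar(amp, pha)).real   # back to the spatial domain
        spatial = self.spatial(x)                      # spatial-branch features
        return x + self.fuse(torch.cat([fourier, spatial], dim=1))  # residual fusion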
Adjustment Block. The Adjustment Block is the main structure of the HRNet, and it is lightweight. As shown in Figure 4(b), the Adjustment Block shares a similar structure with the FouSpa Block. Differently, in the Fourier branch, with the refined amplitude Ar features obtained from the LRNet, we use Spatial Feature Transform (SFT) (Wang et al., 2018) to modulate the amplitude features of the input xpu via a simple affine transformation. Such a transformation or adjustment is possible because the luminance, as global information, manifests in the amplitude components, and the amplitude patterns of an HR scale and its LR scales are similar (as discussed in Sec. 2.1). Note that we cannot modulate the phase in the same way because of its periodicity. Besides, we do not find an explicit relationship between the phase of the HR scale and that of its LR scales. However, we empirically find that concatenating the refined phase Pr features obtained from the LRNet with the phase features of the input xpu improves the final performance. We thus apply such concatenation in our solution.
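A hedged sketch of an Adjustment-style block is given below; the channel widths and the exact form of the SFT layers are assumptions.

import torch
import torch.nn as nn

class AdjustmentBlockSketch(nn.Module):
    def __init__(self, ch, guide_ch):
        super().__init__()
        self.to_scale = nn.Conv2d(guide_ch, ch, 1)     # SFT scale predicted from A_r
        self.to_shift = nn.Conv2d(guide_ch, ch, 1)     # SFT shift predicted from A_r
        self.pha_fuse = nn.Conv2d(ch + guide_ch, ch, 1)
        self.spatial = nn.Sequential(nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU())
        self.fuse = nn.Conv2d(2 * ch, ch, 3, padding=1)

    def forward(self, x_pu, a_r, p_r):                 # all inputs share the LR spatial size
        spec = torch.fft.fft2(x_pu)
        amp, pha = torch.abs(spec), torch.angle(spec)
        amp = amp * self.to_scale(a_r) + self.to_shift(a_r)       # amplitude modulation
        pha = self.pha_fuse(torch.cat([pha, p_r], dim=1))          # phase guidance by concatenation
        fourier = torch.fft.ifft2(torch.polar(amp, pha)).real
        spatial = self.spatial(x_pu)
        return x_pu + self.fuse(torch.cat([fourier, spatial], dim=1))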
Losses. We use l1 losses to supervise ŷ8 and ŷ. We also add a perceptual loss to supervise ŷ8, while using a perceptual loss on ŷ is impractical because of its high resolution. Instead, we add an SSIM loss Lssim on ŷ. The final loss L is the combination of these losses:
L = ∥ŷ−y∥1+0.0004×Lssim(ŷ, y)+0.1×∥ŷ8−y8∥1+0.0002×∥VGG(ŷ8)−VGG(y8)∥2, (1)
where y is the ground truth, y8 is the 8× downsampled version of y, and VGG is the pre-trained VGG19 network, in which we use four scales to supervise training, following Zhou et al. (2022).
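A minimal sketch of this loss is shown below; ssim_loss and vgg_feat are assumed callables (any differentiable SSIM loss and a pre-trained VGG19 feature extractor), and only a single VGG scale is shown for brevity.

import torch
import torch.nn.functional as F

def uhdfour_loss(y_hat, y8_hat, y, ssim_loss, vgg_feat):
    """y_hat / y: HR prediction and ground truth; y8_hat: LR prediction (1/8 scale)."""
    y8 = F.interpolate(y, scale_factor=1 / 8, mode="bilinear")
    loss = F.l1_loss(y_hat, y)                                    # l1 on the HR output
    loss = loss + 0.0004 * ssim_loss(y_hat, y)                    # SSIM-based term on the HR output
    loss = loss + 0.1 * F.l1_loss(y8_hat, y8)                     # l1 on the LR output
    with torch.no_grad():
        feat_gt = vgg_feat(y8)                                    # ground-truth VGG features
    loss = loss + 0.0002 * F.mse_loss(vgg_feat(y8_hat), feat_gt)  # perceptual term (one scale shown)
    return loss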
3 UHD-LL DATASET
We collect a real low-noise/normal-clear paired image dataset that contains 2,150 pairs of 4K UHD data saved in 8-bit sRGB format. Several samples are shown in Figure 5.
Images are collected with a camera mounted on a tripod to ensure stability. Two cameras, i.e., a Sony α7 III camera and a Sony Alpha a6300 camera, are used to offer diversity. The ground truth (or normal-clear) image is captured with a small ISO ∈ [100, 800] in a bright scene (indoor or outdoor). The corresponding low-noise image is acquired by increasing the ISO ∈ [1000, 20000] and reducing the exposure time. Due to the constraints of the exposure settings of the cameras, shooting in the large ISO range may produce bright images, which opposes the purpose of capturing low-light and noisy
images. Thus, in some cases, we put a neutral-density (ND) filter with different ratios on the camera lens to capture low-noise images. In this way, we can increase the ISO to generate heavier noise and simultaneously obtain extremely dark images, enriching the diversity of darkness and noise levels.
The main challenge of collecting paired data is to reduce misalignment caused by camera shakes and dynamic objects. We take several measures to ameliorate the issue. Apart from using a tripod, we also use remote control software (Imaging Edge) to adjust the exposure time and ISO value to avoid any physical contact with the camera. To further reduce subtle misalignments, we adopt an image alignment algorithm (Evangelidis & Psarakis, 2008) to estimate the affine matrix and align the low-light image and its ground truth. We improve the alignment method by applying AdaIN (Huang & Belongie, 2017) before the affine matrix estimation to reduce the intensity gap. Finally, we hire annotators to check all paired images carefully and discard those that still exhibit misalignments.
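A hedged sketch of this alignment step is given below: AdaIN-style channel statistics matching to reduce the intensity gap, followed by OpenCV's ECC-based affine estimation. The iteration count and termination threshold are assumptions, and some OpenCV versions additionally require a gaussFiltSize argument for findTransformECC.

import cv2
import numpy as np

def adain_match(low, ref, eps=1e-6):
    """Match per-channel mean/std of `low` to those of `ref` (float32 images)."""
    out = np.empty_like(low)
    for c in range(low.shape[2]):
        mu_l, std_l = low[..., c].mean(), low[..., c].std() + eps
        mu_r, std_r = ref[..., c].mean(), ref[..., c].std() + eps
        out[..., c] = (low[..., c] - mu_l) / std_l * std_r + mu_r
    return out

def align_pair(low, ref):
    """Estimate an affine warp from the low-light image to its ground truth and apply it."""
    low_m = adain_match(low.astype(np.float32), ref.astype(np.float32))
    g_low = cv2.cvtColor(low_m, cv2.COLOR_BGR2GRAY)
    g_ref = cv2.cvtColor(ref.astype(np.float32), cv2.COLOR_BGR2GRAY)
    warp = np.eye(2, 3, dtype=np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 200, 1e-6)
    _, warp = cv2.findTransformECC(g_ref, g_low, warp, cv2.MOTION_AFFINE, criteria)
    h, w = ref.shape[:2]
    return cv2.warpAffine(low, warp, (w, h),
                          flags=cv2.INTER_LINEAR + cv2.WARP_INVERSE_MAP)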
We split the UHD-LL dataset into two parts: 2,000 pairs for training and 115 pairs for testing. The training and test partitions are exclusive in their scenes and data. We also ensure consistency in pixel intensity distribution between the training and test splits. More analysis of this data, e.g., the pixel intensity and Signal-to-Noise Ratio (SNR) distributions, can be found in the Appendix.
A comparison between our UHD-LL dataset and existing paired low-light image datasets is presented in Table 1. The LOL dataset (two versions: LOL-v1: 500 images; LOL-v2: 789 images) is most related to our UHD-LL dataset as both focus on real low-light images with noise. LOL-v2 contains all images of LOL-v1. In contrast to the LOL dataset, our dataset features a more extensive collection, where diverse darkness and noise levels from rich types of scenes are considered. Moreover, the images of our dataset have higher resolutions than those from the LOL dataset. As shown in Figure 1, the models pre-trained on the LOL dataset cannot handle the cases in our UHD-LL dataset due to its insufficient training data, which is low-resolution and contains mostly mild noise. Different from the SID dataset, which focuses on RAW data, our dataset only covers data in sRGB format. The images in the SID dataset are captured in extremely dark scenes, so its diversity of darkness levels and scenes is limited. When these RAW data with extremely low intensity are transformed into sRGB images, some information is truncated due to the bit-depth constraints of 8-bit sRGB images. In this case, it is challenging to train a network that effectively maps noisy and low-light images to clear and normal-light images using these sRGB images as training data.
4 EXPERIMENTS
Implementation. We implement our method with PyTorch and train it on six NVIDIA Tesla V100 GPUs. We use the ADAM optimizer for network optimization. The learning rate is set to 0.0001. A batch size of 6 is applied. We fix the channels of each Conv layer to 16, except for the Conv layers associated with outputs. We use a Conv layer with stride = 2 and 4×4 kernels to implement the 2× downsample operation in the encoder and interpolation to implement the 2× upsample operation in the decoder of the LRNet. Unless otherwise stated, the Conv layers use stride = 1 and 3×3 kernels. We use the training data in the UHD-LL dataset to train our model. Images are randomly cropped into patches of size 512 × 512 for training. Compared Methods. We include 14 state-of-the-art methods (21 models in total) for our benchmarking study and performance comparison. These methods include 12 light enhancement methods: NPE (TIP’13) (Wang et al., 2013), SRIE (CVPR’16) (Fu et al., 2016), DRBN (CVPR’20) (Yang et al., 2020a), Zero-DCE (CVPR’20) (Guo et al., 2020), Zero-DCE++ (TPAMI’21) (Li et al., 2021b), RUAS (CVPR’21) (Liu et al., 2021b), Zhao et al. (ICCV’21) (Zhao et al., 2021), Enlighten-
GAN (TIP’21) (Jiang et al., 2021), Afifi et al. (CVPR’21) (Afifi et al., 2021), SCI (CVPR’22) (Ma et al., 2022), SNR-Aware (CVPR’22) (Xu et al., 2022), URetinex-Net (CVPR’22) (Wu et al., 2022) and 2 Transformers: Uformer (CVPR’22) (Wang et al., 2022) and Restormer (CVPR’22) (Zamir et al., 2022). We use their released models and also retrain them using the same training data as our method. Note that some methods provide different models trained using different datasets. Due to the heavy models used in Restormer (Zamir et al., 2022) and SNR-Aware (Xu et al., 2022), we cannot infer the full-resolution results of both methods on UHD images, despite using a GPU with 48G memory. Following previous UHD study (Zheng et al., 2021), we resort to two strategies for this situation: (1) We downsample the input to the largest size that the model can handle and then resize the result to the original resolution, denoted by the subscript ‘resize’. (2) We split the input into four patches without overlapping and then stitch the result, denoted by the subscript ‘stitch’.
Evaluation Metrics. We employ the full-reference image quality assessment metrics PSNR, SSIM (Wang et al., 2004), and LPIPS (Alex version) (Zhang et al., 2018) to quantify the performance of different methods. We also adopt the non-reference image quality evaluator (NIQE) (Mittal et al., 2013) and the multi-scale image quality Transformer (MUSIQ) (trained on the KonIQ-10k dataset) (Ke et al., 2021) for assessing the restoration quality. We notice that the quantitative results reported by different papers diverge. For a fair comparison, we adopt the commonly-used IQA PyTorch Toolbox1 to compute the quantitative results of all compared methods. We also report the number of trainable parameters and the running time for processing UHD 4K data.
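For reference, the full-reference scores could be computed with that toolbox roughly as sketched below; the metric names follow the toolbox's conventions and should be treated as assumptions rather than guaranteed API.

import pyiqa
import torch

psnr = pyiqa.create_metric("psnr")
ssim = pyiqa.create_metric("ssim")
lpips = pyiqa.create_metric("lpips")

def score_pair(pred: torch.Tensor, gt: torch.Tensor):
    # pred, gt: (1, 3, H, W) tensors with values in [0, 1]
    return {"PSNR": psnr(pred, gt).item(),
            "SSIM": ssim(pred, gt).item(),
            "LPIPS": lpips(pred, gt).item()}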
4.1 BENCHMARKING EXISTING MODELS
To validate the performance of existing LLIE methods that were trained using their original training data, we directly use the released models for evaluation on the UHD low-light images. These original training datasets include LOL (Wei et al., 2018), MIT-Adobe-FiveK (Bychkovsky et al., 2011), Exposure-Errors (Afifi et al., 2021), SICE (Cai et al., 2018), LSRW (Hai et al., 2024), and DarkFace (Yang et al., 2020b). EnlightenGAN uses the assembled training data from existing datasets (Wei et al., 2018; Dang-Nguyen et al., 2015; Kalantari & Ramamoorthi, 2017b; Cai et al., 2018). In addition, LOL-v1 and LOL-v2 contain real low-light images while LOL-syn is a synthetic dataset. Due to the limited space, we only show relatively good results. As shown in Figure 6, all methods can improve the luminance of the input image. However, they fail to produce visually pleasing results. DRBN and EnlightenGAN introduce artifacts. RUAS-LOL and RUAS-DarkFace yield over-exposed results. Color deviation is observed in the results of EnlightenGAN and Afifi et al. None of the methods handles the noise well, and some even amplify it.
We also summarize the quantitative performance of different methods and verify the effectiveness of commonly used non-reference metrics for UHD low-light images in Table 2. URetinex-Net achieves the highest PSNR score while SNR-Aware-LOLv1 is the best performer in terms of SSIM and
1https://github.com/chaofengc/IQA-PyTorch
LPIPS. For non-reference metrics, SCI-difficult, Zhao et al.-LOL, and RUAS-LOL are the winners under MUSIQ, NIQE, and NIMA, respectively. From Figure 6 and Table 2, we find that the non-reference metrics designed for generic image quality assessment cannot accurately assess the subjective quality of the enhanced UHD low-light images. For example, RUAS-LOL suffers from obvious over-exposure in its result while it is the best performer under the NIMA metric.
In summary, the performance of existing released models is unsatisfactory when they are used to enhance the UHD low-light images. The darkness, noise, and artifacts still exist in the results. Compared with luminance enhancement, noise is the more significant challenge for these methods. No method can handle the noise issue well. The joint task of luminance enhancement and noise removal raises a new challenge for LLIE, especially under limited computational resources. We also observe a gap between visual results and the scores of non-reference metrics for UHD LLIE. The gap calls for more specialized non-reference metrics for UHD LLIE.
4.2 COMPARING RETRAINED MODELS
Besides the released models, we also retrain existing methods on our UHD-LL training data and compare their performance with our method. Due to the limited space, we only compare our method with several good performers. More results can be found in the Appendix. As shown in Figure 7, our UHDFour produces a clear and normal-light result close to the ground truth. In comparison, Zero-DCE++, RUAS, Afifi et al., SCI, and Restormer experience color deviations. Zero-DCE, Zero-DCE++, RUAS, Zhao et al., Afifi et al., and SCI cannot remove the noise due to the limitations of their network designs. These methods mainly focus on luminance enhancement. SNR-Aware, Uformer, and Restormer have strong modeling capability because of the use of Transformer structures. However, the three methods still leave noise in the results and introduce artifacts.
The quantitative comparison is presented in Table 3. Our UHDFour achieves state-of-the-art performance in terms of PSNR, SSIM, and LPIPS scores and outperforms the compared methods by a
large margin. The Transformer-based SNR-Aware and Restormer rank second best. Our method has the fastest processing speed for UHD images as most computation is conducted in the LR space.
To further verify the effectiveness of our network, we compare our approach with several methods, including Retinex-Net Wei et al. (2018), Zero-DCE (Guo et al., 2020), AGLLNet (Lv et al., 2021), Zhao et al. (Zhao et al., 2021), RUAS (Liu et al., 2021b), SCI (Ma et al., 2022), and URetinex-Net (Wu et al., 2022), that were pre-trained or fine-tuned on the LOL-v1 and LOL-v2 datasets (Wei et al., 2018). Due to the mild noise and low-resolution images in
the LOL-v1 and LOL-v2 datasets, we change the 8× downsample and upsample operations to 2× and retrain our network. Such characteristics of the LOL-v1 and LOL-v2 datasets prevent us from showing the full potential of our method in removing noise and processing high-resolution images. Even though our goal is not to pursue state-of-the-art performance on the LOL-v1 and LOL-v2 datasets, our method achieves satisfactory performance as presented in Table 4. The visual results are provided in the Appendix.
4.3 ABLATION STUDY
We present ablation studies to demonstrate the effectiveness of the main components in our design. For the FouSpa Block, we remove the Fourier branch (FB) (#1), remove the Spatial branch (SB) (#2), and replace the FouSpa Block (i.e., without FB and SB) with the Residual Block of comparable parameters (#3). We also replace the FB with the SB (i.e., using two SB) (#4). For the Adjustment Block, we remove the Amplitude Modulation (AM) (#5), remove the Phase Guidance (PG) (#6), remove the SB (#7), and remove both AM and PG (#8). We also replace the AM and PG with two SB (#9), and
replace the Adjustment Block with the Residual Block of comparable parameters (#10). For the final output, we remove the concatenation of the LR normal-clear result (ŷ8), indicated as #11. We also replace all FB with SB, indicated as #12. Unless otherwise stated, all training settings remain the same as those of the full model, denoted as #13.
The quantitative comparison of the ablated models on the UHD-LL testing set is presented in Table 5. We also show the visual comparison of some ablated models in the Appendix. As shown, all the key designs contribute to the best performance of the full model. Without the Fourier branch (#1), the quantitative scores significantly drop. The result suggests that processing amplitude and phase separately improves the performance of luminance enhancement and noise removal. From the results of #2, the Spatial branch also boosts the performance. However, replacing the FouSpa Block with the Residual Block (#3) cannot achieve comparable performance with the full model (#13), indicating the effectiveness of the FouSpa Block. For the Adjustment Block, the Amplitude Modulation (#5), Phase Guidance (#6), and Spatial branch (#7) jointly verify its effectiveness. Such a block cannot be replaced by a Residual Block (#10). From the results of #11, we can see that it is necessary to estimate the LR result. In addition, replacing the Fourier branch with spatial branch (#4,#9,#12) cannot achieve comparable performance with the full model (#13), showing the efficacy of Fourier branch.
5 CONCLUSION
The success of our method is inspired by the characteristics of real low-light and noisy images in the Fourier domain. Thanks to the unique design of our network that handles luminance and noise in the Fourier domain, it outperforms state-of-the-art methods in UHD LLIE with appealing efficiency. With the contribution of the first real UHD LLIE dataset, it becomes possible to compare existing methods on real UHD low-light images. Our experiments are limited to image enhancement; we have not provided data and benchmarks in the video domain. Our exploration has not considered adversarial losses due to memory constraints. Moreover, as our data is saved in sRGB format, the models trained on our data may fail in processing extreme cases, in which information is lost due to the limited bit depth. HDR data may be suitable for these cases. Nevertheless, we believe our method and the dataset can bring new opportunities and challenges to the community. The usefulness of Fourier operations may go beyond our work and see potential in areas like image decomposition and disentanglement. With improved efficiency, it may be adopted for applications that demand real-time response, e.g., enhancing the perception of autonomous vehicles in the dark.
6 ETHICS STATEMENT
This study focuses on low-light image enhancement and does not involve any ethics issues. The dataset proposed in this paper also does not involve privacy issues.
7 ACKNOWLEDGEMENT
This study is supported under the RIE2020 Industry Alignment Fund Industry Collaboration Projects (IAF-ICP) Funding Initiative, as well as cash and in-kind contribution from the industry partner(s). It is also partially supported by the NTU NAP grant. Chun-Le Guo is also supported by MindSpore, CANN, and Ascend AI Processor.
A RELATED WORK
Low-Light Image Enhancement Methods. The focus of our work is on deep learning-based LLIE methods (Li et al., 2021a; Liu et al., 2021a). Wang et al. (2019) proposed a network to enhance underexposed photos by estimating an image-to-illumination mapping. EnlightenGAN (Jiang et al., 2021) proposed an attention-guided network to make the generated results indistinguishable from real normal-light images. By formulating light enhancement as a task of image-specific curve estimation that can be trained with non-reference losses, Zero-DCE (Guo et al., 2020) and Zero-DCE++ (Li et al., 2021b) achieve good brightness enhancement. Zhao et al. (2021) treated underexposed image enhancement as image feature transformation between the underexposed image and its paired enhanced version. Liu et al. (2021b) proposed a Retinex model-inspired unrolling method, in which the network structure is obtained by neural architecture search. Afifi et al. (2021) proposed a coarse-to-fine network for exposure correction. Ma et al. (2022) proposed a self-calibrated illumination learning framework using unsupervised losses. Wu et al. (2022) combined the Retinex model with a deep unfolding network, which unfolds an optimization problem into a learnable network. Xu et al. (2022) proposed to exploit Signal-to-Noise Ratio (SNR)-aware Transformer and convolutional models for LLIE. In this method, long-range attention is used for the low-SNR regions while short-range attention (convolutional layers) is used for other regions.
Different from these works, our network takes the challenging joint luminance enhancement, noise removal, and high resolution constraint of UHD low-light images into account in the Fourier domain, endowing new insights on UHD LLIE and achieving better performance.
Low-Light Image Enhancement Datasets. The LOL dataset (Wei et al., 2018) contains pairs of low/normal-light images saved in RGB format, in which the low-light images are collected by changing the exposure time and ISO. Due to its small size, it only covers a small fraction of the noise and darkness levels. The MIT-Adobe FiveK dataset (Bychkovsky et al., 2011) includes paired low-/high-quality images, where the high-quality images are retouched by five experts. The low-quality images are treated as low-light images in some LLIE methods. However, this dataset was originally collected for global tone adjustment, and thus, it ignores noise in its collection. Based on the MIT-Adobe FiveK dataset, a multi-exposure dataset, Exposure-Errors, is rendered to emulate a wide range of exposure errors (Afifi et al., 2021). Similar to the MIT-Adobe FiveK dataset, the Exposure-Errors dataset also neglects the noise issue. SID (Chen et al., 2018) is a RAW-data dataset.
The images of the SID dataset have two different sensor patterns (i.e., the Bayer pattern and the APS-C X-Trans pattern). Due to the specific data pattern, the deep models trained on this dataset are not versatile as they require RAW data with the same pattern as input. Besides, a long-exposure reference image corresponds to multiple short-exposure images, leading to limited scene diversity.
Unlike existing LLIE datasets that either omit noise, capture limited numbers of images, require specific sensor patterns, or exclude UHD images, we propose a real UHD LLIE dataset that contains low-noise/normal-clear image pairs with diverse darkness and noise levels captured in different scenarios. A comprehensive comparison is presented in Sec. 3.
Image Decomposition-based Enhancement. There are some image decomposition-based enhancement methods. For LLIE, Xu et al. (2020) proposed a frequency-based decomposition-and-enhancement model, which suppresses noise in the low-frequency layer and enhances the details in the high-frequency layer. Yang et al. (2020a) proposed a band representation-based semi-supervised model. This method consists of two stages: recursive band learning and band recomposition. Wei et al. (2018) also decomposed a low-light image into an illumination component and a reflectance component according to the Retinex model and then separately enhanced the components. Image decomposition was also used in shadow removal, in which two networks are used to predict shadow parameters and a matte layer (Le & Samaras, 2019).
In addition, assuming that phase preserves high-level semantics while the amplitude contains low-level features, Guo et al. (2022) proposed FPNet, which consists of two stages for image de-raining. The first stage restores the amplitude of rainy images. The second stage then refines the phase of
the restored rainy images. To solve the limitations of the ResBlock, which may overlook low-frequency information and fail to model long-distance information, Mao et al. (2021) proposed a Residual Fast Fourier Transform with Convolution Block for image deblurring. The block contains a spatial residual stream and an FFT stream. Pham et al. (2021) proposed a complex-valued neural network with Fourier transform for image denoising. The complex-valued network first converts the noisy image to complex values via the Fourier transform, then estimates a complex filter which is applied to the converted noisy image to approach the complex values of the ground-truth image.
Although these methods decompose an image or use the Fourier transform in their networks, our design has different motivations. Our motivations are inspired by the uniqueness of the Fourier domain for UHD low-light image enhancement as presented in Figure 2, i.e., luminance and noise can be ‘decomposed’ to a certain extent in the Fourier domain, and an HR image and its LR versions share similar amplitude patterns. We also design specific blocks to exploit these characteristics and address the high-resolution constraint, which has not been explored in previous works.
HDR Imaging. There are some high dynamic range (HDR) reconstruction works (Salih et al., 2012; Wang & Yoon, 2021) that are related to our low-light image enhancement. HDR reconstruction can also be grouped into multi-image reconstruction and single-image reconstruction. The multi-image methods require fusing multiple bracketed-exposure low dynamic range (LDR) images. To mitigate artifacts caused by image fusion, several techniques (Srikantha & Sidibe, 2012) have been proposed. For single-image methods, deep learning has achieved impressive performance. In addition to learning end-to-end LDR-to-HDR networks (Kalantari & Ramamoorthi, 2017a; Wu et al., 2018; Yang et al., 2018; Zhang & Lalonde, 2017; Hu et al., 2022b; Eilertsen et al., 2017), some methods either synthesize multiple LDR images with different exposures (Endo et al., 2017) or model the inverse process of the image formation pipeline (Liu et al., 2020). Besides, some works also focus on joint HDR reconstruction and denoising. For example, Hu et al. (2022a) propose a joint multi-scale denoising and tone-mapping framework, which prevents the tone-mapping operator from overwhelming the denoising operator. Chen et al. (2021b) take noise and quantization into consideration for designing the HDR reconstruction network.
B ANALYSIS OF UHD-LL DATASET
We first show the intensity histograms and the SNR distribution of our UHD-LL dataset in Figure 8. The SNR is computed using the same algorithm as the recent LLIE method (Xu et al., 2022). As shown in Figure 8(a) and Figure 8(b), when splitting the training and testing sets, we make the pixel intensity distributions of the training and testing sets consistent to ensure a reasonable dataset split. We also plot the SNR distribution of the dataset to show the noise levels in Figure 8(c). The SNR distribution shows that our dataset covers a wide and challenging range of SNRs. We show more samples of our UHD-LL data, with enlarged views, in Figure 9.
C FURTHER ANALYSIS OF MOTIVATION
Recall that in Sec. 2.1, we discussed two observations that serve as the motivation to design our network. In particular, (a) Swapping the amplitude of a low-noise image with that of its corresponding normal-clear image produces a normal-noise image and a low-clear image, and (b) The amplitude patterns of an HR normal-clear image and its LR versions are similar and are different from the corresponding HR low-noise counterpart.
We first show more motivation cases in Figures 10, 11, and 12. These visual results suggest the same tendency as the motivations shown in the main paper.
(Figures 10–12: (a) amplitude/phase swaps between real low-noise and normal-clear pairs, yielding compositional normal-noise and low-clear images; (b) amplitude spectra of a full-resolution normal-clear image and its 4× and 8× downsampled versions, compared with the full-resolution low-noise counterpart.)
To further analyze our first motivation, we compare the luminance and noise of real normal-clear and low-noise images and compositional low-clear and normal-noise images. To compare the luminance similarity, we compute the average luminance. For real noise level measurement, there is no corresponding metric. Thus, we use the recent multi-scale Transformer MUSIQ (Ke et al., 2021) for image quality assessment. MUSIQ is not sensitive to luminance changes. Moreover, it can be used to measure the noise level as its training dataset contains noisy data and it shows state-of-the-art performance for assessing the quality of natural images. A large MUSIQ value reflects better image quality with less noise and artifacts. We select 50 images from the UHD-LL dataset and compare the average scores. We present the quantitative results in Table 6.
As presented in Table 6, the real normal-clear images have luminance values similar to the compositional normal-noise images while they have high MUSIQ values similar to the compositional low-clear images. Similarly, the real low-noise images have luminance values similar to the compositional low-clear images while they have low MUSIQ values similar to the compositional normal-noise images. The results further suggest that luminance and noise can be decomposed to a certain extent in the Fourier domain. Specifically, luminance manifests in the amplitude while noise is closely related to the phase.
For the second motivation, it is difficult to quantify the similarity of amplitude spectra of different sizes, as they cannot be directly interpolated. Full-reference metrics cannot be used in this situation. Hence, we show more visual examples in Figure 10(b), Figure 11(b), and Figure 12(b). The extra examples support our motivation.
D VISUALIZATION IN THE NETWORK
We show the changes of amplitude and phase in our proposed UHDFour network in Figure 13. As shown, the amplitude and phase of our final result are similar to those of the ground truth. Moreover, the amplitude and phase of the low-resolution output ŷ8 are also similar to those of its corresponding ground truth y8. We wish to emphasize that although noise is related to phase, it cannot be explicitly visualized in a phase image, as phase represents the initial position of the wave. Only the combination of amplitude and phase can express a complete image. Moreover, in the feature domain, such relevance is even more difficult to represent in an imagery format. Thus, we suggest examining the similarity between the final result and the ground truth, instead of the intermediate features and phase.
E MORE RESULTS ON RELEASED MODELS
We present more comparisons among state-of-the-art methods for restoring UHD low-light images in Figures 14, 15, and 16. This is similar to Figure 6 of the main paper where we compare methods using their original released models. As shown, all existing models cannot handle the UHD low-light images well. Since we cannot infer the full-resolution results of SNR-Aware (Xu et al., 2022) on UHD images, despite using a GPU with 48G memory, some obvious borders appear in its results due to the stitching strategy. The phenomenon also indicates that the commonly used stitching strategy in previous UHD data processing methods is inapplicable to the challenging UHD low-light image enhancement task.
F MORE RESULTS ON RETRAINED MODELS ON UHD-LL
We provide more visual comparisons of our method with retrained state-of-the-art methods on the UHD-LL dataset in Figures 17 and 18.
As the results show, for UHD low-light image enhancement, the retrained models on the UHD-LL dataset still cannot achieve satisfactory results. Noise and artifacts can still be found in their results. The results suggest that joint luminance enhancement and noise removal in the spatial domain is difficult. Our solution effectively handles this challenging problem by embedding the Fourier transform in a cascaded network, in which luminance and noise can be decomposed to a certain extent and are processed separately.
G MORE RESULTS ON RETRAINED MODELS ON LOL-V1 AND LOL-V2 DATASETS
We also provide more visual comparisons of our method with the models that were pre-trained or fine-tuned on the LOL-v1 and LOL-v2 datasets in Figures 19, 20, and 21.
As for the low-light images in the LOL-v1 and LOL-v2 datasets, even though the mild noise and low-resolution images prevent us from showing the full potential of our method in removing noise and processing high-resolution images, our method still achieves satisfactory performance. The results suggest the potential of our solution in different circumstances.
H ABLATION STUDY
We show some visual results of the ablated models in Figure 22. Without the Fourier branch (#1), the ablated model cannot effectively enhance luminance and remove noise. Although the result of
#2 looks better than #1, the Spatial branch (#2) also affects the final result. Directly replacing the FouSpa Block with the Residual Block of comparable parameters (#3) cannot obtain a satisfactory result, suggesting that the good performance of the FouSpa Block is not simply due to more parameters. Removing the Amplitude Modulation (#5) results in a visually unpleasing result. The Phase Guidance (#6) and Spatial branch (#7) in the Adjustment Block also contribute to the good performance of the full model. Directly replacing the Adjustment Block with the Residual Block of comparable parameters (#10) still cannot obtain a satisfactory result. Estimating the low-resolution result (#11) also contributes to a clear final result. The visual comparisons further show the significance of the proposed FouSpa Block and Adjustment Block in dealing with the intricate issue of joint luminance enhancement and noise removal while remaining efficient.
Additionally, we list the computational and time costs of FFT/IFFT operations for processing different scales of features. In our network, we fix the feature channels to 16 and use three different scales: 8×, 16×, and 32× downsampled from the original resolution (i.e., 3840 × 2160). The results are presented in Table 7. The FFT and IFFT operations have the same computational cost. The difference in running time may be because of the different optimization strategies used in PyTorch. | 1. What is the main contribution of the paper regarding image enhancement?
2. What are the strengths and weaknesses of the proposed UHDFour method?
3. Do you have any concerns or questions about the two observations driving the method's design?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper proposes a new method (UHDFour) for enhancing ultra high-resolution low-light noisy images. The proposed method is based on two observations. The first observation is that most luminance information exists on amplitudes while noise exists on phases of the Fourier domain. This observation drives the proposed UHDFour method to process amplitudes and phases separately for luminance enhancement and noise removal. The second observation is that a high-resolution image and its corresponding low-resolution version have similar amplitude patterns, which results in a cascaded network design that first processes a low-resolution image and then processes a high-resolution one. The authors construct a new ultra high-resolution image dataset (UHD-LL). The authors conduct experiments to justify the motivations and the effectiveness of proposed method.
Strengths And Weaknesses
Strong points.
A new method is proposed for ultra high-resolution low-light noisy image enhancement with notable performance advantages against existing methods.
The network design has a clear motivation with some interesting experiments to justify the observations.
A new ultra high-resolution dataset of 2150 low-noise/normal-clear paired 4K images is constructed.
The paper is written well and easy to follow.
Code and dataset will be released.
Weak points.
The first observation is justified via visual illustrations and quantitative experiments while the second one is illustrated by visual comparisons. It would be better if the authors can provide some interpretations. Can authors explain from the FFT why these phenomena happen? The second observation only mentions that HR images and their corresponding LR images share similar amplitude patterns, but how about the phases? Do they have similar patterns? If not, would the noise removal in LR version affect that in HR images?
In the 3rd paragraph, Introduction, there are some vague descriptions when talking about existing methods, e.g., "existing methods are mainly..." and "some studies...". I suggest adding references here so that readers can have direct references.
The LRNet/FouSpa block designs seem not to be aligned with the observations. For the FouSpa block, it is mentioned that "we observe that luminance and noise can be decomposed in the Fourier domain", but the design (Fig. 4) shows that the Fourier branch only separately processes the A and P, then combines them together using IFFT. There is still a Spatial branch that works with the Fourier branch. It is not clear why there should still have a spatial branch and how the Fourier branch enhances luminance in A and removes noise in P. Regarding the LRNet design (Fig. 3), it is not clear why FFT and Conv layers are paralleled. Further, do A_r and P_r have any supervision signals? Would the authors show some intermediate visualizations of A_r and P_r?
Regarding the dataset acquisition, it seems that images are taken in normal-light scenes, and low-light images are captured with under-exposures. In this case, it seems the noise would not be very severe (e.g., in Figure 5 it is hard to see noise). This makes this paper more related to under-exposure image enhancement but not low-light image enhancement. It would be better if the authors could provide more justifications for this issue. The dataset split is 2000 for training and 115 pairs for test (The train/test ratio is 20:1), which I think is extreme. It would be better if the authors could justify this.
A minor issue is that "Table 5" on page 8 should be "Table 4".
In Section A of the Appendix, I think it would be better if the authors can discuss how FFT is used in other low-level vision tasks, e.g., image deraining, denoising, and deblurring, and highlight the differences to the proposed method.
Clarity, Quality, Novelty And Reproducibility
The paper is generally written well and easy to read. The paper shows some interesting observations, based on which it presents a new method for ultra-high-resolution image enhancement. A new dataset is constructed and will be released to the public together with the code, as the authors promise. |
ICLR | Title
The Dark Side of AutoML: Towards Architectural Backdoor Search
Abstract
This paper asks the intriguing question: is it possible to exploit neural architecture search (NAS) as a new attack vector to launch previously improbable attacks? Specifically, we present EVAS, a new attack that leverages NAS to find neural architectures with inherent backdoors and exploits such vulnerability using input-aware triggers. Compared with existing attacks, EVAS demonstrates many interesting properties: (i) it does not require polluting training data or perturbing model parameters; (ii) it is agnostic to downstream fine-tuning or even re-training from scratch; (iii) it naturally evades defenses that rely on inspecting model parameters or training data. With extensive evaluation on benchmark datasets, we show that EVAS features high evasiveness, transferability, and robustness, thereby expanding the adversary’s design spectrum. We further characterize the mechanisms underlying EVAS, which are possibly explainable by architecture-level “shortcuts” that recognize trigger patterns. This work showcases that NAS can be exploited in a harmful way to find architectures with inherent backdoor vulnerability. The code is available at https://github.com/ain-soph/nas_backdoor.
1 INTRODUCTION
As a new paradigm of applying ML techniques in practice, automated machine learning (AutoML) automates the pipeline from raw data to deployable models, which covers model design, optimizer selection, and parameter tuning. The use of AutoML greatly simplifies the ML development cycles and propels the trend of ML democratization. In particular, neural architecture search (NAS), one primary AutoML task, aims to find performant deep neural network (DNN) arches1 tailored to given datasets. In many cases, NAS is shown to find models remarkably outperforming manually designed ones (Pham et al., 2018; Liu et al., 2019; Li et al., 2020).
In contrast to the intensive research on improving the capability of NAS, its security implications are largely unexplored. As ML models are becoming the new targets of malicious attacks (Biggio & Roli, 2018), the lack of understanding about the risks of NAS is highly concerning, given its surging popularity in security-sensitive domains (Pang et al., 2022). Towards bridging this striking gap, we pose the intriguing yet critical question:
Is it possible for the adversary to exploit NAS to launch previously improbable attacks?
This work provides an affirmative answer to this question. We present exploitable and vulnerable arch search (EVAS), a new backdoor attack that leverages NAS to find neural arches with inherent, exploitable vulnerability. Conventional backdoor attacks typically embed the malicious functions (“backdoors”) into the space of model parameters. They often assume strong threat models, such as polluting training data (Gu et al., 2017; Liu et al., 2018; Pang et al., 2020) or perturbing model parameters (Ji et al., 2018; Qi et al., 2022), and are thus subject to defenses based on model inspection (Wang et al., 2019; Liu et al., 2019) and data filtering (Gao et al., 2019). In EVAS, however, as the backdoors are carried in the space of model arches, even if the victim trains the models using clean data and operates them in a black-box manner, the backdoors are still retained. Moreover, due
1In the following, we use “arch” for short of “architecture”.
to its independence of model parameters or training data, EVAS is naturally robust against defenses such as model inspection and input filtering.
To realize EVAS, we define a novel metric based on neural tangent kernel (Chen et al., 2021), which effectively indicates the exploitable vulnerability of a given arch; further, we integrate this metric into the NAS-without-training framework (Mellor et al., 2021; Chen et al., 2021). The resulting search method is able to efficiently identify candidate arches without requiring model training or backdoor testing. To verify EVAS’s empirical effectiveness, we evaluate EVAS on benchmark datasets and show: (i) EVAS successfully finds arches with exploitable vulnerability, (ii) the injected backdoors may be explained by arch-level “shortcuts” that recognize trigger patterns, and (iii) EVAS demonstrates high evasiveness, transferability, and robustness against defenses. Our findings show the feasibility of exploiting NAS as a new attack vector to implement previously improbable attacks, raise concerns about the current practice of NAS in security-sensitive domains, and point to potential directions to develop effective mitigation.
2 RELATED WORK
Next, we survey the literature relevant to this work.
Neural arch search. The existing NAS methods can be categorized along search space, search strategy, and performance measure. Search space – early methods focus on the chain-of-layer structure (Baker et al., 2017), while recent work proposes to search for motifs of cell structures (Zoph et al., 2018; Pham et al., 2018; Liu et al., 2019). Search strategy – early methods rely on either random search (Jozefowicz et al., 2015) or Bayesian optimization (Bergstra et al., 2013), which are limited in model complexity; recent work mainly uses the approaches of reinforcement learning (Baker et al., 2017) or neural evolution (Liu et al., 2019). Performance measure – one-shot NAS has emerged as a popular performance measure. It considers all candidate arches as different sub-graphs of a super-net (i.e., the one-shot model) and shares weights between candidate arches (Liu et al., 2019). Despite the intensive research on NAS, its security implications are largely unexplored. Recent work shows that NAS-generated models tend to be more vulnerable to various malicious attacks than manually designed ones (Pang et al., 2022; Devaguptapu et al., 2021). This work explores another dimension: whether it can be exploited as an attack vector to launch new attacks, which complements the existing studies on the security of NAS.
Backdoor attacks and defenses. Backdoor attacks inject malicious backdoors into the victim’s model during training and activate such backdoors at inference, which can be categorized along attack targets – input-specific (Shafahi et al., 2018), class-specific (Tang et al., 2020), or any-input (Gu et al., 2017), attack vectors – polluting training data (Liu et al., 2018) or releasing infected models (Ji et al., 2018), and optimization metrics – attack effectiveness (Pang et al., 2020), transferability (Yao et al., 2019), or attack evasiveness (Chen et al., 2017). To mitigate such threats, many defenses have also been proposed, which can be categorized according to their strategies (Pang et al., 2022): input filtering purges poisoning samples from training data (Tran et al., 2018); model inspection determines whether a given model is backdoored (Liu et al., 2019; Wang et al., 2019), and input inspection detects trigger inputs at inference time (Gao et al., 2019). Most attacks and defenses above focus on backdoors implemented in the space of model parameters. Concurrent to this work, Bober-Irizar et al. (2022) explore using neural arches to implement backdoors by manually designing “trigger detectors” in the arches and activating such detectors using poisoning data during training. This work investigates using NAS to directly search for arches with exploitable vulnerability, which represents a new direction of backdoor attacks.
3 EVAS
Next, we present EVAS, a new backdoor attack leveraging NAS to find neural arches with exploitable vulnerability. We begin by introducing the threat model.
3.1 THREAT MODEL
A backdoor attack injects a hidden malicious function (“backdoor”) into a target model (Pang et al., 2022). The backdoor is activated once a pre-defined condition (“trigger”) is present, while the model
behaves normally otherwise. In a predictive task, the backdoor is often defined as classifying a given input to a class desired by the adversary, while the trigger can be defined as a specific perturbation applied to the input. Formally, given input x and trigger r = (m, p) in which m is a mask and p is a pattern, the trigger-embedded input is defined as:
x̃ = x⊙ (1−m) + p⊙m (1)
Let f be the backdoor-infected model. The backdoor attack implies that for a given input-label pair (x, y), f(x) = y and f(x̃) = t with high probability, where t is the adversary’s target class.
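A small sketch of Eq. (1) in code form (tensor shapes are assumed to match the input, with mask values in [0, 1]):

import torch

def embed_trigger(x: torch.Tensor, mask: torch.Tensor, pattern: torch.Tensor) -> torch.Tensor:
    # x tilde = x * (1 - m) + p * m, elementwise (Eq. 1)
    return x * (1.0 - mask) + pattern * mask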
The conventional backdoor attacks typically follow two types of threat models: (i) the adversary directly trains a backdoor-embedded model, which is then released to and used by the victim user (Liu et al., 2018; Pang et al., 2020; Ji et al., 2018); or (ii) the adversary indirectly pollutes the training data or manipulates the training process (Gu et al., 2017; Qi et al., 2022) to inject the backdoor into the target model. As illustrated in Figure 1, in EVAS, we assume a more practical threat model in which the adversary only releases the exploitable arch to the user, who may choose to train the model from scratch using clean data or apply various defenses (e.g., model inspection or data filtering) before or during using the model. We believe this represents a more realistic setting: due to the prohibitive computational cost of NAS, users may opt to use performant model arches provided by third parties, which opens the door for the adversary to launch the EVAS attack.
However, realizing EVAS represents non-trivial challenges including (i) how to define the trigger patterns? (ii) how to define the exploitable, vulnerable arches? and (iii) how to search for such arches efficiently? Below we elaborate on each of these key questions.
3.2 INPUT-AWARE TRIGGERS
Most conventional backdoor attacks assume universal triggers: the same trigger is applied to all the inputs. However, universal triggers can be easily detected and mitigated by current defenses (Wang et al., 2019; Liu et al., 2019). Moreover, it is shown that implementing universal triggers at the arch level requires manually designing “trigger detectors” in the arches and activating such detectors using poisoning data during training (Bober-Irizar et al., 2022), which does not fit our threat model.
Instead, as illustrated in Figure 1, we adopt input-aware triggers (Nguyen & Tran, 2020), in which a trigger generator g (parameterized by ϑ) generates trigger rx specific to each input x. Compared with universal triggers, it is more challenging to detect or mitigate input-aware triggers. Interestingly, because of the modeling capacity of the trigger generator, it is more feasible to implement input-aware triggers at the arch level (details in § 4). For simplicity, below we use x̃ = g(x;ϑ) to denote both generating trigger rx for x and applying rx to x to generate the trigger-embedded input x̃.
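A hedged sketch of one possible input-aware trigger generator is shown below; the architecture is an assumption, as EVAS only requires some generator of this form.

import torch
import torch.nn as nn

class TriggerGenerator(nn.Module):
    """A small convolutional generator producing a per-input mask and pattern."""
    def __init__(self, ch=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 4, 3, padding=1),            # 3 pattern channels + 1 mask channel
        )

    def forward(self, x):
        out = self.body(x)
        pattern = torch.tanh(out[:, :3])               # input-specific pattern p_x
        mask = torch.sigmoid(out[:, 3:])               # input-specific mask m_x
        return x * (1.0 - mask) + pattern * mask       # trigger-embedded input (Eq. 1)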
3.3 EXPLOITABLE ARCHES
In EVAS, we aim to find arches with backdoors exploitable by the trigger generator, which we define as the following optimization problem.
Specifically, let α and θ respectively denote f ’s arch and model parameters. We define f ’s training as minimizing the following loss:
Ltrn(θ, α) ≜ E(x,y)∼D ℓ(fα(x; θ), y) (2)
where fα denotes the model with arch fixed as α and D is the underlying data distribution. As θ is dependent on α, we define:
θα ≜ argminθ Ltrn(θ, α) (3)
Further, we define the backdoor attack objective as:
Latk(α, ϑ) ≜ E(x,y)∼D [ℓ(fα(x; θα), y) + λ ℓ(fα(g(x;ϑ); θα), t)] (4)
where the first term specifies that f works normally on clean data, the second term specifies that f classifies trigger-embedded inputs to target class t, and the parameter λ balances the two factors. Note that we assume the testing data follows the same distribution D as the training data. Overall, we consider an arch α∗ to have exploitable vulnerability if it is possible to find a trigger generator ϑ∗ such that Latk(α∗, ϑ∗) is below a certain threshold.
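A minimal sketch of the per-batch attack objective in Eq. (4), assuming a trained model f_alpha and a trigger generator g:

import torch
import torch.nn.functional as F

def attack_loss(f_alpha, g, x, y, t: int, lam: float = 1.0):
    clean_loss = F.cross_entropy(f_alpha(x), y)               # utility on clean inputs
    target = torch.full_like(y, t)                             # adversary's target class t
    trigger_loss = F.cross_entropy(f_alpha(g(x)), target)      # misclassification of trigger inputs
    return clean_loss + lam * trigger_loss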
3.4 SEARCH WITHOUT TRAINING
Searching for exploitable archs by directly optimizing Eq. 4 is challenging: the nested optimization requires recomputing θ (i.e., re-training model f ) in Ltrn whenever α is updated; further, as α and ϑ are coupled in Latk, it requires re-training generator g once α is changed. Motivated by recent work (Mellor et al., 2021; Wu et al., 2021; Abdelfattah et al., 2021; Ning et al., 2021) on NAS using easy-to-compute metrics as proxies (without training), we present a novel method of searching for exploitable arches based on neural tangent kernel (NTK) (Jacot et al., 2018) without training the target model or trigger generator. Intuitively, NTK describes model training dynamics by gradient descent (Jacot et al., 2018; Chizat et al., 2019; Lee et al., 2019). In the limit of infinite-width DNNs, NTK becomes constant, which allows closed-form statements to be made about model training. Recent work (Chen et al., 2021; Mok et al., 2022) shows that NTK serves as an effective predictor of model “trainability” (i.e., how fast the model converges at early training stages). Formally, considering model f (parameterized by θ) mapping input x to a probability vector f(x; θ) (over different classes), the NTK is defined as the product of the Jacobian matrix:
Θ(x, θ) ≜ [∂f(x; θ)/∂θ] [∂f(x; θ)/∂θ]⊺ (5)
Let λmin (λmax) be the smallest (largest) eigenvalue of the empirical NTK Θ̂(θ) ≜ E(x,y)∼DΘ(x, θ). The condition number κ ≜ λmax/λmin serves as a metric to estimate model trainability (Chen et al., 2021), with a smaller conditional number indicating higher trainability.
In our context, we consider the trigger generator and the target model as an end-to-end model and measure the empirical NTK of the trigger generator under randomly initialized θ:
Θ̂(ϑ) ≜ E(x,y)∼D, θ∼Pθ◦ [∂f(g(x;ϑ); θ)/∂ϑ] [∂f(g(x;ϑ); θ)/∂ϑ]⊺ (6)
where Pθ◦ represents the initialization distribution of θ. Here, we emphasize that the measure should be independent of θ’s initialization.
Intuitively, Θ̂(ϑ) measures the trigger generator’s trainability with respect to a randomly initialized target model. The generator’s trainability indicates the ease of effectively generating input-aware triggers, implying the model’s vulnerability to input-aware backdoor attacks. To verify the hypothesis, on the CIFAR10 dataset with the generator configured as in Appendix § A, we measure Θ̂(ϑ) with respect to 900 randomly generated arches as well as the model accuracy (ACC) on clean inputs and the attack success rate (ASR) on trigger-embedded inputs. Specifically, for each arch α, we first train the model fα to measure ACC and then train the trigger generator g with respect to fα on the same dataset to measure ASR, with results shown in Figure 2. Observe that the condition number of Θ̂(ϑ) has a strong negative correlation with ASR, with a smaller value indicating higher attack vulnerability; meanwhile, it has a limited correlation with ACC, with most of the arches having ACC within the range from 80% to 95%.
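A hedged sketch of this score — the condition number of the generator's empirical NTK (Eq. 6) estimated on a small batch with a randomly initialized target model — is given below; using a single random initialization and a small batch are simplifications.

import torch

def generator_ntk_condition_number(f_alpha, g, x):
    """x: a small batch of inputs; f_alpha: randomly initialized target model; g: trigger generator."""
    params = [p for p in g.parameters() if p.requires_grad]
    logits = f_alpha(g(x)).flatten()                   # outputs of the end-to-end model
    rows = []
    for out in logits:                                 # Jacobian of outputs w.r.t. generator params
        grads = torch.autograd.grad(out, params, retain_graph=True)
        rows.append(torch.cat([v.reshape(-1) for v in grads]))
    jac = torch.stack(rows)                            # shape: (num_outputs, num_params)
    ntk = jac @ jac.t()                                # empirical NTK (Eq. 6)
    eigvals = torch.linalg.eigvalsh(ntk)               # ascending eigenvalues
    return (eigvals[-1] / eigvals[0]).item()           # condition number lambda_max / lambda_min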
Leveraging the insights above, we present a simple yet effective algorithm that searches for exploitable arches without training, which is a variant of regularized evolution (Real et al., 2019; Mellor et al., 2021). As sketched in Algorithm 1, it starts from a candidate pool A of n arches randomly sampled from a pre-defined arch space; at each iteration, it samples a subset A′ of m arches from A, randomly mutates the best candidate (i.e., with the lowest score), and replaces the oldest arch in A with this newly mutated arch. In our implementation, the score function is defined as the condition number of Eq. 6; the arch space is defined to be the NATS-Bench search space (Dong & Yang, 2020), which consists of 5 atomic operators {none, skip connect, conv 1× 1, conv 3× 3, and avg pooling 3× 3}; and the mutation function is defined to be randomly substituting one operator with another.
Algorithm 1: EVAS Attack
Input: n – pool size; m – sample size; score – score function; sample – subset sampling function; mutate – arch mutation function
Output: exploitable arch
 1: A, S, T ← [], [], []                          // candidate archs, scores, timestamps
 2: for i ← 1 to n do
 3:     A[i] ← randomly generated arch, S[i] ← score(A[i]), T[i] ← 0
 4: best ← 0
 5: while maximum iterations not reached yet do
 6:     i ← argmin_{k ∈ sample(A,m)} S[k]         // best candidate in the sample
 7:     j ← argmax_{k ∈ A} T[k]                   // oldest candidate
 8:     A[j] ← mutate(A[i])                       // mutate candidate
 9:     S[j] ← score(A[j])                        // update score
10:     T ← T + 1, T[j] ← 0                       // update timestamps
11:     if S[j] < S[best] then best ← j
12: return A[best]
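A compact Python rendering of Algorithm 1 follows; random_arch, mutate, and score are assumed callables, with score given, e.g., by the NTK condition number above (the best-scoring arch seen so far is kept explicitly).

import random

def evas_search(random_arch, mutate, score, n=64, m=8, iters=1000):
    pool = [random_arch() for _ in range(n)]       # candidate archs
    scores = [score(a) for a in pool]              # e.g., NTK condition numbers
    ages = [0] * n                                 # timestamps
    i0 = min(range(n), key=lambda k: scores[k])
    best_arch, best_score = pool[i0], scores[i0]
    for _ in range(iters):
        sample = random.sample(range(n), m)
        i = min(sample, key=lambda k: scores[k])   # best candidate in the sample
        j = max(range(n), key=lambda k: ages[k])   # oldest candidate in the pool
        pool[j] = mutate(pool[i])                  # replace the oldest with a mutation of the best
        scores[j] = score(pool[j])
        ages = [a + 1 for a in ages]
        ages[j] = 0
        if scores[j] < best_score:
            best_arch, best_score = pool[j], scores[j]
    return best_arch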
4 EVALUATION
We conduct an empirical evaluation of EVAS on benchmark datasets under various scenarios. The experiments are designed to answer the following key questions: (i) does it work? – we evaluate the performance and vulnerability of the arches identified by EVAS; (ii) how does it work? – we explore the dynamics of EVAS search as well as the characteristics of its identified arches; and (ii) how does it differ? – we compare EVAS with conventional backdoors in terms of attack evasiveness, transferability, and robustness.
4.1 EXPERIMENTAL SETTING
Datasets. In the evaluation, we primarily use three datasets that have been widely used to benchmark NAS methods (Chen et al., 2019; Li et al., 2020; Liu et al., 2019; Pham et al., 2018; Xie et al., 2019): CIFAR10 (Krizhevsky & Hinton, 2009), which consists of 32×32 color images drawn from 10 classes; CIFAR100, which is similar to CIFAR10 but includes 100 finer-grained classes; and
ImageNet16, which is a subset of the ImageNet dataset (Deng et al., 2009) down-sampled to images of size 16×16 in 120 classes. Search space. We consider the search space defined by NATS-Bench (Dong et al., 2021), which consists of 5 operators {none, skip connect, conv 1×1, conv 3×3, and avg pooling 3×3} defined among 4 nodes, implying a search space of 15,625 candidate arches.
Baselines. We compare the arches found by EVAS with ResNet18 (He et al., 2016), a manually designed arch. For completeness, we also include two arches randomly sampled from the NATS-Bench space, which are illustrated in Figure 3. By default, for each arch α, we assume the adversary trains a model fα and then trains the trigger generator g with respect to fα on the same dataset. We consider varying settings in which the victim directly uses fα, fine-tunes fα, or only uses α and re-trains it from scratch (details in § 4.4).
Metrics. We mainly use two metrics, attack success rate (ASR) and clean data accuracy (ACC). Intuitively, ASR is the target model’s accuracy in classifying trigger inputs to the adversary’s target class during inference, which measures the attack effectiveness, while ACC is the target model’s accuracy in correctly classifying clean inputs, which measures the attack evasiveness.
The default parameter setting and the trigger generator configuration are deferred to Appendix § A.
4.2 Q1: DOES EVAS WORK?
Figure 3 illustrates one sample arch identified by EVAS on the CIFAR10 dataset. We use this arch throughout this set of experiments to show that its vulnerability is at the arch level and universal across datasets. To measure the vulnerability of different arches, we first train each arch using clean data, then train a trigger generator specific to this arch, and finally measure its ASR and ACC.
Table 1 reports the results, from which we make the following observations. First, the ASR of EVAS is significantly higher than that of ResNet18 and the two random arches. For instance, on CIFAR10, EVAS is 21.8%, 28.3%, and 34.5% more effective than ResNet18, Random I, and Random II, respectively. Second, EVAS attains the highest ASR across all the datasets. Recall that we use the same arch throughout different datasets. This indicates that the attack vulnerability probably resides at the arch level and is insensitive to concrete datasets, which corroborates prior work on NAS: a performant arch found on one dataset often transfers across different datasets (Liu et al., 2019). This may be explained as follows. An arch α essentially defines a function family Fα, while a trained model fα(·; θ) is an instance of Fα and thereby carries the characteristics of Fα (e.g., being effective at extracting important features or being exploitable by a trigger generator). Third, all the arches show higher ASR on simpler datasets such as CIFAR10. This may be explained by the fact that more complex datasets (e.g., more classes, higher resolution) imply more intricate manifold structures, which may interfere with arch-level backdoors.
Table 1. Model performance on clean inputs (ACC) and attack performance on trigger-embedded inputs (ASR) of EVAS, ResNet18, and two random arches.
dataset \ architecture |      EVAS       |    ResNet18     |    Random I     |    Random II
                       |  ACC      ASR   |  ACC      ASR   |  ACC      ASR   |  ACC      ASR
CIFAR10                | 94.26%  81.51%  | 96.10%  59.73%  | 91.91%  53.21%  | 92.05%  47.04%
CIFAR100               | 71.54%  60.97%  | 78.10%  53.53%  | 67.09%  42.41%  | 67.15%  47.17%
ImageNet16             | 45.92%  55.83%  | 47.62%  42.28%  | 39.33%  37.45%  | 39.48%  32.15%
To understand the attack effectiveness of EVAS on individual inputs, we illustrate sample clean inputs and their trigger-embedded variants in Figure 4. Further, using GradCam (Selvaraju et al., 2017), we show the model’s interpretation of clean and trigger inputs with respect to their original and target
classes. Observe that the trigger pattern is specific to each input. Further, even though the two trigger inputs are classified into the same target class, the difference in their heatmaps shows that the model pays attention to distinct features, highlighting the effects of input-aware triggers.
4.3 Q2: HOW DOES EVAS WORK?
Next, we explore the dynamics of how EVAS searches for exploitable arches. For simplicity, given the arch identified by EVAS in Figure 3, we consider the set of candidate arches with the operators on the 0-3 (skip connect) and 0-1 (conv 3×3) connections replaced by others. We measure the ACC and ASR of all these candidate arches and illustrate the landscape of their scores in Figure 5. Observe that the exploitable arch features the lowest score among the surrounding arches, suggesting the existence of feasible mutation paths from random arches to reach exploitable arches following the direction of score descent.
Further, we ask the question: what makes the arches found by EVAS exploitable? Observe that the arch in Figure 3 uses the conv 1×1 and 3×3 operators on a number of connections. We thus generate arches by enumerating all the possible combinations of conv 1×1 and 3×3 on these connections and measure their performance, with results summarized in Appendix § B. Observe that while all these arches show high ASR, their vulnerability varies greatly from about 50% to 90%. We hypothesize that specific combinations of conv 1×1 and conv 3×3 create arch-level “shortcuts” for recognizing trigger patterns. We consider exploring the causal relationships between concrete arch characteristics and attack vulnerability as our ongoing work.
4.4 Q3: HOW DOES EVAS DIFFER?
To further understand the difference between EVAS and conventional backdoors, we compare the arches found by EVAS and other arches under various training and defense scenarios.
Fine-tuning with clean data. We first consider the scenario in which, with the trigger generator fixed, the target model is fine-tuned using clean data (with concrete setting deferred to Appendix § A).
Table 2 shows the results evaluated on CIFAR10 and CIFAR100. Observe that fine-tuning has a marginal impact on the ASR of all the arches. Take Random I as an example, compared with Table 1, its ASR on CIFAR10 drops only by 7.40% after fine-tuning. This suggests that the effectiveness of fine-tuning to defend against input-aware backdoor attacks may be limited.
Re-training from scratch. Another common scenario is that the victim user re-initializes the target model and re-trains it from scratch using clean data. We simulate this scenario as follows. After the trigger generator and target model are trained, we fix the generator, randomly initialize (using different seeds) the model, and train it on the given dataset. Table 3 compares different arches under this scenario. It is observed that EVAS significantly outperforms ResNet18 and random arches in terms of ASR (with comparable ACC). For instance, it is 33.4%, 24.9%, and 19.6% more effective than the other arches, respectively. This may be explained by two reasons. First, the arch-level backdoors in EVAS are inherently more agnostic to model re-training than the model-level backdoors in other arches. Second, in searching for exploitable arches, EVAS explicitly enforces that such vulnerability should be insensitive to model initialization (cf. Eq. 6). Further, observe that, as expected, re-training has a larger impact than fine-tuning on the ASR of different arches; however, it is still insufficient to mitigate input-aware backdoor attacks.
Fine-tuning with poisoning data. Further, we explore the setting in which the adversary is able to poison a tiny portion of the fine-tuning data, which assumes a stronger threat model. To simulate this scenario, we apply the trigger generator to generate trigger-embedded inputs and mix them with the clean fine-tuning data. Figure 6 illustrates the ASR and ACC of the target model as functions of the fraction of poisoning data in the fine-tuning dataset. Observe that even an extremely small poisoning ratio (e.g., 0.01%) significantly boosts the ASR (e.g., to 100%) while keeping the ACC unaffected. This indicates that arch-level backdoors can be greatly enhanced by combining with other attack vectors (e.g., data poisoning).
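As a sketch, this poisoning step amounts to relabeling a tiny, randomly chosen fraction of the fine-tuning samples after embedding their input-specific triggers; the data representation (a list of image-tensor/label pairs) and the default ratio below are illustrative assumptions, not our exact configuration.

import random
import torch

def poison_finetune_set(clean_samples, generator, target_class, ratio=1e-4):
    # clean_samples: list of (image tensor, label) pairs; a fraction `ratio` of them is
    # replaced by trigger-embedded inputs relabeled with the adversary's target class
    num_poison = max(1, int(ratio * len(clean_samples)))
    poison_idx = set(random.sample(range(len(clean_samples)), num_poison))
    finetune_set = []
    for i, (x, y) in enumerate(clean_samples):
        if i in poison_idx:
            with torch.no_grad():
                x = generator(x.unsqueeze(0)).squeeze(0)  # embed the input-specific trigger
            y = target_class
        finetune_set.append((x, y))
    return finetune_set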
Backdoor defenses. Finally, we evaluate EVAS against three categories of defenses, model inspection, input filtering, and model sanitization.
Model inspection determines whether a given model f is infected with backdoors. We use NeuralCleanse (Wang et al., 2019) as a representative defense. Intuitively, it searches for potential triggers in each class. If a class is trigger-embedded, the minimum perturbation required to change the predictions of inputs from other classes to this class is abnormally small. It detects anomalies using median absolute deviation (MAD), and all classes with MAD scores larger than 2 are regarded as infected. As shown in Table 4, the MAD scores of EVAS's target classes on all three datasets are below the threshold. This can be explained by the fact that NeuralCleanse is built upon the universal-trigger assumption, which does not hold for EVAS.
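The anomaly detection step reduces to a median-absolute-deviation test over the per-class norms of the reverse-engineered triggers, roughly as sketched below; the 1.4826 consistency constant and the threshold of 2 follow the original NeuralCleanse description, while the helper itself is a simplified stand-in for the full trigger reverse-engineering pipeline.

import numpy as np

def neural_cleanse_anomaly(trigger_norms, threshold=2.0):
    # trigger_norms[c]: L1 norm of the minimal trigger reverse-engineered for class c
    norms = np.asarray(trigger_norms, dtype=float)
    med = np.median(norms)
    mad = 1.4826 * np.median(np.abs(norms - med)) + 1e-12
    anomaly_index = np.abs(norms - med) / mad
    # flag classes whose minimal trigger is abnormally small
    return [c for c in range(len(norms)) if anomaly_index[c] > threshold and norms[c] < med]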
Input filtering detects at inference time whether an incoming input is embedded with a trigger. We use STRIP (Gao et al., 2019) as a representative defense in this category. It mixes a given input with a clean input and measures the self-entropy of its prediction. If the input is trigger-embedded, the mixture remains dominated by the trigger and tends to be misclassified, resulting in low self-entropy. However, as shown in Table 4, the AUROC scores of STRIP in classifying trigger-embedded inputs by EVAS are all close to random guess (i.e., 0.5). This can also be explained by the fact that EVAS uses input-aware triggers: each trigger only works for one specific input and has limited impact on others.
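A simplified sketch of the STRIP test is given below; the blending weight, the number of overlays, and the use of a raw entropy threshold are illustrative simplifications of the original defense, which calibrates its threshold on held-out clean data.

import torch
import torch.nn.functional as F

@torch.no_grad()
def strip_entropy(model, x, clean_pool, num_overlays=16, alpha=0.5):
    # superimpose x with randomly drawn clean inputs and average the prediction entropy;
    # inputs carrying a universal trigger keep their (mis)prediction, yielding low entropy
    entropies = []
    for _ in range(num_overlays):
        idx = torch.randint(len(clean_pool), (1,)).item()
        blended = alpha * x + (1 - alpha) * clean_pool[idx]
        probs = F.softmax(model(blended.unsqueeze(0)), dim=-1)
        entropies.append(-(probs * probs.clamp_min(1e-12).log()).sum().item())
    return sum(entropies) / len(entropies)  # flag the input if this falls below a threshold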
Model sanitization, before using a given model, sanitizes it to mitigate potential backdoors, yet without explicitly detecting whether the model has been tampered with. We use Fine-Pruning (Liu et al., 2018) as a representative. It uses the property that backdoor attacks typically exploit spare model capacity. It thus prunes rarely-used neurons and then applies fine-tuning to defend against pruning-aware attacks. We apply Fine-Pruning on the EVAS and ResNet18 models from Table 1, with results shown in Table 5. Observe that Fine-Pruning has a limited impact on the ASR of EVAS (even less than on ResNet18). This may be explained as follows. The activation patterns of input-aware triggers are different from those of universal triggers, as each trigger may activate a different set of neurons. Moreover, the arch-level backdoors in EVAS may not concentrate on individual neurons but rather span the whole model structure.
5 CONCLUSION
This work studies the feasibility of exploiting NAS as an attack vector to launch previously improbable attacks. We present a new backdoor attack that leverages NAS to efficiently find neural network architectures with inherent, exploitable vulnerability. Such architecture-level backdoors demonstrate many interesting properties including evasiveness, transferability, and robustness, thereby greatly expanding the design spectrum for the adversary. We believe our findings raise concerns about the current practice of NAS in security-sensitive domains and point to potential directions to develop effective mitigation.
ACKNOWLEDGMENTS
We thank anonymous reviewers and shepherd for valuable feedback. This work is partially supported by the National Science Foundation under Grant No. 2212323, 2119331, 1951729, and 1953893. Any opinions, findings, and conclusions or recommendations are those of the authors and do not necessarily reflect the views of the National Science Foundation. S. Ji is partly supported by the National Key Research and Development Program of China under No. 2022YFB3102100, and NSFC under No. 62102360 and U1936215.
A EXPERIMENTAL SETTING
A.1 PARAMETER SETTING
Table 6 summarizes the default parameter setting.
A.2 GENERATOR ARCHITECTURE
Table 7 lists the architecture of the trigger generator.
B ADDITIONAL RESULTS
B.1 NTK OF TARGET MODEL
Here, we measure the NTK condition number of the target model f under random initialization, using the implementation of Chen et al. (2021), along with the corresponding ASR and ACC. Figure 7 shows their correlation. Observe that the NTK condition number is negatively correlated with ACC (Kendall's coefficient τ = −0.385) and has a very weak correlation with ASR (τ = 0.100), which is consistent with Chen et al. (2021).
The difference between Figure 2 and Figure 7 can be explained as follows. Figure 2 measures the NTK condition number κg of the trigger generator g (with respect to the randomly initialized target model f), which indicates g's trainability (or f's vulnerability). As backdoor attacks embed two functions (one classifying clean inputs and the other classifying trigger inputs) into the same model, there tends to exist a natural trade-off between ASR and ACC. Therefore, κg shows a negative correlation with ASR and a weak positive correlation with ACC. Meanwhile, Figure 7 measures the NTK condition number κf of the target model f, which indicates f's trainability. Therefore, κf shows a negative correlation with ACC but a very weak correlation with ASR.
B.2 ASR-ACC TRADE-OFF
Figure 8 shows the correlation between the ASR and ACC of sampled arches (with Kendall’s coefficient τ = −0.390). Intuitively, as backdoor attacks embed two functions (one classifying clean inputs and the other classifying trigger inputs) into the same model, there tends to exist a natural trade-off between ASR and ACC. This trade-off also implies that it is feasible to properly optimize ASR only to find performant but vulnerable arches.
Figure 8: Trade-off between model performance (ACC) and vulnerability (ASR). [Scatter plot of ASR (%) against ACC (%) for the sampled arches.]
B.3 INTERPRETABILITY VERSUS VULNERABILITY
To understand the possible correlation between the attack vulnerability of an arch α and its interpretability, we compare the interpretation of each model fα regarding 100 clean inputs using GradCam (Selvaraju et al., 2017). Figure 9 illustrates sample inputs and their interpretation by different models.
Further, to quantitatively measure the similarity of interpretation, we use the intersection-over-union (IoU) score, which is widely used in object detection to compare model predictions with ground-truth bounding boxes. Formally, the IoU score of a binary-valued heatmap m with respect to another map m′ is defined as their Jaccard similarity:
IoU(m) = |O(m) ∩ O(m′)| / |O(m) ∪ O(m′)| (7)
where O(m) denotes the set of non-zero elements in m. In our case, as the heatmap values are floating-point numbers, we first apply thresholding to binarize them. Figure 10 shows the average IoU score of each arch with respect to the others. Observe that (i) the arches generated by NAS (EVAS, Random I, and Random II) have more similar interpretations among themselves than with the manually designed arch (ResNet-18); (ii) the arch with high vulnerability (EVAS) is not significantly different from the arches with low vulnerability (Random I, II) in terms of interpretability.
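A direct implementation of Eq. 7, including the thresholding used to binarize the heatmaps, is sketched below; the threshold value is an illustrative choice rather than the one used in our experiments.

import numpy as np

def heatmap_iou(m, m_prime, threshold=0.5):
    # binarize the floating-point heatmaps, then compute their Jaccard similarity (Eq. 7)
    a = np.asarray(m) >= threshold
    b = np.asarray(m_prime) >= threshold
    union = np.logical_or(a, b).sum()
    if union == 0:
        return 1.0  # both heatmaps are empty after thresholding
    return float(np.logical_and(a, b).sum() / union)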
B.4 ABLATION OF ATTACK EVASIVENESS
The evasiveness of EVAS may be accounted for by (i) input-dependent triggers (Nguyen & Tran, 2020) and (ii) arch-level vulnerability. Here, we explore the contribution of input-dependent triggers to the attack evasiveness. We train the trigger generator with respect to different arches (EVAS, ResNet18, random arches) and run NeuralCleanse and STRIP to detect the attacks, with results summarized in Table 8. Observe that while the concrete measures vary, all the attacks have MAD scores below the threshold and AUROC scores close to random guessing, indicating that the input-dependent triggers mainly account for the attack evasiveness with respect to NeuralCleanse and STRIP.
B.5 IMPORTANCE OF CONV 1×1 AND CONV 3×3
We generate neighboring arches by enumerating all possible combinations of conv 1×1 and conv 3×3 on the connections of the arch identified by EVAS (“|{0} ∼ 0|+|{1} ∼ 0|{2} ∼ 1|+|skip_connect ∼ 0|{3} ∼ 1|{4} ∼ 2|”). The ASR and ACC of these arches are summarized in Table 9. | 1. What is the focus and contribution of the paper regarding NAS and backdooring?
2. What are the strengths of the proposed approach, particularly in its application of NTK?
3. What are the weaknesses of the paper, especially regarding the experimental setup and results?
4. Do you have any questions or concerns regarding the paper's methodology or conclusions?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper investigates using NAS to discover architectures that are easier to backdoor without requiring any poisoned training data (i.e., the setting is to just release the architecture and let the victim train it on clean data). A metric derived from NTK serves as the score, enabling efficient exploration of the search space. Experiments demonstrate that the approach can find architectures that are easier to backdoor across many datasets and that several existing defenses do not work (in part because the considered attacks are input-dependent).
Strengths And Weaknesses
Strengths:
The idea of the paper is interesting. It does seem plausible that NAS models will become more common in model sharing libraries, so preparing for this sort of attack vector seems prudent.
The experiment in Figure 2 nicely motivates Algorithm 1. NTK is used in an interesting way.
The results are promising and demonstrate that this is an interesting direction for future work.
Weaknesses:
The ASR is a bit low, especially in the retraining from scratch experiments (the most realistic setting). However, the discovered architecture does still improve over the other architectures, so this isn't a big weakness.
To what extent do the input-dependent triggers lead to the results on NC and STRIP? Other architectures are not shown, but they should be.
Questions:
I'm confused about the experimental setup in figure 2. Is it the case that the trigger generators are trained on the datasets, and then the networks are fine-tuned on the same datasets?
Clarity, Quality, Novelty And Reproducibility
The writing is very clear, and the paper is structured well. The novelty is more than sufficient. Reproducibility is sufficient. |
ICLR | Title
The Dark Side of AutoML: Towards Architectural Backdoor Search
Abstract
This paper asks the intriguing question: is it possible to exploit neural architecture search (NAS) as a new attack vector to launch previously improbable attacks? Specifically, we present EVAS, a new attack that leverages NAS to find neural architectures with inherent backdoors and exploits such vulnerability using input-aware triggers. Compared with existing attacks, EVAS demonstrates many interesting properties: (i) it does not require polluting training data or perturbing model parameters; (ii) it is agnostic to downstream fine-tuning or even re-training from scratch; (iii) it naturally evades defenses that rely on inspecting model parameters or training data. With extensive evaluation on benchmark datasets, we show that EVAS features high evasiveness, transferability, and robustness, thereby expanding the adversary’s design spectrum. We further characterize the mechanisms underlying EVAS, which are possibly explainable by architecture-level “shortcuts” that recognize trigger patterns. This work showcases that NAS can be exploited in a harmful way to find architectures with inherent backdoor vulnerability. The code is available at https://github.com/ain-soph/nas_backdoor.
1 INTRODUCTION
As a new paradigm of applying ML techniques in practice, automated machine learning (AutoML) automates the pipeline from raw data to deployable models, which covers model design, optimizer selection, and parameter tuning. The use of AutoML greatly simplifies the ML development cycles and propels the trend of ML democratization. In particular, neural architecture search (NAS), one primary AutoML task, aims to find performant deep neural network (DNN) arches1 tailored to given datasets. In many cases, NAS is shown to find models remarkably outperforming manually designed ones (Pham et al., 2018; Liu et al., 2019; Li et al., 2020).
In contrast to the intensive research on improving the capability of NAS, its security implications are largely unexplored. As ML models are becoming the new targets of malicious attacks (Biggio & Roli, 2018), the lack of understanding about the risks of NAS is highly concerning, given its surging popularity in security-sensitive domains (Pang et al., 2022). Towards bridging this striking gap, we pose the intriguing yet critical question:
Is it possible for the adversary to exploit NAS to launch previously improbable attacks?
This work provides an affirmative answer to this question. We present exploitable and vulnerable arch search (EVAS), a new backdoor attack that leverages NAS to find neural arches with inherent, exploitable vulnerability. Conventional backdoor attacks typically embed the malicious functions (“backdoors”) into the space of model parameters. They often assume strong threat models, such as polluting training data (Gu et al., 2017; Liu et al., 2018; Pang et al., 2020) or perturbing model parameters (Ji et al., 2018; Qi et al., 2022), and are thus subject to defenses based on model inspection (Wang et al., 2019; Liu et al., 2019) and data filtering (Gao et al., 2019). In EVAS, however, as the backdoors are carried in the space of model arches, even if the victim trains the models using clean data and operates them in a black-box manner, the backdoors are still retained. Moreover, due
1In the following, we use “arch” as shorthand for “architecture”.
to its independence of model parameters or training data, EVAS is naturally robust against defenses such as model inspection and input filtering.
To realize EVAS, we define a novel metric based on neural tangent kernel (Chen et al., 2021), which effectively indicates the exploitable vulnerability of a given arch; further, we integrate this metric into the NAS-without-training framework (Mellor et al., 2021; Chen et al., 2021). The resulting search method is able to efficiently identify candidate arches without requiring model training or backdoor testing. To verify EVAS’s empirical effectiveness, we evaluate EVAS on benchmark datasets and show: (i) EVAS successfully finds arches with exploitable vulnerability, (ii) the injected backdoors may be explained by arch-level “shortcuts” that recognize trigger patterns, and (iii) EVAS demonstrates high evasiveness, transferability, and robustness against defenses. Our findings show the feasibility of exploiting NAS as a new attack vector to implement previously improbable attacks, raise concerns about the current practice of NAS in security-sensitive domains, and point to potential directions to develop effective mitigation.
2 RELATED WORK
Next, we survey the literature relevant to this work.
Neural arch search. The existing NAS methods can be categorized along search space, search strategy, and performance measure. Search space – early methods focus on the chain-of-layer structure (Baker et al., 2017), while recent work proposes to search for motifs of cell structures (Zoph et al., 2018; Pham et al., 2018; Liu et al., 2019). Search strategy – early methods rely on either random search (Jozefowicz et al., 2015) or Bayesian optimization (Bergstra et al., 2013), which are limited in model complexity; recent work mainly uses the approaches of reinforcement learning (Baker et al., 2017) or neural evolution (Liu et al., 2019). Performance measure – one-shot NAS has emerged as a popular performance measure. It considers all candidate arches as different sub-graphs of a super-net (i.e., the one-shot model) and shares weights between candidate arches (Liu et al., 2019). Despite the intensive research on NAS, its security implications are largely unexplored. Recent work shows that NAS-generated models tend to be more vulnerable to various malicious attacks than manually designed ones (Pang et al., 2022; Devaguptapu et al., 2021). This work explores another dimension: whether it can be exploited as an attack vector to launch new attacks, which complements the existing studies on the security of NAS.
Backdoor attacks and defenses. Backdoor attacks inject malicious backdoors into the victim’s model during training and activate such backdoors at inference, which can be categorized along attack targets – input-specific (Shafahi et al., 2018), class-specific (Tang et al., 2020), or any-input (Gu et al., 2017), attack vectors – polluting training data (Liu et al., 2018) or releasing infected models (Ji et al., 2018), and optimization metrics – attack effectiveness (Pang et al., 2020), transferability (Yao et al., 2019), or attack evasiveness (Chen et al., 2017). To mitigate such threats, many defenses have also been proposed, which can be categorized according to their strategies (Pang et al., 2022): input filtering purges poisoning samples from training data (Tran et al., 2018); model inspection determines whether a given model is backdoored (Liu et al., 2019; Wang et al., 2019); and input inspection detects trigger inputs at inference time (Gao et al., 2019). Most attacks and defenses above focus on backdoors implemented in the space of model parameters. Concurrent to this work, Bober-Irizar et al. (2022) explore using neural arches to implement backdoors by manually designing “trigger detectors” in the arches and activating such detectors using poisoning data during training. This work investigates using NAS to directly search for arches with exploitable vulnerability, which represents a new direction of backdoor attacks.
3 EVAS
Next, we present EVAS, a new backdoor attack leveraging NAS to find neural arches with exploitable vulnerability. We begin by introducing the threat model.
3.1 THREAT MODEL
A backdoor attack injects a hidden malicious function (“backdoor”) into a target model (Pang et al., 2022). The backdoor is activated once a pre-defined condition (“trigger”) is present, while the model
behaves normally otherwise. In a predictive task, the backdoor is often defined as classifying a given input to a class desired by the adversary, while the trigger can be defined as a specific perturbation applied to the input. Formally, given input x and trigger r = (m, p) in which m is a mask and p is a pattern, the trigger-embedded input is defined as:
x̃ = x⊙ (1−m) + p⊙m (1) Let f be the backdoor-infected model. The backdoor attack implies that for given input-label pair (x, y), f(x) = y and f(x̃) = t with high probability, where t is the adversary’s target class.
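In code, Eq. 1 is a simple element-wise blend; a minimal sketch (assuming image tensors and a mask with values in [0, 1]) is:

def embed_trigger(x, mask, pattern):
    # Eq. 1: keep the input where the mask is 0 and paste the trigger pattern where it is 1
    return x * (1 - mask) + pattern * mask

For a universal trigger, mask and pattern are fixed across inputs; for the input-aware triggers adopted by EVAS (§ 3.2), they are produced per input by the generator g.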
The conventional backdoor attacks typically follow two types of threat models: (i) the adversary directly trains a backdoor-embedded model, which is then released to and used by the victim user (Liu et al., 2018; Pang et al., 2020; Ji et al., 2018); or (ii) the adversary indirectly pollutes the training data or manipulates the training process (Gu et al., 2017; Qi et al., 2022) to inject the backdoor into the target model. As illustrated in Figure 1, in EVAS, we assume a more practical threat model in which the adversary only releases the exploitable arch to the user, who may choose to train the model from scratch using clean data or apply various defenses (e.g., model inspection or data filtering) before or while using the model. We believe this represents a more realistic setting: due to the prohibitive computational cost of NAS, users may opt to use performant model arches provided by third parties, which opens the door for the adversary to launch the EVAS attack.
However, realizing EVAS presents non-trivial challenges: (i) how to define the trigger patterns? (ii) how to define exploitable, vulnerable arches? and (iii) how to search for such arches efficiently? Below we elaborate on each of these key questions.
3.2 INPUT-AWARE TRIGGERS
Most conventional backdoor attacks assume universal triggers: the same trigger is applied to all the inputs. However, universal triggers can be easily detected and mitigated by current defenses (Wang et al., 2019; Liu et al., 2019). Moreover, it is shown that implementing universal triggers at the arch level requires manually designing “trigger detectors” in the arches and activating such detectors using poisoning data during training (Bober-Irizar et al., 2022), which does not fit our threat model.
Instead, as illustrated in Figure 1, we adopt input-aware triggers (Nguyen & Tran, 2020), in which a trigger generator g (parameterized by ϑ) generates trigger rx specific to each input x. Compared with universal triggers, it is more challenging to detect or mitigate input-aware triggers. Interestingly, because of the modeling capacity of the trigger generator, it is more feasible to implement input-aware triggers at the arch level (details in § 4). For simplicity, below we use x̃ = g(x;ϑ) to denote both generating trigger rx for x and applying rx to x to generate the trigger-embedded input x̃.
3.3 EXPLOITABLE ARCHES
In EVAS, we aim to find arches with backdoors exploitable by the trigger generator, which we define as the following optimization problem.
Specifically, let α and θ respectively denote f ’s arch and model parameters. We define f ’s training as minimizing the following loss:
Ltrn(θ, α) ≜ E(x,y)∼Dℓ(fα(x; θ), y) (2) where fα denotes the model with arch fixed as α and D is the underlying data distribution. As θ is dependent on α, we define:
θα ≜ argmin_θ Ltrn(θ, α) (3) Further, we define the backdoor attack objective as:
Latk(α, ϑ) ≜ E(x,y)∼D [ℓ(fα(x; θα), y) + λℓ(fα(g(x;ϑ); θα), t)] (4) where the first term specifies that f works normally on clean data, the second term specifies that f classifies trigger-embedded inputs to target class t, and the parameter λ balances the two factors. Note that we assume the testing data follows the same distribution D as the training data. Overall, we consider an arch α∗ having exploitable vulnerability if it is possible to find a trigger generator ϑ∗, such that Latk(α∗, ϑ∗) is below a certain threshold.
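As a sketch, this objective corresponds to a weighted sum of two cross-entropy terms (a hypothetical PyTorch fragment; model, generator, lam, and target_class are placeholders):

import torch
import torch.nn.functional as F

def attack_loss(model, generator, x, y, target_class, lam=1.0):
    # Eq. 4, first term: the model should behave normally on clean inputs
    clean_loss = F.cross_entropy(model(x), y)
    # Eq. 4, second term: trigger-embedded inputs should be classified to the target class t
    t = torch.full_like(y, target_class)
    trigger_loss = F.cross_entropy(model(generator(x)), t)
    return clean_loss + lam * trigger_loss

Note that EVAS never optimizes this loss directly over α; it only defines what “exploitable” means, and § 3.4 describes how the nested optimization is avoided with a training-free proxy.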
3.4 SEARCH WITHOUT TRAINING
Searching for exploitable archs by directly optimizing Eq. 4 is challenging: the nested optimization requires recomputing θ (i.e., re-training model f ) in Ltrn whenever α is updated; further, as α and ϑ are coupled in Latk, it requires re-training generator g once α is changed. Motivated by recent work (Mellor et al., 2021; Wu et al., 2021; Abdelfattah et al., 2021; Ning et al., 2021) on NAS using easy-to-compute metrics as proxies (without training), we present a novel method of searching for exploitable arches based on neural tangent kernel (NTK) (Jacot et al., 2018) without training the target model or trigger generator. Intuitively, NTK describes model training dynamics by gradient descent (Jacot et al., 2018; Chizat et al., 2019; Lee et al., 2019). In the limit of infinite-width DNNs, NTK becomes constant, which allows closed-form statements to be made about model training. Recent work (Chen et al., 2021; Mok et al., 2022) shows that NTK serves as an effective predictor of model “trainability” (i.e., how fast the model converges at early training stages). Formally, considering model f (parameterized by θ) mapping input x to a probability vector f(x; θ) (over different classes), the NTK is defined as the product of the Jacobian matrix:
Θ(x, θ) ≜ [∂f(x; θ)/∂θ] [∂f(x; θ)/∂θ]⊺ (5)
Let λmin (λmax) be the smallest (largest) eigenvalue of the empirical NTK Θ̂(θ) ≜ E(x,y)∼D Θ(x, θ). The condition number κ ≜ λmax/λmin serves as a metric to estimate model trainability (Chen et al., 2021), with a smaller condition number indicating higher trainability.
In our context, we consider the trigger generator and the target model as an end-to-end model and measure the empirical NTK of the trigger generator under randomly initialized θ:
Θ̂(ϑ) ≜ E(x,y)∼D, θ∼Pθ◦ [∂f(g(x;ϑ); θ)/∂ϑ] [∂f(g(x;ϑ); θ)/∂ϑ]⊺ (6)
where Pθ◦ represents the initialization distribution of θ. Here, we emphasize that the measure should be independent of θ’s initialization.
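As a rough illustration, the condition number of Θ̂(ϑ) can be estimated as sketched below: the generator's empirical NTK is formed from per-output gradients on a small batch, with the target model re-initialized several times; make_model, the batch x, and the number of re-initializations are hypothetical placeholders rather than our exact configuration.

import torch

def generator_ntk_condition_number(make_model, generator, x, num_inits=3):
    # Eq. 6: empirical NTK of the trigger generator, averaged over random
    # initializations of the target model f; returns its condition number
    params = [p for p in generator.parameters() if p.requires_grad]
    conds = []
    for _ in range(num_inits):
        f = make_model()                  # target model f_alpha with fresh random weights
        out = f(generator(x)).flatten()   # end-to-end outputs on trigger-embedded inputs
        rows = []
        for o in out:                     # one Jacobian row per output coordinate
            grads = torch.autograd.grad(o, params, retain_graph=True)
            rows.append(torch.cat([g.reshape(-1) for g in grads]))
        jac = torch.stack(rows)           # (#outputs) x (#generator parameters)
        ntk = jac @ jac.t()               # empirical NTK Gram matrix
        eig = torch.linalg.eigvalsh(ntk)  # eigenvalues in ascending order
        conds.append((eig[-1] / eig[0].clamp_min(1e-12)).item())
    return sum(conds) / len(conds)        # smaller values indicate a more trainable generator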
Intuitively, Θ̂(ϑ) measures the trigger generator’s trainability with respect to a randomly initialized target model. The generator’s trainability indicates how easily effective input-aware triggers can be generated, which in turn implies the model’s vulnerability to input-aware backdoor attacks. To verify this hypothesis, on the CIFAR10 dataset with the generator configured as in Appendix § A, we measure Θ̂(ϑ) with respect to 900 randomly generated arches as well as the model accuracy (ACC) on clean inputs and the attack success rate (ASR) on trigger-embedded inputs. Specifically, for each arch α, we first train the model fα to measure ACC and then train the trigger generator g with respect to fα on the same dataset to measure ASR, with results shown in Figure 2. Observe that the condition number of Θ̂(ϑ) has a strong negative correlation with ASR, with a smaller value indicating higher attack vulnerability; meanwhile, it has a limited correlation with ACC, with most of the arches having ACC within the range from 80% to 95%.
Leveraging the insights above, we present a simple yet effective algorithm that searches for exploitable arches without training, which is a variant of regularized evolution (Real et al., 2019; Mellor et al., 2021). As sketched in Algorithm 1, it starts from a candidate pool A of n arches randomly sampled from a pre-defined arch space; at each iteration, it samples a subset A′ of m arches from A, randomly mutates the best candidate (i.e., with the lowest score), and replaces the oldest arch in A with this newly mutated arch. In our implementation, the score function is defined as the condition number of Eq. 6; the arch space is defined to be the NATS-Bench search space (Dong & Yang, 2020), which consists of 5 atomic operators {none, skip connect, conv 1× 1, conv 3× 3, and avg pooling 3× 3}; and the mutation function is defined to be randomly substituting one operator with another.
Algorithm 1: EVAS Attack
Input: n – pool size; m – sample size; score – score function; sample – subset sampling function; mutate – arch mutation function
Output: exploitable arch
1  A, S, T ← [], [], []                      // candidate arches, scores, timestamps
2  for i ← 1 to n do
3      A[i] ← randomly generated arch, S[i] ← score(A[i]), T[i] ← 0
4  best ← 0
5  while maximum iterations not reached yet do
6      i ← argmin_{k ∈ sample(A, m)} S[k]    // best candidate in the sampled subset
7      j ← argmax_{k} T[k]                   // oldest candidate
8      A[j] ← mutate(A[i])                   // mutate candidate
9      S[j] ← score(A[j])                    // update score
10     T ← T + 1, T[j] ← 0                   // update timestamps
11     if S[j] < S[best] then best ← j
12 return A[best]
4 EVALUATION
We conduct an empirical evaluation of EVAS on benchmark datasets under various scenarios. The experiments are designed to answer the following key questions: (i) does it work? – we evaluate the performance and vulnerability of the arches identified by EVAS; (ii) how does it work? – we explore the dynamics of EVAS search as well as the characteristics of its identified arches; and (iii) how does it differ? – we compare EVAS with conventional backdoors in terms of attack evasiveness, transferability, and robustness.
4.1 EXPERIMENTAL SETTING
Datasets. In the evaluation, we primarily use three datasets that have been widely used to benchmark NAS methods (Chen et al., 2019; Li et al., 2020; Liu et al., 2019; Pham et al., 2018; Xie et al., 2019): CIFAR10 (Krizhevsky & Hinton, 2009), which consists of 32×32 color images drawn from 10 classes; CIFAR100, which is similar to CIFAR10 but includes 100 finer-grained classes; and
ImageNet16, which is a subset of the ImageNet dataset (Deng et al., 2009) down-sampled to images of size 16×16 in 120 classes.
Search space. We consider the search space defined by NATS-Bench (Dong et al., 2021), which consists of 5 operators {none, skip connect, conv 1×1, conv 3×3, and avg pooling 3×3} defined among 4 nodes, implying a search space of 15,625 candidate arches.
Baselines. We compare the arches found by EVAS with ResNet18 (He et al., 2016), a manually designed arch. For completeness, we also include two arches randomly sampled from the NATS-Bench space, which are illustrated in Figure 3. By default, for each arch α, we assume the adversary trains a model fα and then trains the trigger generator g with respect to fα on the same dataset. We consider varying settings in which the victim directly uses fα, fine-tunes fα, or only uses α and re-trains it from scratch (details in § 4.4).
Metrics. We mainly use two metrics, attack success rate (ASR) and clean data accuracy (ACC). Intuitively, ASR is the target model’s accuracy in classifying trigger inputs to the adversary’s target class during inference, which measures the attack effectiveness, while ACC is the target model’s accuracy in correctly classifying clean inputs, which measures the attack evasiveness.
The default parameter setting and the trigger generator configuration are deferred to Appendix § A.
4.2 Q1: DOES EVAS WORK?
Figure 3 illustrates one sample arch identified by EVAS on the CIFAR10 dataset. We use this arch throughout this set of experiments to show that its vulnerability is at the arch level and universal across datasets. To measure the vulnerability of different arches, we first train each arch using clean data, then train a trigger generator specific to this arch, and finally measure its ASR and ACC.
Table 1 reports the results, from which we make the following observations. First, the ASR of EVAS is significantly higher than that of ResNet18 and the two random arches. For instance, on CIFAR10, EVAS is 21.8%, 28.3%, and 34.5% more effective than ResNet18, Random I, and Random II, respectively. Second, EVAS attains the highest ASR across all the datasets. Recall that we use the same arch throughout different datasets. This indicates that the attack vulnerability probably resides at the arch level and is insensitive to concrete datasets, which corroborates prior work on NAS: a performant arch found on one dataset often transfers across different datasets (Liu et al., 2019). This may be explained as follows. An arch α essentially defines a function family Fα, while a trained model fα(·; θ) is an instance of Fα and thereby carries the characteristics of Fα (e.g., being effective at extracting important features or being exploitable by a trigger generator). Third, all the arches show higher ASR on simpler datasets such as CIFAR10. This may be explained by the fact that more complex datasets (e.g., more classes, higher resolution) imply more intricate manifold structures, which may interfere with arch-level backdoors.
Table 1. Model performance on clean inputs (ACC) and attack performance on trigger-embedded inputs (ASR) of EVAS, ResNet18, and two random arches.
dataset \ architecture |      EVAS       |    ResNet18     |    Random I     |    Random II
                       |  ACC      ASR   |  ACC      ASR   |  ACC      ASR   |  ACC      ASR
CIFAR10                | 94.26%  81.51%  | 96.10%  59.73%  | 91.91%  53.21%  | 92.05%  47.04%
CIFAR100               | 71.54%  60.97%  | 78.10%  53.53%  | 67.09%  42.41%  | 67.15%  47.17%
ImageNet16             | 45.92%  55.83%  | 47.62%  42.28%  | 39.33%  37.45%  | 39.48%  32.15%
To understand the attack effectiveness of EVAS on individual inputs, we illustrate sample clean inputs and their trigger-embedded variants in Figure 4. Further, using GradCam (Selvaraju et al., 2017), we show the model’s interpretation of clean and trigger inputs with respect to their original and target
classes. Observe that the trigger pattern is specific to each input. Further, even though the two trigger inputs are classified into the same target class, the difference in their heatmaps shows that the model pays attention to distinct features, highlighting the effects of input-aware triggers.
4.3 Q2: HOW DOES EVAS WORK?
Next, we explore the dynamics of how EVAS searches for exploitable arches. For simplicity, given the arch identified by EVAS in Figure 3, we consider the set of candidate arches with the operators on the 0-3 (skip connect) and 0-1 (conv 3×3) connections replaced by others. We measure the ACC and ASR of all these candidate arches and illustrate the landscape of their scores in Figure 5. Observe that the exploitable arch features the lowest score among the surrounding arches, suggesting the existence of feasible mutation paths from random arches to reach exploitable arches following the direction of score descent.
Further, we ask the question: what makes the arches found by EVAS exploitable? Observe that the arch in Figure 3 uses the conv 1×1 and 3×3 operators on a number of connections. We thus generate arches by enumerating all the possible combinations of conv 1×1 and 3×3 on these connections and measure their performance, with results summarized in Appendix § B. Observe that while all these arches show high ASR, their vulnerability varies greatly from about 50% to 90%. We hypothesize that specific combinations of conv 1×1 and conv 3×3 create arch-level “shortcuts” for recognizing trigger patterns. We consider exploring the causal relationships between concrete arch characteristics and attack vulnerability as our ongoing work.
4.4 Q3: HOW DOES EVAS DIFFER?
To further understand the difference between EVAS and conventional backdoors, we compare the arches found by EVAS and other arches under various training and defense scenarios.
Fine-tuning with clean data. We first consider the scenario in which, with the trigger generator fixed, the target model is fine-tuned using clean data (with concrete setting deferred to Appendix § A).
Table 2 shows the results evaluated on CIFAR10 and CIFAR100. Observe that fine-tuning has a marginal impact on the ASR of all the arches. Take Random I as an example, compared with Table 1, its ASR on CIFAR10 drops only by 7.40% after fine-tuning. This suggests that the effectiveness of fine-tuning to defend against input-aware backdoor attacks may be limited.
Re-training from scratch. Another common scenario is that the victim user re-initializes the target model and re-trains it from scratch using clean data. We simulate this scenario as follows. After the trigger generator and target model are trained, we fix the generator, randomly initialize (using different seeds) the model, and train it on the given dataset. Table 3 compares different arches under this scenario. It is observed that EVAS significantly outperforms ResNet18 and random arches in terms of ASR (with comparable ACC). For instance, it is 33.4%, 24.9%, and 19.6% more effective than the other arches, respectively. This may be explained by two reasons. First, the arch-level backdoors in EVAS are inherently more agnostic to model re-training than the model-level backdoors in other arches. Second, in searching for exploitable arches, EVAS explicitly enforces that such vulnerability should be insensitive to model initialization (cf. Eq. 6). Further, observe that, as expected, re-training has a larger impact than fine-tuning on the ASR of different arches; however, it is still insufficient to mitigate input-aware backdoor attacks.
Fine-tuning with poisoning data. Further, we explore the setting in which the adversary is able to poison a tiny portion of the fine-tuning data, which assumes a stronger threat model. To simulate this scenario, we apply the trigger generator to generate trigger-embedded inputs and mix them with the clean fine-tuning data. Figure 6 illustrates the ASR and ACC of the target model as functions of the fraction of poisoning data in the fine-tuning dataset. Observe that even an extremely small poisoning ratio (e.g., 0.01%) significantly boosts the ASR (e.g., to 100%) while keeping the ACC unaffected. This indicates that arch-level backdoors can be greatly enhanced by combining with other attack vectors (e.g., data poisoning).
Backdoor defenses. Finally, we evaluate EVAS against three categories of defenses, model inspection, input filtering, and model sanitization.
Model inspection determines whether a given model f is infected with backdoors. We use NeuralCleanse (Wang et al., 2019) as a representative defense. Intuitively, it searches for potential triggers in each class. If a class is trigger-embedded, the minimum perturbation required to change the predictions of inputs from other classes to this class is abnormally small. It detects anomalies using median absolute deviation (MAD), and all classes with MAD scores larger than 2 are regarded as infected. As shown in Table 4, the MAD scores of EVAS's target classes on all three datasets are below the threshold. This can be explained by the fact that NeuralCleanse is built upon the universal-trigger assumption, which does not hold for EVAS.
Input filtering detects at inference time whether an incoming input is embedded with a trigger. We use STRIP (Gao et al., 2019) as a representative defense in this category. It mixes a given input with a clean input and measures the self-entropy of its prediction. If the input is trigger-embedded, the mixture remains dominated by the trigger and tends to be misclassified, resulting in low self-entropy. However, as shown in Table 4, the AUROC scores of STRIP in classifying trigger-embedded inputs by EVAS are all close to random guess (i.e., 0.5). This can also be explained by the fact that EVAS uses input-aware triggers: each trigger only works for one specific input and has limited impact on others.
Model sanitization, before using a given model, sanitizes it to mitigate potential backdoors, yet without explicitly detecting whether the model has been tampered with. We use Fine-Pruning (Liu et al., 2018) as a representative. It uses the property that backdoor attacks typically exploit spare model capacity. It thus prunes rarely-used neurons and then applies fine-tuning to defend against pruning-aware attacks. We apply Fine-Pruning on the EVAS and ResNet18 models from Table 1, with results shown in Table 5. Observe that Fine-Pruning has a limited impact on the ASR of EVAS (even less than on ResNet18). This may be explained as follows. The activation patterns of input-aware triggers are different from those of universal triggers, as each trigger may activate a different set of neurons. Moreover, the arch-level backdoors in EVAS may not concentrate on individual neurons but rather span the whole model structure.
5 CONCLUSION
This work studies the feasibility of exploiting NAS as an attack vector to launch previously improbable attacks. We present a new backdoor attack that leverages NAS to efficiently find neural network architectures with inherent, exploitable vulnerability. Such architecture-level backdoors demonstrate many interesting properties including evasiveness, transferability, and robustness, thereby greatly expanding the design spectrum for the adversary. We believe our findings raise concerns about the current practice of NAS in security-sensitive domains and point to potential directions to develop effective mitigation.
ACKNOWLEDGMENTS
We thank anonymous reviewers and shepherd for valuable feedback. This work is partially supported by the National Science Foundation under Grant No. 2212323, 2119331, 1951729, and 1953893. Any opinions, findings, and conclusions or recommendations are those of the authors and do not necessarily reflect the views of the National Science Foundation. S. Ji is partly supported by the National Key Research and Development Program of China under No. 2022YFB3102100, and NSFC under No. 62102360 and U1936215.
A EXPERIMENTAL SETTING
A.1 PARAMETER SETTING
Table 6 summarizes the default parameter setting.
A.2 GENERATOR ARCHITECTURE
Table 7 lists the architecture of the trigger generator.
B ADDITIONAL RESULTS
B.1 NTK OF TARGET MODEL
Here, we measure the NTK condition number of the target model f under random initialization, using the implementation of Chen et al. (2021), along with the corresponding ASR and ACC. Figure 7 shows their correlation. Observe that the NTK condition number is negatively correlated with ACC (Kendall's coefficient τ = −0.385) and has a very weak correlation with ASR (τ = 0.100), which is consistent with Chen et al. (2021).
The difference between Figure 2 and Figure 7 can be explained as follows. Figure 2 measures the NTK condition number κg of the trigger generator g (with respect to the randomly initialized target model f), which indicates g's trainability (or f's vulnerability). As backdoor attacks embed two functions (one classifying clean inputs and the other classifying trigger inputs) into the same model, there tends to exist a natural trade-off between ASR and ACC. Therefore, κg shows a negative correlation with ASR and a weak positive correlation with ACC. Meanwhile, Figure 7 measures the NTK condition number κf of the target model f, which indicates f's trainability. Therefore, κf shows a negative correlation with ACC but a very weak correlation with ASR.
B.2 ASR-ACC TRADE-OFF
Figure 8 shows the correlation between the ASR and ACC of sampled arches (with Kendall’s coefficient τ = −0.390). Intuitively, as backdoor attacks embed two functions (one classifying clean inputs and the other classifying trigger inputs) into the same model, there tends to exist a natural trade-off between ASR and ACC. This trade-off also implies that it is feasible to properly optimize ASR only to find performant but vulnerable arches.
Figure 8: Trade-off between model performance (ACC) and vulnerability (ASR). [Scatter plot of ASR (%) against ACC (%) for the sampled arches.]
B.3 INTERPRETABILITY VERSUS VULNERABILITY
To understand the possible correlation between the attack vulnerability of an arch α and its interpretability, we compare the interpretation of each model fα regarding 100 clean inputs using GradCam (Selvaraju et al., 2017). Figure 9 illustrates sample inputs and their interpretation by different models.
Further, to quantitatively measure the similarity of interpretation, we use the intersection-over-union (IoU) score, which is widely used in object detection to compare model predictions with ground-truth bounding boxes. Formally, the IoU score of a binary-valued heatmap m with respect to another map m′ is defined as their Jaccard similarity:
IoU(m) = |O(m) ∩ O(m′)| / |O(m) ∪ O(m′)| (7)
where O(m) denotes the set of non-zero elements in m. In our case, as the heatmap values are floating-point numbers, we first apply thresholding to binarize them. Figure 10 shows the average IoU score of each arch with respect to the others. Observe that (i) the arches generated by NAS (EVAS, Random I, and Random II) have more similar interpretations among themselves than with the manually designed arch (ResNet-18); (ii) the arch with high vulnerability (EVAS) is not significantly different from the arches with low vulnerability (Random I, II) in terms of interpretability.
B.4 ABLATION OF ATTACK EVASIVENESS
The evasiveness of EVAS may be accounted for by (i) input-dependent triggers (Nguyen & Tran, 2020) and (ii) arch-level vulnerability. Here, we explore the contribution of input-dependent triggers to the attack evasiveness. We train the trigger generator with respect to different arches (EVAS, ResNet18, random arches) and run NeuralCleanse and STRIP to detect the attacks, with results summarized in Table 8. Observe that while the concrete measures vary, all the attacks have MAD scores below the threshold and AUROC scores close to random guessing, indicating that the input-dependent triggers mainly account for the attack evasiveness with respect to NeuralCleanse and STRIP.
B.5 IMPORTANCE OF CONV 1×1 AND CONV 3×3
We generate neighboring arches by enumerating all possible combinations of conv 1×1 and conv 3×3 on the connections of the arch identified by EVAS (“|{0} ∼ 0|+|{1} ∼ 0|{2} ∼ 1|+|skip_connect ∼ 0|{3} ∼ 1|{4} ∼ 2|”). The ASR and ACC of these arches are summarized in Table 9. | 1. What is the focus of the paper regarding backdoor attacks in deep learning?
2. What are the strengths and weaknesses of the proposed approach, particularly in terms of its novelty and effectiveness?
3. Do you have any concerns or questions about the experimental results and ablation studies presented in the paper?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any suggestions or recommendations for improving the paper or its contributions? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper proposes one new way to conduct back door attacks: through neural architecture search to find architectures that are vulnerable to certain triggers. The proposed EVAS is unaware of training data and model parameters so naturally evades defenses that rely on training data filtering or model parameters inspection.
Strengths And Weaknesses
Strength:
I think this paper may be valuable to the community. Utilizing NAS as a way to launch attacks is quite interesting and the authors properly explain this by architecture-level “shortcuts” that recognize trigger patterns.
The paper is well-motivated and clearly written. Experiments and ablation studies are well performed.
Weakness:
The right figure of Figure 2 is mysterious to me. It seems to indicate that the model accuracy rises as the condition number of the NTK rises, which contradicts the result in [1]. I understand this is the NTK w.r.t. the trigger generator parameters instead of the model parameters, but still, I would like to see some more discussion on this phenomenon. BTW, I recommend reporting the Pearson correlation between ASR, ACC, and the condition number of the NTK.
It's also unclear why the architectures found on cifar-10 work well on multiple datasets. This is an interesting observation. More discussion on why there exists universal vulnerable architecture would further strengthen the paper.
The ingredients of EVAS are not that novel. The input-aware triggers, NTK-based zero-cost proxy as well as the searching algorithms are all proposed in previous works. The fact that certain architectures are more vulnerable is also elaborated in [2]. Nevertheless, this paper made good efforts to combine them all.
It is not clear to me if the parameters in g(.) will be trained. Will a trained g(.) be used to launch attack?
References: [1] W. Chen et al. Neural Architecture Search on ImageNet in Four GPU Hours: A Theoretically Inspired Perspective. ICLR 2021. [2] M. Bober-Irizar et al. Architectural Backdoors in Neural Networks. arXiv preprint arXiv:2206.07840.
Clarity, Quality, Novelty And Reproducibility
The paper is well-written, and the proposed method is somewhat novel. |
ICLR | Title
The Dark Side of AutoML: Towards Architectural Backdoor Search
Abstract
This paper asks the intriguing question: is it possible to exploit neural architecture search (NAS) as a new attack vector to launch previously improbable attacks? Specifically, we present EVAS, a new attack that leverages NAS to find neural architectures with inherent backdoors and exploits such vulnerability using input-aware triggers. Compared with existing attacks, EVAS demonstrates many interesting properties: (i) it does not require polluting training data or perturbing model parameters; (ii) it is agnostic to downstream fine-tuning or even re-training from scratch; (iii) it naturally evades defenses that rely on inspecting model parameters or training data. With extensive evaluation on benchmark datasets, we show that EVAS features high evasiveness, transferability, and robustness, thereby expanding the adversary’s design spectrum. We further characterize the mechanisms underlying EVAS, which are possibly explainable by architecture-level “shortcuts” that recognize trigger patterns. This work showcases that NAS can be exploited in a harmful way to find architectures with inherent backdoor vulnerability. The code is available at https://github.com/ain-soph/nas_backdoor.
1 INTRODUCTION
As a new paradigm of applying ML techniques in practice, automated machine learning (AutoML) automates the pipeline from raw data to deployable models, which covers model design, optimizer selection, and parameter tuning. The use of AutoML greatly simplifies the ML development cycles and propels the trend of ML democratization. In particular, neural architecture search (NAS), one primary AutoML task, aims to find performant deep neural network (DNN) arches1 tailored to given datasets. In many cases, NAS is shown to find models remarkably outperforming manually designed ones (Pham et al., 2018; Liu et al., 2019; Li et al., 2020).
In contrast to the intensive research on improving the capability of NAS, its security implications are largely unexplored. As ML models are becoming the new targets of malicious attacks (Biggio & Roli, 2018), the lack of understanding about the risks of NAS is highly concerning, given its surging popularity in security-sensitive domains (Pang et al., 2022). Towards bridging this striking gap, we pose the intriguing yet critical question:
Is it possible for the adversary to exploit NAS to launch previously improbable attacks?
This work provides an affirmative answer to this question. We present exploitable and vulnerable arch search (EVAS), a new backdoor attack that leverages NAS to find neural arches with inherent, exploitable vulnerability. Conventional backdoor attacks typically embed the malicious functions (“backdoors”) into the space of model parameters. They often assume strong threat models, such as polluting training data (Gu et al., 2017; Liu et al., 2018; Pang et al., 2020) or perturbing model parameters (Ji et al., 2018; Qi et al., 2022), and are thus subject to defenses based on model inspection (Wang et al., 2019; Liu et al., 2019) and data filtering (Gao et al., 2019). In EVAS, however, as the backdoors are carried in the space of model arches, even if the victim trains the models using clean data and operates them in a black-box manner, the backdoors are still retained. Moreover, due
1In the following, we use “arch” as shorthand for “architecture”.
to its independence of model parameters or training data, EVAS is naturally robust against defenses such as model inspection and input filtering.
To realize EVAS, we define a novel metric based on neural tangent kernel (Chen et al., 2021), which effectively indicates the exploitable vulnerability of a given arch; further, we integrate this metric into the NAS-without-training framework (Mellor et al., 2021; Chen et al., 2021). The resulting search method is able to efficiently identify candidate arches without requiring model training or backdoor testing. To verify EVAS’s empirical effectiveness, we evaluate EVAS on benchmark datasets and show: (i) EVAS successfully finds arches with exploitable vulnerability, (ii) the injected backdoors may be explained by arch-level “shortcuts” that recognize trigger patterns, and (iii) EVAS demonstrates high evasiveness, transferability, and robustness against defenses. Our findings show the feasibility of exploiting NAS as a new attack vector to implement previously improbable attacks, raise concerns about the current practice of NAS in security-sensitive domains, and point to potential directions to develop effective mitigation.
2 RELATED WORK
Next, we survey the literature relevant to this work.
Neural arch search. The existing NAS methods can be categorized along search space, search strategy, and performance measure. Search space – early methods focus on the chain-of-layer structure (Baker et al., 2017), while recent work proposes to search for motifs of cell structures (Zoph et al., 2018; Pham et al., 2018; Liu et al., 2019). Search strategy – early methods rely on either random search (Jozefowicz et al., 2015) or Bayesian optimization (Bergstra et al., 2013), which are limited in model complexity; recent work mainly uses the approaches of reinforcement learning (Baker et al., 2017) or neural evolution (Liu et al., 2019). Performance measure – one-shot NAS has emerged as a popular performance measure. It considers all candidate arches as different sub-graphs of a super-net (i.e., the one-shot model) and shares weights between candidate arches (Liu et al., 2019). Despite the intensive research on NAS, its security implications are largely unexplored. Recent work shows that NAS-generated models tend to be more vulnerable to various malicious attacks than manually designed ones (Pang et al., 2022; Devaguptapu et al., 2021). This work explores another dimension: whether it can be exploited as an attack vector to launch new attacks, which complements the existing studies on the security of NAS.
Backdoor attacks and defenses. Backdoor attacks inject malicious backdoors into the victim’s model during training and activate such backdoors at inference, which can be categorized along attack targets – input-specific (Shafahi et al., 2018), class-specific (Tang et al., 2020), or any-input (Gu et al., 2017), attack vectors – polluting training data (Liu et al., 2018) or releasing infected models (Ji et al., 2018), and optimization metrics – attack effectiveness (Pang et al., 2020), transferability (Yao et al., 2019), or attack evasiveness (Chen et al., 2017). To mitigate such threats, many defenses have also been proposed, which can be categorized according to their strategies (Pang et al., 2022): input filtering purges poisoning samples from training data (Tran et al., 2018); model inspection determines whether a given model is backdoored (Liu et al., 2019; Wang et al., 2019); and input inspection detects trigger inputs at inference time (Gao et al., 2019). Most attacks and defenses above focus on backdoors implemented in the space of model parameters. Concurrent to this work, Bober-Irizar et al. (2022) explore using neural arches to implement backdoors by manually designing “trigger detectors” in the arches and activating such detectors using poisoning data during training. This work investigates using NAS to directly search for arches with exploitable vulnerability, which represents a new direction of backdoor attacks.
3 EVAS
Next, we present EVAS, a new backdoor attack leveraging NAS to find neural arches with exploitable vulnerability. We begin by introducing the threat model.
3.1 THREAT MODEL
A backdoor attack injects a hidden malicious function (“backdoor”) into a target model (Pang et al., 2022). The backdoor is activated once a pre-defined condition (“trigger”) is present, while the model
behaves normally otherwise. In a predictive task, the backdoor is often defined as classifying a given input to a class desired by the adversary, while the trigger can be defined as a specific perturbation applied to the input. Formally, given input x and trigger r = (m, p) in which m is a mask and p is a pattern, the trigger-embedded input is defined as:
x̃ = x ⊙ (1 − m) + p ⊙ m    (1)
Let f be the backdoor-infected model. The backdoor attack implies that for a given input-label pair (x, y), f(x) = y and f(x̃) = t with high probability, where t is the adversary’s target class.
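For illustration, a minimal PyTorch sketch of Eq. 1 is given below; the tensor shapes, the corner mask, and the final clamping are assumptions made for the example rather than part of the formulation.

import torch

def apply_trigger(x: torch.Tensor, m: torch.Tensor, p: torch.Tensor) -> torch.Tensor:
    """Blend a trigger pattern p into input x according to mask m (Eq. 1).

    x: clean image batch, shape (B, C, H, W), values in [0, 1]
    m: mask in [0, 1], broadcastable to x (e.g., (1, 1, H, W))
    p: trigger pattern, broadcastable to x
    """
    x_tilde = x * (1.0 - m) + p * m
    return x_tilde.clamp(0.0, 1.0)   # keep the result a valid image

# illustrative usage with random data
x = torch.rand(4, 3, 32, 32)                              # a batch of clean CIFAR-sized images
m = torch.zeros(1, 1, 32, 32); m[..., 28:, 28:] = 1.0     # a 4x4 corner mask (assumed)
p = torch.rand(1, 3, 32, 32)                              # an arbitrary pattern (assumed)
x_tilde = apply_trigger(x, m, p)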
The conventional backdoor attacks typically follow two types of threat models: (i) the adversary directly trains a backdoor-embedded model, which is then released to and used by the victim user (Liu et al., 2018; Pang et al., 2020; Ji et al., 2018); or (ii) the adversary indirectly pollutes the training data or manipulates the training process (Gu et al., 2017; Qi et al., 2022) to inject the backdoor into the target model. As illustrated in Figure 1, in EVAS, we assume a more practical threat model in which the adversary only releases the exploitable arch to the user, who may choose to train the model from scratch using clean data or apply various defenses (e.g., model inspection or data filtering) before or while using the model. We believe this represents a more realistic setting: due to the prohibitive computational cost of NAS, users may opt to use performant model arches provided by third parties, which opens the door for the adversary to launch the EVAS attack.
However, realizing EVAS presents non-trivial challenges, including (i) how to define the trigger patterns? (ii) how to define the exploitable, vulnerable arches? and (iii) how to search for such arches efficiently? Below we elaborate on each of these key questions.
3.2 INPUT-AWARE TRIGGERS
Most conventional backdoor attacks assume universal triggers: the same trigger is applied to all the inputs. However, universal triggers can be easily detected and mitigated by current defenses (Wang et al., 2019; Liu et al., 2019). Moreover, it is shown that implementing universal triggers at the arch level requires manually designing “trigger detectors” in the arches and activating such detectors using poisoning data during training (Bober-Irizar et al., 2022), which does not fit our threat model.
Instead, as illustrated in Figure 1, we adopt input-aware triggers (Nguyen & Tran, 2020), in which a trigger generator g (parameterized by ϑ) generates trigger rx specific to each input x. Compared with universal triggers, it is more challenging to detect or mitigate input-aware triggers. Interestingly, because of the modeling capacity of the trigger generator, it is more feasible to implement input-aware triggers at the arch level (details in § 4). For simplicity, below we use x̃ = g(x;ϑ) to denote both generating trigger rx for x and applying rx to x to generate the trigger-embedded input x̃.
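As an illustration only, a toy input-aware generator could be structured as in the sketch below; the layer sizes and output heads are placeholders and do not reflect the actual generator configuration, which is given in Appendix § A.

import torch
import torch.nn as nn

class TriggerGenerator(nn.Module):
    """Toy input-aware trigger generator g(x; vartheta): x -> (m, p) -> x_tilde."""
    def __init__(self, channels: int = 3):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
        )
        self.mask_head = nn.Conv2d(16, 1, 3, padding=1)             # per-pixel mask logits
        self.pattern_head = nn.Conv2d(16, channels, 3, padding=1)   # per-pixel pattern

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.body(x)
        m = torch.sigmoid(self.mask_head(h))       # mask in [0, 1], specific to x
        p = torch.sigmoid(self.pattern_head(h))    # pattern in [0, 1], specific to x
        return x * (1.0 - m) + p * m               # Eq. 1 applied with input-aware (m, p)

g = TriggerGenerator()
x = torch.rand(2, 3, 32, 32)
x_tilde = g(x)   # trigger-embedded inputs, one distinct trigger per input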
3.3 EXPLOITABLE ARCHES
In EVAS, we aim to find arches with backdoors exploitable by the trigger generator, which we define as the following optimization problem.
Specifically, let α and θ respectively denote f ’s arch and model parameters. We define f ’s training as minimizing the following loss:
Ltrn(θ, α) ≜ E(x,y)∼D ℓ(fα(x; θ), y)    (2)
where fα denotes the model with arch fixed as α and D is the underlying data distribution. As θ is dependent on α, we define:
θα ≜ argminθ Ltrn(θ, α)    (3)
Further, we define the backdoor attack objective as:
Latk(α, ϑ) ≜ E(x,y)∼D [ℓ(fα(x; θα), y) + λ ℓ(fα(g(x;ϑ); θα), t)]    (4)
where the first term specifies that f works normally on clean data, the second term specifies that f classifies trigger-embedded inputs to target class t, and the parameter λ balances the two factors. Note that we assume the testing data follows the same distribution D as the training data. Overall, we consider an arch α∗ having exploitable vulnerability if it is possible to find a trigger generator ϑ∗, such that Latk(α∗, ϑ∗) is below a certain threshold.
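A minimal sketch of evaluating Eq. 4 on one batch is shown below; the loss weight λ, the target class, and the use of cross-entropy for ℓ are assumptions made for the example. In practice, θα is obtained by ordinary training on clean data (Eqs. 2–3), after which only ϑ is optimized against this objective.

import torch
import torch.nn.functional as F

def attack_loss(model, generator, x, y, target_class: int, lam: float = 1.0):
    """Empirical estimate of L_atk (Eq. 4) on one batch.

    model:     trained target network f_alpha(.; theta_alpha)
    generator: input-aware trigger generator g(.; vartheta)
    """
    clean_term = F.cross_entropy(model(x), y)              # f behaves normally on clean data
    x_tilde = generator(x)                                  # trigger-embedded inputs
    t = torch.full_like(y, target_class)
    backdoor_term = F.cross_entropy(model(x_tilde), t)      # f maps triggered inputs to class t
    return clean_term + lam * backdoor_term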
3.4 SEARCH WITHOUT TRAINING
Searching for exploitable arches by directly optimizing Eq. 4 is challenging: the nested optimization requires recomputing θ (i.e., re-training model f) in Ltrn whenever α is updated; further, as α and ϑ are coupled in Latk, it requires re-training generator g once α is changed. Motivated by recent work (Mellor et al., 2021; Wu et al., 2021; Abdelfattah et al., 2021; Ning et al., 2021) on NAS using easy-to-compute metrics as proxies (without training), we present a novel method of searching for exploitable arches based on the neural tangent kernel (NTK) (Jacot et al., 2018) without training the target model or trigger generator. Intuitively, NTK describes model training dynamics by gradient descent (Jacot et al., 2018; Chizat et al., 2019; Lee et al., 2019). In the limit of infinite-width DNNs, NTK becomes constant, which allows closed-form statements to be made about model training. Recent work (Chen et al., 2021; Mok et al., 2022) shows that NTK serves as an effective predictor of model “trainability” (i.e., how fast the model converges at early training stages). Formally, considering model f (parameterized by θ) mapping input x to a probability vector f(x; θ) (over different classes), the NTK is defined as the product of the Jacobian matrix with its transpose:
Θ(x, θ) ≜ [∂f(x; θ)/∂θ] [∂f(x; θ)/∂θ]⊺    (5)
Let λmin (λmax) be the smallest (largest) eigenvalue of the empirical NTK Θ̂(θ) ≜ E(x,y)∼D Θ(x, θ). The condition number κ ≜ λmax/λmin serves as a metric to estimate model trainability (Chen et al., 2021), with a smaller condition number indicating higher trainability.
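For small models and batches, the condition number of the empirical NTK in Eq. 5 can be estimated directly from per-output Jacobian rows, as in the following sketch; the explicit Jacobian construction is only meant to be illustrative and does not scale to large networks.

import torch

def ntk_condition_number(model, x):
    """Condition number of the empirical NTK (Eq. 5) on a batch x."""
    params = [p for p in model.parameters() if p.requires_grad]
    out = model(x).flatten()                      # stack all outputs into one vector
    rows = []
    for o in out:                                 # one Jacobian row per output entry
        grads = torch.autograd.grad(o, params, retain_graph=True)
        rows.append(torch.cat([g.flatten() for g in grads]))
    jac = torch.stack(rows)                       # (num_outputs, num_params)
    ntk = jac @ jac.T                             # Theta(x, theta) = J J^T
    eig = torch.linalg.eigvalsh(ntk).clamp_min(1e-12)   # guard against numerical negatives
    return (eig[-1] / eig[0]).item()              # kappa = lambda_max / lambda_min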
In our context, we consider the trigger generator and the target model as an end-to-end model and measure the empirical NTK of the trigger generator under randomly initialized θ:
Θ̂(ϑ) ≜ E(x,y)∼D, θ∼Pθ◦ [∂f(g(x;ϑ); θ)/∂ϑ] [∂f(g(x;ϑ); θ)/∂ϑ]⊺    (6)
where Pθ◦ represents the initialization distribution of θ. Here, we emphasize that the measure should be independent of θ’s initialization.
Intuitively, Θ̂(ϑ) measures the trigger generator’s trainability with respect to a randomly initialized target model. The generator’s trainability indicates how easily effective input-aware triggers can be generated, implying the model’s vulnerability to input-aware backdoor attacks. To verify the hypothesis, on the CIFAR10 dataset with the generator configured as in Appendix § A, we measure Θ̂(ϑ) with respect to 900 randomly generated arches as well as the model accuracy (ACC) on clean inputs and the attack success rate (ASR) on trigger-embedded inputs. Specifically, for each arch α, we first train the model fα to measure ACC and then train the trigger generator g with respect to fα on the same dataset to measure ASR, with results shown in Figure 2. Observe that the condition number of Θ̂(ϑ) has a strong negative correlation with ASR, with a smaller value indicating higher attack vulnerability; meanwhile, it has a limited correlation with ACC, with most of the arches having ACC within the range from 80% to 95%.
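Analogously, a rough sketch of the score in Eq. 6 is given below, where the expectation over θ ∼ Pθ◦ is approximated by a few random re-initializations of the target model; the number of initializations and the batch size are assumptions. In the search algorithm that follows, this quantity plays the role of score(α): a smaller condition number indicates a more exploitable arch.

import torch

def generator_ntk_score(make_model, generator, x, num_inits: int = 3):
    """Condition number of the empirical generator NTK (Eq. 6).

    make_model: callable returning a freshly initialized target model f_alpha
    generator:  trigger generator g(.; vartheta); only its parameters enter the Jacobian
    """
    params = [p for p in generator.parameters() if p.requires_grad]
    ntk = None
    for _ in range(num_inits):                      # expectation over theta ~ P_theta0
        model = make_model()
        out = model(generator(x)).flatten()
        rows = []
        for o in out:
            grads = torch.autograd.grad(o, params, retain_graph=True)
            rows.append(torch.cat([g.flatten() for g in grads]))
        jac = torch.stack(rows)                     # d f(g(x; vartheta); theta) / d vartheta
        ntk = jac @ jac.T if ntk is None else ntk + jac @ jac.T
    ntk /= num_inits
    eig = torch.linalg.eigvalsh(ntk).clamp_min(1e-12)
    return (eig[-1] / eig[0]).item()                # smaller kappa => more exploitable arch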
Leveraging the insights above, we present a simple yet effective algorithm that searches for exploitable arches without training, which is a variant of regularized evolution (Real et al., 2019; Mellor et al., 2021). As sketched in Algorithm 1, it starts from a candidate pool A of n arches randomly sampled from a pre-defined arch space; at each iteration, it samples a subset A′ of m arches from A, randomly mutates the best candidate (i.e., with the lowest score), and replaces the oldest arch in A with this newly mutated arch. In our implementation, the score function is defined as the condition number of Eq. 6; the arch space is defined to be the NATS-Bench search space (Dong & Yang, 2020), which consists of 5 atomic operators {none, skip connect, conv 1× 1, conv 3× 3, and avg pooling 3× 3}; and the mutation function is defined to be randomly substituting one operator with another.
Algorithm 1: EVAS Attack
Input: n – pool size; m – sample size; score – score function; sample – subset sampling function; mutate – arch mutation function
Output: exploitable arch
1   A, S, T = [], [], []                        // candidate arches, scores, timestamps
2   for i ← 1 to n do
3       A[i] ← randomly generated arch, S[i] ← score(A[i]), T[i] ← 0
4   best ← 0
5   while maximum iterations not reached yet do
6       i ← argmin_{k ∈ sample(A, m)} S[k]      // best candidate
7       j ← argmax_{k ∈ A} T[k]                 // oldest candidate
8       A[j] ← mutate(A[i])                     // mutate candidate
9       S[j] ← score(A[j])                      // update score
10      T ← T + 1, T[j] ← 0                     // update timestamps
11      if S[j] < S[best] then best ← j
12  return A[best]
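For readers who prefer code, a compact Python rendering of Algorithm 1 is sketched below; random_arch, mutate, and score stand for NATS-Bench arch sampling, operator substitution, and the Eq. 6 condition number, respectively, and are assumed to be supplied by the caller.

import random

def evas_search(random_arch, mutate, score, n=64, m=8, max_iters=1000):
    """Python rendering of Algorithm 1 (regularized evolution on NTK scores)."""
    pool = [random_arch() for _ in range(n)]        # candidate arches A
    scores = [score(a) for a in pool]               # S
    ages = [0] * n                                  # timestamps T
    best = 0                                        # index of the best arch seen so far
    for _ in range(max_iters):
        subset = random.sample(range(n), m)         # subset A' of m candidates
        i = min(subset, key=lambda k: scores[k])    # best (lowest-score) candidate
        j = max(range(n), key=lambda k: ages[k])    # oldest candidate
        pool[j] = mutate(pool[i])                   # replace oldest with a mutation of the best
        scores[j] = score(pool[j])
        ages = [a + 1 for a in ages]
        ages[j] = 0
        if scores[j] < scores[best]:
            best = j
    return pool[best]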
4 EVALUATION
We conduct an empirical evaluation of EVAS on benchmark datasets under various scenarios. The experiments are designed to answer the following key questions: (i) does it work? – we evaluate the performance and vulnerability of the arches identified by EVAS; (ii) how does it work? – we explore the dynamics of EVAS search as well as the characteristics of its identified arches; and (iii) how does it differ? – we compare EVAS with conventional backdoors in terms of attack evasiveness, transferability, and robustness.
4.1 EXPERIMENTAL SETTING
Datasets. In the evaluation, we primarily use three datasets that have been widely used to benchmark NAS methods (Chen et al., 2019; Li et al., 2020; Liu et al., 2019; Pham et al., 2018; Xie et al., 2019): CIFAR10 (Krizhevsky & Hinton, 2009), which consists of 32×32 color images drawn from 10 classes; CIFAR100, which is similar to CIFAR10 but includes 100 finer-grained classes; and
ImageNet16, which is a subset of the ImageNet dataset (Deng et al., 2009) down-sampled to images of size 16×16 in 120 classes.
Search space. We consider the search space defined by NATS-Bench (Dong et al., 2021), which consists of 5 operators {none, skip connect, conv 1×1, conv 3×3, and avg pooling 3×3} defined among 4 nodes, implying a search space of 15,625 candidate arches.
Baselines. We compare the arches found by EVAS with ResNet18 (He et al., 2016), a manually designed arch. For completeness, we also include two arches randomly sampled from the NATS-Bench space, which are illustrated in Figure 3. By default, for each arch α, we assume the adversary trains a model fα and then trains the trigger generator g with respect to fα on the same dataset. We consider varying settings in which the victim directly uses fα, fine-tunes fα, or only uses α and re-trains it from scratch (details in § 4.4).
Metrics. We mainly use two metrics, attack success rate (ASR) and clean data accuracy (ACC). Intuitively, ASR is the target model’s accuracy in classifying trigger inputs to the adversary’s target class during inference, which measures the attack effectiveness, while ACC is the target model’s accuracy in correctly classifying clean inputs, which measures the attack evasiveness.
The default parameter setting and the trigger generator configuration are deferred to Appendix § A.
4.2 Q1: DOES EVAS WORK?
Figure 3 illustrates one sample arch identified by EVAS on the CIFAR10 dataset. We use this arch throughout this set of experiments to show that its vulnerability is at the arch level and universal across datasets. To measure the vulnerability of different arches, we first train each arch using clean data, then train a trigger generator specific to this arch, and finally measure its ASR and ACC.
Table 1 reports the results. We have the following observations. First, the ASR of EVAS is significantly higher than that of ResNet18 and the other two random arches. For instance, on CIFAR10, EVAS is 21.8%, 28.3%, and 34.5% more effective than ResNet18, Random I, and Random II, respectively. Second, EVAS has the highest ASR across all the datasets. Recall that we use the same arch throughout different datasets. This indicates that the attack vulnerability probably resides at the arch level and is insensitive to concrete datasets, which corroborates prior work on NAS: one performant arch found on one dataset often transfers across different datasets (Liu et al., 2019). This may be explained as follows. An arch α essentially defines a function family Fα, while a trained model fα(·; θ) is an instance in Fα, thereby carrying the characteristics of Fα (e.g., being effective at extracting important features or exploitable by a trigger generator). Third, all the arches show higher ASR on simpler datasets such as CIFAR10. This may be explained by the fact that more complex datasets (e.g., more classes, higher resolution) imply more intricate manifold structures, which may interfere with arch-level backdoors.
Table 1. Model performance on clean inputs (ACC) and attack performance on trigger-embedded inputs (ASR) of EVAS, ResNet18, and two random arches.

dataset      |     EVAS        |    ResNet18     |    Random I     |    Random II
             |  ACC     ASR    |  ACC     ASR    |  ACC     ASR    |  ACC     ASR
CIFAR10      | 94.26%  81.51%  | 96.10%  59.73%  | 91.91%  53.21%  | 92.05%  47.04%
CIFAR100     | 71.54%  60.97%  | 78.10%  53.53%  | 67.09%  42.41%  | 67.15%  47.17%
ImageNet16   | 45.92%  55.83%  | 47.62%  42.28%  | 39.33%  37.45%  | 39.48%  32.15%
To understand the attack effectiveness of EVAS on individual inputs, we illustrate sample clean inputs and their trigger-embedded variants in Figure 4. Further, using GradCam (Selvaraju et al., 2017), we show the model’s interpretation of clean and trigger inputs with respect to their original and target
classes. Observe that the trigger pattern is specific to each input. Further, even though the two trigger inputs are classified into the same target class, the difference in their heatmaps shows that the model pays attention to distinct features, highlighting the effects of input-aware triggers.
4.3 Q2: HOW DOES EVAS WORK?
Next, we explore the dynamics of how EVAS searches for exploitable arches. For simplicity, given the arch identified by EVAS in Figure 3, we consider the set of candidate arches with the operators on the 0-3 (skip connect) and 0-1 (conv 3×3) connections replaced by others. We measure the ACC and ASR of all these candidate arches and illustrate the landscape of their scores in Figure 5. Observe that the exploitable arch features the lowest score among the surrounding arches, suggesting the existence of feasible mutation paths from random arches to reach exploitable arches following the direction of score descent.
Further, we ask the question: what makes the arches found by EVAS exploitable? Observe that the arch in Figure 3 uses the conv 1×1 and 3×3 operators on a number of connections. We thus generate arches by enumerating all the possible combinations of conv 1×1 and 3×3 on these connections and measure their performance, with results summarized in Appendix § B. Observe that while all these arches show high ASR, their vulnerability varies greatly from about 50% to 90%. We hypothesize that specific combinations of conv 1×1 and conv 3×3 create arch-level “shortcuts” for recognizing trigger patterns. We consider exploring the causal relationships between concrete arch characteristics and attack vulnerability as our ongoing work.
4.4 Q3: HOW DOES EVAS DIFFER?
To further understand the difference between EVAS and conventional backdoors, we compare the arches found by EVAS and other arches under various training and defense scenarios.
Fine-tuning with clean data. We first consider the scenario in which, with the trigger generator fixed, the target model is fine-tuned using clean data (with concrete setting deferred to Appendix § A).
Table 2 shows the results evaluated on CIFAR10 and CIFAR100. Observe that fine-tuning has a marginal impact on the ASR of all the arches. Take Random I as an example, compared with Table 1, its ASR on CIFAR10 drops only by 7.40% after fine-tuning. This suggests that the effectiveness of fine-tuning to defend against input-aware backdoor attacks may be limited.
Re-training from scratch. Another common scenario is that the victim user re-initializes the target model and re-trains it from scratch using clean data. We simulate this scenario as follows. After the trigger generator and target model are trained, we fix the generator, randomly initialize (using different seeds) the model, and train it on the given dataset. Table 3 compares different arches under this scenario. It is observed that EVAS significantly outperforms ResNet18 and random arches in terms of ASR (with comparable ACC). For instance, it is 33.4%, 24.9%, and 19.6% more effective than the other arches respectively. This may be explained by two reasons. First, the arch-level backdoors in EVAS are inherently more agnostic to model re-training than the model-level backdoors in other arches. Second, in searching for exploitable arches, EVAS explicitly enforces that such vulnerability should be insensitive to model initialization (cf. Eq. 4). Further, observe that, as expected, re-training has a larger impact than fine-tuning on the ASR of different arches; however, it is still insufficient to mitigate input-aware backdoor attacks.
Fine-tuning with poisoning data. Further, we explore the setting in which the adversary is able to poison a tiny portion of the fine-tuning data, which assumes a stronger threat model. To simulate this scenario, we apply the trigger generator to generate trigger-embedded inputs and mix them with the clean fine-tuning data. Figure 6 illustrates the ASR and ACC of the target model as functions of the fraction of poisoning data in the fine-tuning dataset. Observe that, even with an extremely small poisoning ratio (e.g., 0.01%), it can significantly boost the ASR (e.g., 100%) while keeping the ACC unaffected. This indicates that arch-level backdoors can be greatly enhanced by combining with other attack vectors (e.g., data poisoning).
Backdoor defenses. Finally, we evaluate EVAS against three categories of defenses, model inspection, input filtering, and model sanitization.
Model inspection determines whether a given model f is infected with backdoors. We use NeuralCleanse (Wang et al., 2019) as a representative defense. Intuitively, it searches for potential triggers in each class. If a class is trigger-embedded, the minimum perturbation required to change the predictions of inputs from other classes to this class is abnormally small. It detects anomalies using median absolute deviation (MAD), and all classes with MAD scores larger than 2 are regarded as infected. As shown in Table 4, the MAD scores of EVAS’s target classes on three datasets are all below the threshold. This can be explained by the fact that NeuralCleanse is built upon the universal trigger assumption, which does not hold for EVAS.
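For reference, the MAD-based anomaly scoring described above can be sketched as follows; the 1.4826 consistency constant is the conventional choice, and the exact normalization used by NeuralCleanse may differ.

import numpy as np

def mad_anomaly_scores(trigger_norms):
    """Per-class anomaly index from the L1 norms of reverse-engineered triggers."""
    x = np.asarray(trigger_norms, dtype=float)
    med = np.median(x)
    mad = 1.4826 * np.median(np.abs(x - med)) + 1e-12   # consistency-scaled MAD
    return np.abs(x - med) / mad                        # classes scoring > 2 are flagged as infected

scores = mad_anomaly_scores([55.0, 60.2, 58.1, 12.3, 61.0])   # the fourth class would look infected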
Input filtering detects at inference time whether an incoming input is embedded with a trigger. We use STRIP (Gao et al., 2019) as a representative defense in this category. It mixes a given input with a clean input and measures the self-entropy of its prediction. If the input is trigger-embedded, the mixture remains dominated by the trigger and tends to be misclassified, resulting in low self-entropy. However, as shown in Table 4, the AUROC scores of STRIP in classifying trigger-embedded inputs by EVAS are all close to random guess (i.e., 0.5). This can also be explained by the fact that EVAS uses input-aware triggers, where each trigger only works for one specific input and has a limited impact on others.
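A simplified sketch of the STRIP-style check is given below; the blending weight, the number of overlays, and the use of simple convex blending are assumptions made for the example.

import torch

def strip_entropy(model, x, clean_pool, num_overlays: int = 16, alpha: float = 0.5):
    """Average prediction entropy of an input blended with random clean images.

    Low entropy across blends suggests a (universal) trigger dominates the prediction.
    x:          a single image tensor of shape (C, H, W)
    clean_pool: a list or tensor of held-out clean images with the same shape
    """
    entropies = []
    for _ in range(num_overlays):
        idx = torch.randint(len(clean_pool), (1,)).item()
        blend = alpha * x + (1.0 - alpha) * clean_pool[idx]      # superimpose a clean image
        probs = torch.softmax(model(blend.unsqueeze(0)), dim=-1)
        entropies.append(-(probs * probs.clamp_min(1e-12).log()).sum().item())
    return sum(entropies) / len(entropies)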
Model sanitization, before using a given model, sanitizes it to mitigate the potential backdoors, yet without explicitly detecting whether the model is tampered. We use Fine-Pruning (Liu et al., 2018) as a representative. It uses the property that the backdoor attack typically exploits spare model capacity. It thus prunes rarely-used neurons and then applies fine-tuning to defend against pruning-aware attacks. We apply Fine-Pruning on the EVAS and ResNet18 models from Table 1, with results shown in Table 5. Observe that Fine-Pruning has a limited impact on the ASR of EVAS (even less than ResNet18). This may be explained as follows. The activation patterns of input-aware triggers are different from that of universal triggers, as each trigger may activate a different set of neurons. Moreover, the arch-level backdoors in EVAS may not concentrate on individual neurons but span over the whole model structures.
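A bare-bones sketch of the pruning step underlying Fine-Pruning is shown below; the pruning ratio, the choice of layer, and the subsequent fine-tuning schedule (omitted here) are assumptions.

import torch

def prune_dormant_channels(conv, activations, prune_ratio: float = 0.2):
    """Zero out the output channels of `conv` with the lowest mean activation on clean data.

    conv:        an nn.Conv2d whose weights are pruned in place
    activations: feature maps of that layer on clean inputs, shape (N, C, H, W)
    """
    mean_act = activations.abs().mean(dim=(0, 2, 3))      # per-channel mean activation
    k = int(prune_ratio * mean_act.numel())
    dormant = torch.argsort(mean_act)[:k]                  # rarely-used channels
    with torch.no_grad():
        conv.weight[dormant] = 0.0
        if conv.bias is not None:
            conv.bias[dormant] = 0.0
    return dormant                                         # fine-tuning on clean data follows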
5 CONCLUSION
This work studies the feasibility of exploiting NAS as an attack vector to launch previously improbable attacks. We present a new backdoor attack that leverages NAS to efficiently find neural network architectures with inherent, exploitable vulnerability. Such architecture-level backdoors demonstrate many interesting properties including evasiveness, transferability, and robustness, thereby greatly expanding the design spectrum for the adversary. We believe our findings raise concerns about the current practice of NAS in security-sensitive domains and point to potential directions to develop effective mitigation.
ACKNOWLEDGMENTS
We thank anonymous reviewers and shepherd for valuable feedback. This work is partially supported by the National Science Foundation under Grant No. 2212323, 2119331, 1951729, and 1953893. Any opinions, findings, and conclusions or recommendations are those of the authors and do not necessarily reflect the views of the National Science Foundation. S. Ji is partly supported by the National Key Research and Development Program of China under No. 2022YFB3102100, and NSFC under No. 62102360 and U1936215.
A EXPERIMENTAL SETTING
A.1 PARAMETER SETTING
Table 6 summarizes the default parameter setting.
A.2 GENERATOR ARCHITECTURE
Table 7 lists the architecture of the trigger generator.
B ADDITIONAL RESULTS
B.1 NTK OF TARGET MODEL
Here, we measure the NTK condition number of the target model f under random initialization using the implementation of Chen et al. (2021) and its corresponding ASR and ACC. Figure 7 shows their correlation. Observe that the NTK condition number is negatively correlated with ACC (with Kendall’s coefficient τ = −0.385) and has a very weak correlation with ASR (with τ = 0.100), which is consistent with (Chen et al., 2021).
The difference between Figure 2 and Figure 7 can be explained as follows. Figure 2 measures the NTK condition number κg of the trigger generator g (with respect to the randomly initialized target model f), which indicates g’s trainability (or f’s vulnerability). As backdoor attacks embed two functions (one classifying clean inputs and the other classifying trigger inputs) into the same model, there tends to exist a natural trade-off between ASR and ACC. Therefore, κg shows a negative correlation with ASR and a weak positive correlation with ACC. Meanwhile, Figure 7 measures the NTK condition number κf of the target model f, which indicates f’s trainability. Therefore, κf shows a negative correlation with ACC but a very weak correlation with ASR.
B.2 ASR-ACC TRADE-OFF
Figure 8 shows the correlation between the ASR and ACC of sampled arches (with Kendall’s coefficient τ = −0.390). Intuitively, as backdoor attacks embed two functions (one classifying clean inputs and the other classifying trigger inputs) into the same model, there tends to exist a natural trade-off between ASR and ACC. This trade-off also implies that it is feasible to optimize ASR alone to find performant but vulnerable arches.
Figure 8: Trade-off between model performance (ACC) and vulnerability (ASR).
B.3 INTERPRETABILITY VERSUS VULNERABILITY
To understand the possible correlation between the attack vulnerability of an arch α and its interpretability, we compare the interpretation of each model fα regarding 100 clean inputs using GradCam (Selvaraju et al., 2017). Figure 9 illustrates sample inputs and their interpretation by different models.
Further, to quantitatively measure the similarity of interpretation, we use the intersection-over-union (IoU) score, which is widely used in object detection to compare model predictions with ground-truth bounding boxes. Formally, the IoU score of a binary-valued heatmap m with respect to another map m′ is defined as their Jaccard similarity:
IoU(m) = |O(m) ∩ O(m′)| / |O(m) ∪ O(m′)|    (7)
where O(m) denotes the set of non-zero elements in m. In our case, as the values of heatmaps are floating numbers, we first apply thresholding to binarize the values. Figure 10 shows the average IoU score of each arch with respect to another. Observe that (i) the arches generated by NAS (EVAS, Random I, and Random II) have more similar interpretability among themselves than manually
designed arches (ResNet-18); (ii) the arch with high vulnerability (EVAS) is not significantly different from the arch with low vulnerability (Random I, II) in terms of interpretability.
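A small sketch of the IoU computation in Eq. 7 for two GradCam heatmaps is given below; the binarization threshold is an assumption.

import numpy as np

def heatmap_iou(h1, h2, threshold: float = 0.5):
    """Jaccard similarity between the supports of two binarized heatmaps (Eq. 7)."""
    o1 = np.asarray(h1) >= threshold            # O(m): above-threshold elements after binarization
    o2 = np.asarray(h2) >= threshold
    union = np.logical_or(o1, o2).sum()
    if union == 0:
        return 1.0                              # two empty maps are treated as identical
    return np.logical_and(o1, o2).sum() / union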
B.4 ABLATION OF ATTACK EVASIVENESS
The evasiveness of EVAS may be accounted for by (i) input-dependent triggers (Nguyen & Tran, 2020) and (ii) arch-level vulnerability. Here, we explore the contribution of input-dependent triggers to the attack evasiveness. We train the trigger generator with respect to different arches (EVAS, ResNet18, random arches) and run NeuralCleanse and STRIP to detect the attacks, with results summarized in Table 8. Observe that while the concrete measures vary, all the attacks have MAD scores below the threshold and AUROC scores close to random guess, indicating that the input-dependent triggers mainly account for the attack evasiveness with respect to NeuralCleanse and STRIP.
B.5 IMPORTANCE OF CONV 1×1 AND CONV 3×3
We generate neighboring arches by enumerating all possible combinations of conv 1×1 and conv 3×3 on the connections of the arch identified by EVAS (“|{0} ∼ 0|+|{1} ∼ 0|{2} ∼ 1|+|skip_connect ∼ 0|{3} ∼ 1|{4} ∼ 2|”). The ASR and ACC of these arches are summarized in Table 9.
1. What is the focus of the paper regarding NAS and image classification?
2. What are the strengths and weaknesses of the proposed approach, particularly in terms of its novel direction and search tractability?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. Are there any questions regarding the search process, evaluation, and results that the reviewer would like answered?
Summary Of The Paper
The paper deals with using NAS to search for attackable (image) classification architectures/models and attack generators. The generated attacks have to be image-dependent to make it difficult to fix them. To make this problem tractable, the authors propose a proxy metric which does not require cumbersome training, building on the neural tangent kernel. The approach builds on the NATS-Bench search space and is evaluated for CIFAR10, CIFAR100, ImageNet. For the latter, the authors use an exemplary architecture which was found during their search process. The results focus on attack success rate (ASR) and clean data accuracy (ACC) for the found models, where a superior ASR with acceptable ACC is reported. The authors also report feature attributions and hypotheses for the properties of the attackable models. Lastly, the authors also show that the found model is sufficiently robust against available established defense mechanisms.
Strengths And Weaknesses
Strengths:
Novel and interesting direction of research using NAS to search for attackable models and respective generators
Elegant and sensible approach to make the search problem tractable via proxy metrics
Insightful discussion of the properties of the derived, representative model
Weaknesses:
Only a single result is evaluated and discussed. Do other models with close fitness reach similar quality? It would also be helpful to show a high-level result of the search process with selected (actually fitted/evaluated) results for ACC/ASR to further signal that the proxy metric and overall hypothesis of the approach are working.
The parameters of the search process are missing + how long was it evaluated for?
Clarity, Quality, Novelty And Reproducibility
Clarity: The paper clearly defines the goals, state of the art, challenges for finding suitable models and generators, as well as the taken approach. The evaluation is also well documented for the chosen architecture, although it remains unclear if the same properties hold for others.
Quality: The taken approach is well described and is adequate for reaching the search goal. It could be better argued why, for example, regularized evolution or the neural tangent kernel are chosen in particular. The class of approaches is clearly good, but the argumentation of individual design decisions could be improved.
Novelty: The approach of automatically searching for attackable architectures and models is novel, in that prior work exists only on manually crafting attackable architectures using trigger detectors.
Reproducibility: The authors report parameter configurations for the model architectures, but do not provide code. Reproducibility could be achieved for triggering the search process, but as no overview of the search results is given, it is hard to actually find the used architecture.
ICLR | Title
The Dark Side of AutoML: Towards Architectural Backdoor Search
Abstract
This paper asks the intriguing question: is it possible to exploit neural architecture search (NAS) as a new attack vector to launch previously improbable attacks? Specifically, we present EVAS, a new attack that leverages NAS to find neural architectures with inherent backdoors and exploits such vulnerability using input-aware triggers. Compared with existing attacks, EVAS demonstrates many interesting properties: (i) it does not require polluting training data or perturbing model parameters; (ii) it is agnostic to downstream fine-tuning or even re-training from scratch; (iii) it naturally evades defenses that rely on inspecting model parameters or training data. With extensive evaluation on benchmark datasets, we show that EVAS features high evasiveness, transferability, and robustness, thereby expanding the adversary’s design spectrum. We further characterize the mechanisms underlying EVAS, which are possibly explainable by architecture-level “shortcuts” that recognize trigger patterns. This work showcases that NAS can be exploited in a harmful way to find architectures with inherent backdoor vulnerability. The code is available at https://github.com/ain-soph/nas_backdoor.
1 INTRODUCTION
As a new paradigm of applying ML techniques in practice, automated machine learning (AutoML) automates the pipeline from raw data to deployable models, which covers model design, optimizer selection, and parameter tuning. The use of AutoML greatly simplifies the ML development cycles and propels the trend of ML democratization. In particular, neural architecture search (NAS), one primary AutoML task, aims to find performant deep neural network (DNN) arches1 tailored to given datasets. In many cases, NAS is shown to find models remarkably outperforming manually designed ones (Pham et al., 2018; Liu et al., 2019; Li et al., 2020).
In contrast to the intensive research on improving the capability of NAS, its security implications are largely unexplored. As ML models are becoming the new targets of malicious attacks (Biggio & Roli, 2018), the lack of understanding about the risks of NAS is highly concerning, given its surging popularity in security-sensitive domains (Pang et al., 2022). Towards bridging this striking gap, we pose the intriguing yet critical question:
Is it possible for the adversary to exploit NAS to launch previously improbable attacks?
This work provides an affirmative answer to this question. We present exploitable and vulnerable arch search (EVAS), a new backdoor attack that leverages NAS to find neural arches with inherent, exploitable vulnerability. Conventional backdoor attacks typically embed the malicious functions (“backdoors”) into the space of model parameters. They often assume strong threat models, such as polluting training data (Gu et al., 2017; Liu et al., 2018; Pang et al., 2020) or perturbing model parameters (Ji et al., 2018; Qi et al., 2022), and are thus subject to defenses based on model inspection (Wang et al., 2019; Liu et al., 2019) and data filtering (Gao et al., 2019). In EVAS, however, as the backdoors are carried in the space of model arches, even if the victim trains the models using clean data and operates them in a black-box manner, the backdoors are still retained. Moreover, due
1In the following, we use “arch” for short of “architecture”.
to its independence of model parameters or training data, EVAS is naturally robust against defenses such as model inspection and input filtering.
To realize EVAS, we define a novel metric based on neural tangent kernel (Chen et al., 2021), which effectively indicates the exploitable vulnerability of a given arch; further, we integrate this metric into the NAS-without-training framework (Mellor et al., 2021; Chen et al., 2021). The resulting search method is able to efficiently identify candidate arches without requiring model training or backdoor testing. To verify EVAS’s empirical effectiveness, we evaluate EVAS on benchmark datasets and show: (i) EVAS successfully finds arches with exploitable vulnerability, (ii) the injected backdoors may be explained by arch-level “shortcuts” that recognize trigger patterns, and (iii) EVAS demonstrates high evasiveness, transferability, and robustness against defenses. Our findings show the feasibility of exploiting NAS as a new attack vector to implement previously improbable attacks, raise concerns about the current practice of NAS in security-sensitive domains, and point to potential directions to develop effective mitigation.
2 RELATED WORK
Next, we survey the literature relevant to this work.
Neural arch search. The existing NAS methods can be categorized along search space, search strategy, and performance measure. Search space – early methods focus on the chain-of-layer structure (Baker et al., 2017), while recent work proposes to search for motifs of cell structures (Zoph et al., 2018; Pham et al., 2018; Liu et al., 2019). Search strategy – early methods rely on either random search (Jozefowicz et al., 2015) or Bayesian optimization (Bergstra et al., 2013), which are limited in model complexity; recent work mainly uses the approaches of reinforcement learning (Baker et al., 2017) or neural evolution (Liu et al., 2019). Performance measure – one-shot NAS has emerged as a popular performance measure. It considers all candidate arches as different sub-graphs of a super-net (i.e., the one-shot model) and shares weights between candidate arches (Liu et al., 2019). Despite the intensive research on NAS, its security implications are largely unexplored. Recent work shows that NAS-generated models tend to be more vulnerable to various malicious attacks than manually designed ones (Pang et al., 2022; Devaguptapu et al., 2021). This work explores another dimension: whether it can be exploited as an attack vector to launch new attacks, which complements the existing studies on the security of NAS.
Backdoor attacks and defenses. Backdoor attacks inject malicious backdoors into the victim’s model during training and activate such backdoors at inference, which can be categorized along attack targets – input-specific (Shafahi et al., 2018), class-specific (Tang et al., 2020), or any-input (Gu et al., 2017), attack vectors – polluting training data (Liu et al., 2018) or releasing infected models (Ji et al., 2018), and optimization metrics – attack effectiveness (Pang et al., 2020), transferability (Yao et al., 2019), or attack evasiveness(Chen et al., 2017). To mitigate such threats, many defenses have also been proposed, which can be categorized according to their strategies (Pang et al., 2022): input filtering purges poisoning samples from training data (Tran et al., 2018); model inspection determines whether a given model is backdoored(Liu et al., 2019; Wang et al., 2019), and input inspection detects trigger inputs at inference time (Gao et al., 2019). Most attacks and defenses above focus on backdoors implemented in the space of model parameters. Concurrent to this work, Bober-Irizar et al. (2022) explore using neural arches to implement backdoors by manually designing “trigger detectors” in the arches and activating such detectors using poisoning data during training. This work investigates using NAS to directly search for arches with exploitable vulnerability, which represents a new direction of backdoor attacks.
3 EVAS
Next, we present EVAS, a new backdoor attack leveraging NAS to find neural arches with exploitable vulnerability. We begin by introducing the threat model.
3.1 THREAT MODEL
A backdoor attack injects a hidden malicious function (“backdoor”) into a target model (Pang et al., 2022). The backdoor is activated once a pre-defined condition (“trigger”) is present, while the model
Input Trigger Input
Malicious Model Malfunction
behaves normally otherwise. In a predictive task, the backdoor is often defined as classifying a given input to a class desired by the adversary, while the trigger can be defined as a specific perturbation applied to the input. Formally, given input x and trigger r = (m, p) in which m is a mask and p is a pattern, the trigger-embedded input is defined as:
x̃ = x⊙ (1−m) + p⊙m (1) Let f be the backdoor-infected model. The backdoor attack implies that for given input-label pair (x, y), f(x) = y and f(x̃) = t with high probability, where t is the adversary’s target class.
The conventional backdoor attacks typically follow two types of threat models: (i) the adversary directly trains a backdoor-embedded model, which is then released to and used by the victim user (Liu et al., 2018; Pang et al., 2020; Ji et al., 2018); or (ii) the adversary indirectly pollutes the training data or manipulate the training process (Gu et al., 2017; Qi et al., 2022) to inject the backdoor into the target model. As illustrated in Figure 1, in EVAS, we assume a more practical threat model in which the adversary only releases the exploitable arch to the user, who may choose to train the model from scratch using clean data or apply various defenses (e.g., model inspection or data filtering) before or during using the model. We believe this represents a more realistic setting: due to the prohibitive computational cost of NAS, users may opt to use performant model arches provided by third parties, which opens the door for the adversary to launch the EVAS attack.
However, realizing EVAS represents non-trivial challenges including (i) how to define the trigger patterns? (ii) how to define the exploitable, vulnerable arches? and (iii) how to search for such arches efficiently? Below we elaborate on each of these key questions.
3.2 INPUT-AWARE TRIGGERS
Most conventional backdoor attacks assume universal triggers: the same trigger is applied to all the inputs. However, universal triggers can be easily detected and mitigated by current defenses (Wang et al., 2019; Liu et al., 2019). Moreover, it is shown that implementing universal triggers at the arch level requires manually designing “trigger detectors” in the arches and activating such detectors using poisoning data during training (Bober-Irizar et al., 2022), which does not fit our threat model.
Instead, as illustrated in Figure 1, we adopt input-aware triggers (Nguyen & Tran, 2020), in which a trigger generator g (parameterized by ϑ) generates trigger rx specific to each input x. Compared with universal triggers, it is more challenging to detect or mitigate input-aware triggers. Interestingly, because of the modeling capacity of the trigger generator, it is more feasible to implement input-aware triggers at the arch level (details in § 4). For simplicity, below we use x̃ = g(x;ϑ) to denote both generating trigger rx for x and applying rx to x to generate the trigger-embedded input x̃.
3.3 EXPLOITABLE ARCHES
In EVAS, we aim to find arches with backdoors exploitable by the trigger generator, which we define as the following optimization problem.
Specifically, let α and θ respectively denote f ’s arch and model parameters. We define f ’s training as minimizing the following loss:
Ltrn(θ, α) ≜ E(x,y)∼Dℓ(fα(x; θ), y) (2) where fα denotes the model with arch fixed as α and D is the underlying data distribution. As θ is dependent on α, we define:
θα ≜ argmin θ Ltrn(θ, α) (3) Further, we define the backdoor attack objective as:
Latk(α, ϑ) ≜ E(x,y)∼D [ℓ(fα(x; θα), y) + λℓ(fα(g(x;ϑ); θα), t)] (4) where the first term specifies that f works normally on clean data, the second term specifies that f classifies trigger-embedded inputs to target class t, and the parameter λ balances the two factors. Note that we assume the testing data follows the same distribution D as the training data. Overall, we consider an arch α∗ having exploitable vulnerability if it is possible to find a trigger generator ϑ∗, such that Latk(α∗, ϑ∗) is below a certain threshold.
3.4 SEARCH WITHOUT TRAINING
Searching for exploitable archs by directly optimizing Eq. 4 is challenging: the nested optimization requires recomputing θ (i.e., re-training model f ) in Ltrn whenever α is updated; further, as α and ϑ are coupled in Latk, it requires re-training generator g once α is changed. Motivated by recent work (Mellor et al., 2021; Wu et al., 2021; Abdelfattah et al., 2021; Ning et al., 2021) on NAS using easy-to-compute metrics as proxies (without training), we present a novel method of searching for exploitable arches based on neural tangent kernel (NTK) (Jacot et al., 2018) without training the target model or trigger generator. Intuitively, NTK describes model training dynamics by gradient descent (Jacot et al., 2018; Chizat et al., 2019; Lee et al., 2019). In the limit of infinite-width DNNs, NTK becomes constant, which allows closed-form statements to be made about model training. Recent work (Chen et al., 2021; Mok et al., 2022) shows that NTK serves as an effective predictor of model “trainability” (i.e., how fast the model converges at early training stages). Formally, considering model f (parameterized by θ) mapping input x to a probability vector f(x; θ) (over different classes), the NTK is defined as the product of the Jacobian matrix:
Θ(x, θ) ≜
[ ∂f(x; θ)
∂θ
] [ ∂f(x; θ)
∂θ
]⊺ (5)
Let λmin (λmax) be the smallest (largest) eigenvalue of the empirical NTK Θ̂(θ) ≜ E(x,y)∼DΘ(x, θ). The condition number κ ≜ λmax/λmin serves as a metric to estimate model trainability (Chen et al., 2021), with a smaller conditional number indicating higher trainability.
In our context, we consider the trigger generator and the target model as an end-to-end model and measure the empirical NTK of the trigger generator under randomly initialized θ:
Θ̂(ϑ) ≜ E(x,y)∼D,θ∼Pθ◦
[ ∂f(g(x;ϑ); θ)
∂ϑ
] [ ∂f(g(x;ϑ; θ)
∂ϑ
]⊺ (6)
where Pθ◦ represents the initialization distribution of θ. Here, we emphasize that the measure should be independent of θ’s initialization.
Intuitively, Θ̂(ϑ) measures the trigger generator’s trainability with respect to a randomly initialized target model. The generator’s trainability indicates the easiness of effectively generating input-aware triggers, implying the model’s vulnerability to input-aware backdoor attacks. To verify the hypothesis, on the CIFAR10 dataset with the generator configured as in Appendix § A, we measure Θ̂(ϑ) with respect to 900 randomly generated arches as well as the model accuracy (ACC) on clean inputs and the attack success rate (ASR) on trigger-embedded inputs. Specifically, for each arch α, we first train the model fα to measure ACC and then train the trigger generator g with respect to fα on the same dataset to measure ASR, with results shown in Figure 2. Observe that the conditional number of Θ̂(ϑ) has a strong negative correlation with ASR, with a smaller value indicating higher attack vulnerability; meanwhile, it has a limited correlation with ACC, with most of the arches having ACC within the range from 80% to 95%.
Leveraging the insights above, we present a simple yet effective algorithm that searches for exploitable arches without training, which is a variant of regularized evolution (Real et al., 2019; Mellor et al., 2021). As sketched in Algorithm 1, it starts from a candidate pool A of n arches randomly sampled from a pre-defined arch space; at each iteration, it samples a subset A′ of m arches from A, randomly mutates the best candidate (i.e., with the lowest score), and replaces the oldest arch in A with this newly mutated arch. In our implementation, the score function is defined as the condition number of Eq. 6; the arch space is defined to be the NATS-Bench search space (Dong & Yang, 2020), which consists of 5 atomic operators {none, skip connect, conv 1× 1, conv 3× 3, and avg pooling 3× 3}; and the mutation function is defined to be randomly substituting one operator with another.
Algorithm 1: EVAS Attack Input: n – pool size; m – sample size; score – score function; sample – subset sampling function; mutate
– arch mutation function; Output: exploitable arch
1 A,S, T = [], [], [] ; // candidate archs, scores, timestamps 2 for i← 1 to n do 3 A[i]← randomly generated arch, S[i]← score(A[i]), T [i]← 0; 4 best← 0; 5 while maximum iterations not reached yet do 6 i← argmink∈sample(A,m) S[k] ; // best candidate 7 j ← argmaxk∈A T [k] ; // oldest candidate 8 A[j]← mutate(A[i]) ; // mutate candidate 9 S[j]← score(A[j]) ; // update score
10 T ← T + 1, T [j]← 0 ; // update timestamps 11 if S[j] < S[best] then best← j; 12 return A[best];
4 EVALUATION
We conduct an empirical evaluation of EVAS on benchmark datasets under various scenarios. The experiments are designed to answer the following key questions: (i) does it work? – we evaluate the performance and vulnerability of the arches identified by EVAS; (ii) how does it work? – we explore the dynamics of EVAS search as well as the characteristics of its identified arches; and (ii) how does it differ? – we compare EVAS with conventional backdoors in terms of attack evasiveness, transferability, and robustness.
4.1 EXPERIMENTAL SETTING
Datasets. In the evaluation, we primarily use three datasets that have been widely used to benchmark NAS methods (Chen et al., 2019; Li et al., 2020; Liu et al., 2019; Pham et al., 2018; Xie et al., 2019): CIFAR10 (Krizhevsky & Hinton, 2009), which consists of 32×32 color images drawn from 10 classes; CIFAR100, which is similar to CIFAR10 but includes 100 finer-grained classes; and
ImageNet16, which is a subset of the ImageNet dataset (Deng et al., 2009) down-sampled to images of size 16×16 in 120 classes. Search space. We consider the search space defined by NATS-Bench ( Dong et al. (2021)), which consists of 5 operators {none, skip connect, conv 1× 1, conv 3× 3, and avg pooling 3× 3} defined among 4 nodes, implying a search space of 15,625 candidate arches.
Baselines. We compare the arches found by EVAS with ResNet18 (He et al., 2016), a manually designed arch. For completeness, we also include two arches randomly sampled from the NATSBench space, which are illustrated in Figure 3. By default, for each arch α, we assume the adversary trains a model fα and then trains the trigger generator g with respect to fα on the same dataset. We consider varying settings in which the victim directly uses fα, fine-tunes fα, or only uses α and re-trains it from scratch (details in § 4.4).
Metrics. We mainly use two metrics, attack success rate (ASR) and clean data accuracy (ACC). Intuitively, ASR is the target model’s accuracy in classifying trigger inputs to the adversary’s target class during inference, which measures the attack effectiveness, while ACC is the target model’s accuracy in correctly classifying clean inputs, which measures the attack evasiveness.
The default parameter setting and the trigger generator configuration are deferred to Appendix § A.
4.2 Q1: DOES EVAS WORK?
Figure 3 illustrates one sample arch identified by EVAS on the CIFAR10 dataset. We use this arch throughout this set of experiments to show that its vulnerability is at the arch level and universal across datasets. To measure the vulnerability of different arches, we first train each arch using clean data, then train a trigger generator specific to this arch, and finally measure its ASR and ACC.
Table 1 reports the results. We have the following observations. First, the ASR of EVAS is significantly higher than ResNet18 and the other two random arches. For instance, on CIFAR10, EVAS is 21.8%, 28.3%, and 34.5% more effective than ResNet18 and random arches, respectively. Second, EVAS has the highest ASR across all the datasets. Recall that we use the same arch throughout different datasets. This indicates that the attack vulnerability probably resides at the arch level and is insensitive to concrete datasets, which corroborates with prior work on NAS: one performant arch found on one dataset often transfers across different datasets (Liu et al., 2019). This may be explained as follows. An arch α essentially defines a function family Fα, while a trained model fα(·; θ) is an instance in Fα, thereby carrying the characteristics of Fα (e.g., effective to extract important features or exploitable by a trigger generator). Third, all the arches show higher ASR on simpler datasets such as CIFAR10. This may be explained by that more complex datasets (e.g., more classes, higher resolution) imply more intricate manifold structures, which may interfere with arch-level backdoors.
Table 1. Model performance on clean inputs (ACC) and attack performance on trigger-embedded inputs (ASR) of EVAS, ResNet18, and two random arches.
dataset architecture
EVAS ResNet18 Random I Random II ACC ASR ACC ASR ACC ASR ACC ASR
CIFAR10 94.26% 81.51% 96.10% 59.73% 91.91% 53.21% 92.05% 47.04% CIFAR100 71.54% 60.97% 78.10% 53.53% 67.09% 42.41% 67.15% 47.17%
ImageNet16 45.92% 55.83% 47.62% 42.28% 39.33% 37.45% 39.48% 32.15%
To understand the attack effectiveness of EVAS on individual inputs, we illustrate sample clean inputs and their trigger-embedded variants in Figure 4. Further, using GradCam (Selvaraju et al., 2017), we show the model’s interpretation of clean and trigger inputs with respect to their original and target
classes. Observe that the trigger pattern is specific to each input. Further, even though the two trigger inputs are classified into the same target class, the difference in their heatmaps shows that the model pays attention to distinct features, highlighting the effects of input-aware triggers.
4.3 Q2: HOW DOES EVAS WORK?
Next, we explore the dynamics of how EVAS searches for exploitable arches. For simplicity, given the arch identified by EVAS in Figure 3, we consider the set of candidate arches with the operators on the 0-3 (skip connect) and 0-1 (conv 3×3) connections replaced by others. We measure the ACC and ASR of all these candidate arches and illustrate the landscape of their scores in Figure 5. Observe that the exploitable arch features the lowest score among the surrounding arches, suggesting the existence of feasible mutation paths from random arches to reach exploitable arches following the direction of score descent.
Further, we ask the question: what makes the arches found by EVAS exploitable? Observe that the arch in Figure 3 uses the conv 1×1 and 3×3 operators on a number of connections. We thus generate arches by enumerating all the possible combinations of conv 1×1 and 3×3 on these connections and measure their performance, with results summarized in Appendix § B. Observe that while all these arches show high ASR, their vulnerability varies greatly from about 50% to 90%. We hypothesize that specific combinations of conv 1×1 and conv 3×3 create arch-level “shortcuts” for recognizing trigger patterns. We consider exploring the causal relationships between concrete arch characteristics and attack vulnerability as our ongoing work.
4.4 Q3: HOW DOES EVAS DIFFER?
To further understand the difference between EVAS and conventional backdoors, we compare the arches found by EVAS and other arches under various training and defense scenarios.
Fine-tuning with clean data. We first consider the scenario in which, with the trigger generator fixed, the target model is fine-tuned using clean data (with concrete setting deferred to Appendix § A).
Table 2 shows the results evaluated on CIFAR10 and CIFAR100. Observe that fine-tuning has a marginal impact on the ASR of all the arches. Take Random I as an example, compared with Table 1, its ASR on CIFAR10 drops only by 7.40% after fine-tuning. This suggests that the effectiveness of fine-tuning to defend against input-aware backdoor attacks may be limited.
Re-training from scratch. Another common scenario is that the victim user re-initializes the target model and re-trains it from scratch using clean data. We simulate this scenario as follows. After the trigger generator and target model are trained, we fix the generator, randomly initialize (using different seeds) the model, and train it on the given dataset. Table 3 compares different arches under this scenario. It is observed that EVAS significantly outperforms ResNet18 and random arches in terms of ASR (with comparable ACC). For instance, it is 33.4%, 24.9%, and 19.6% more effective than the other arches respectively. This may be explained by two reasons. First, the arch-level backdoors in EVAS are inherently more agnostic to model re-training than the model-level backdoors in other arches. Second, in searching for exploitable arches, EVAS explicitly enforces that such vulnerability should be insensitive to model initialization (cf. Eq. 4). Further, observe that, as expected, re-training has a larger impact than fine-tuning on the ASR of different arches; however, it is still insufficient to mitigate input-aware backdoor attacks.
Fine-tuning with poisoning data. Further, we explore the setting in which the adversary is able to poison a tiny portion of the fine-tuning data, which assumes a stronger threat model. To simulate this scenario, we apply the trigger generator to generate trigger-embedded inputs and mix them with the clean fine-tuning data. Figure 6 illustrates the ASR and ACC of the target model as functions of the fraction of poisoning data in the fine-tuning dataset. Observe that, even with an extremely small poisoning ratio (e.g., 0.01%), it can significantly boost the ASR (e.g., 100%) while keeping the ACC unaffected. This indicates that arch-level backdoors can be greatly enhanced by combining with other attack vectors (e.g., data poisoning).
Backdoor defenses. Finally, we evaluate EVAS against three categories of defenses, model inspection, input filtering, and model sanitization.
Model inspection determines whether a given model f is infected with backdoors. We use NeuralCleanse (Wang et al., 2019) as a representative defense. Intuitively, it searches for potential triggers in each class. If a class is trigger-embedded, the minimum perturbation required to change the predictions of inputs from other classes to this class is abnormally small. It detects anomalies using median absolute deviation (MAD), and all classes with MAD scores larger than 2 are regarded as infected. As shown in Table 4, the MAD scores of EVAS's target classes on three datasets are all below the threshold. This can be explained by the fact that NeuralCleanse is built upon the universal trigger assumption, which does not hold for EVAS.
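To make the MAD-based criterion concrete, the sketch below is a minimal illustration of the anomaly test (our own simplification, not NeuralCleanse itself); the per-class trigger norms are hypothetical placeholders, and the 1.4826 factor is the usual consistency constant relating MAD to a standard deviation.

import numpy as np

def mad_anomaly_indices(trigger_norms, threshold=2.0):
    """Flag classes whose trigger-norm deviation exceeds `threshold` in MAD units.

    `trigger_norms[c]` is the norm of the minimal trigger reverse-engineered for
    class c; an abnormally small norm suggests an embedded backdoor.
    """
    norms = np.asarray(trigger_norms, dtype=float)
    median = np.median(norms)
    # Rescale MAD so it is comparable to a standard deviation under normality.
    mad = 1.4826 * np.median(np.abs(norms - median))
    scores = np.abs(norms - median) / (mad + 1e-12)
    # Only abnormally *small* norms indicate an infected class.
    return [c for c, (n, s) in enumerate(zip(norms, scores))
            if n < median and s > threshold]

# Hypothetical per-class trigger norms: class 3 needs an unusually small trigger.
print(mad_anomaly_indices([95, 102, 98, 18, 101, 97, 99, 100, 103, 96]))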
Input filtering detects at inference time whether an incoming input is embedded with a trigger. We use STRIP (Gao et al., 2019) as a representative defense in this category. It mixes a given input with a clean input and measures the self-entropy of its prediction. If the input is trigger-embedded, the mixture remains dominated by the trigger and tends to be misclassified, resulting in low self-entropy. However, as shown in Table 4, the AUROC scores of STRIP in classifying trigger-embedded inputs by EVAS are all close to random guess (i.e., 0.5). This can also be explained by the fact that EVAS uses input-aware triggers, where each trigger only works for one specific input and has a limited impact on others.
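The following is a rough sketch of the STRIP-style self-entropy test, assuming a PyTorch classifier and image tensors shaped (C, H, W); it illustrates the idea rather than the defense's reference implementation, and the stand-in model and data are placeholders.

import torch
import torch.nn as nn
import torch.nn.functional as F

def strip_entropy(model, x, clean_batch, alpha=0.5):
    """Average self-entropy of predictions on blends of x with clean inputs.

    A universal trigger tends to dominate the blends and yields low entropy;
    input-aware triggers usually do not show this signature.
    """
    model.eval()
    with torch.no_grad():
        blends = alpha * x.unsqueeze(0) + (1 - alpha) * clean_batch   # (N, C, H, W)
        probs = F.softmax(model(blends), dim=-1)
        entropy = -(probs * torch.log(probs + 1e-12)).sum(dim=-1)     # per-blend entropy
    return entropy.mean().item()

# Stand-in classifier and data purely for illustration.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
x = torch.rand(3, 32, 32)                # incoming (possibly trigger-embedded) input
clean_batch = torch.rand(16, 3, 32, 32)  # held-out clean inputs
print(strip_entropy(model, x, clean_batch))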
Model sanitization, before using a given model, sanitizes it to mitigate the potential backdoors, yet without explicitly detecting whether the model is tampered. We use Fine-Pruning (Liu et al., 2018) as a representative. It uses the property that the backdoor attack typically exploits spare model capacity. It thus prunes rarely-used neurons and then applies fine-tuning to defend against pruning-aware attacks. We apply Fine-Pruning on the EVAS and ResNet18 models from Table 1, with results shown in Table 5. Observe that Fine-Pruning has a limited impact on the ASR of EVAS (even less than ResNet18). This may be explained as follows. The activation patterns of input-aware triggers are different from that of universal triggers, as each trigger may activate a different set of neurons. Moreover, the arch-level backdoors in EVAS may not concentrate on individual neurons but span over the whole model structures.
5 CONCLUSION
This work studies the feasibility of exploiting NAS as an attack vector to launch previously improbable attacks. We present a new backdoor attack that leverages NAS to efficiently find neural network architectures with inherent, exploitable vulnerability. Such architecture-level backdoors demonstrate many interesting properties including evasiveness, transferability, and robustness, thereby greatly expanding the design spectrum for the adversary. We believe our findings raise concerns about the current practice of NAS in security-sensitive domains and point to potential directions to develop effective mitigation.
ACKNOWLEDGMENTS
We thank anonymous reviewers and shepherd for valuable feedback. This work is partially supported by the National Science Foundation under Grant No. 2212323, 2119331, 1951729, and 1953893. Any opinions, findings, and conclusions or recommendations are those of the authors and do not necessarily reflect the views of the National Science Foundation. S. Ji is partly supported by the National Key Research and Development Program of China under No. 2022YFB3102100, and NSFC under No. 62102360 and U1936215.
A EXPERIMENTAL SETTING
A.1 PARAMETER SETTING
Table 6 summarizes the default parameter setting.
A.2 GENERATOR ARCHITECTURE
Table 7 lists the architecture of the trigger generator.
B ADDITIONAL RESULTS
B.1 NTK OF TARGET MODEL
Here, we measure the NTK conditional number of the target model f under random initialization using the implementation of (Chen et al., 2021) and its corresponding ASR and ACC. Figure 7 shows their correlation. We observe that the NTK conditional number is negatively correlated with ACC (with Kendall’s coefficient τ = −0.385) and has a very weak correlation with ASR (with τ = 0.100), which is consistent with (Chen et al., 2021).
The difference between Figure 2 and Figure 7 can be explained as follows. Figure 2 measures the NTK conditional number κg of the trigger generator g (with respect to the randomly initialized target model f ), which indicates g’s trainability (or f ’s vulnerability). As backdoor attacks embed two functions (one classifying clean inputs and the other classifying trigger inputs) into the same model, there tends to exist a natural trade-off between ASR and ACC. Therefore, κg shows a negative correlation with ASR and a weak positive correlation with ACC. Meanwhile, Figure 7 measures the
NTK conditional number κf of the target model f , which indicates f ’s trainability. Therefore, κf shows a negative correlation with ACC but a very weak correlation with ASR.
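For reference, the rank-correlation statistic used above can be computed as follows; the per-arch measurements in this sketch are hypothetical placeholders, not values from the paper.

import numpy as np
from scipy.stats import kendalltau

# Hypothetical per-arch measurements; in the paper these come from sampled arches.
cond_numbers = np.array([120., 85., 300., 60., 210., 150.])
acc = np.array([92.1, 93.5, 88.0, 94.2, 90.3, 91.0])
asr = np.array([71.0, 64.5, 69.8, 66.2, 70.1, 68.0])

tau_acc, _ = kendalltau(cond_numbers, acc)   # expected to be clearly negative
tau_asr, _ = kendalltau(cond_numbers, asr)   # expected to be weak
print(f"tau(kappa_f, ACC) = {tau_acc:.3f}, tau(kappa_f, ASR) = {tau_asr:.3f}")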
B.2 ASR-ACC TRADE-OFF
Figure 8 shows the correlation between the ASR and ACC of sampled arches (with Kendall’s coefficient τ = −0.390). Intuitively, as backdoor attacks embed two functions (one classifying clean inputs and the other classifying trigger inputs) into the same model, there tends to exist a natural trade-off between ASR and ACC. This trade-off also implies that it is feasible to properly optimize ASR only to find performant but vulnerable arches.
Figure 8: Trade-off between model performance (ACC) and vulnerability (ASR).
B.3 INTERPRETABILITY VERSUS VULNERABILITY
To understand the possible correlation between the attack vulnerability of an arch α and its interpretability, we compare the interpretation of each model fα regarding 100 clean inputs using GradCam (Selvaraju et al., 2017). Figure 9 illustrates sample inputs and their interpretation by different models.
Further, to quantitatively measure the similarity of interpretation, we use the intersection-over-union (IoU) score, which is widely used in object detection to compare model predictions with ground-truth bounding boxes. Formally, the IoU score of a binary-valued heatmap m with respect to another map m′ is defined as their Jaccard similarity:
IoU(m) = |O(m) ∩ O(m′)| / |O(m) ∪ O(m′)|   (7)
where O(m) denotes the set of non-zero elements in m. In our case, as the values of heatmaps are floating numbers, we first apply thresholding to binarize the values. Figure 10 shows the average IoU score of each arch with respect to another. Observe that (i) the arches generated by NAS (EVAS, Random I, and Random II) have more similar interpretability among themselves than manually
designed arches (ResNet-18); (ii) the arch with high vulnerability (EVAS) is not significantly different from the arch with low vulnerability (Random I, II) in terms of interpretability.
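A minimal sketch of the IoU computation in Eq. (7) is given below; the binarization threshold and the example heatmaps are placeholder assumptions for illustration.

import numpy as np

def heatmap_iou(m, m_ref, threshold=0.5):
    """IoU (Eq. 7) between two heatmaps after binarizing their floating-point values."""
    a = np.asarray(m) >= threshold
    b = np.asarray(m_ref) >= threshold
    union = np.logical_or(a, b).sum()
    if union == 0:
        return 0.0
    return float(np.logical_and(a, b).sum()) / float(union)

# Hypothetical 7x7 heatmaps (a typical GradCAM resolution before upsampling).
rng = np.random.default_rng(0)
print(heatmap_iou(rng.random((7, 7)), rng.random((7, 7))))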
B.4 ABLATION OF ATTACK EVASIVENESS
The evasiveness of EVAS may be attributed to (i) input-dependent triggers (Nguyen & Tran, 2020) and (ii) arch-level vulnerability. Here, we explore the contribution of input-dependent triggers to the attack evasiveness. We train the trigger generator with respect to different arches (EVAS, ResNet18, random arches) and run NeuralCleanse and STRIP to detect the attacks, with results summarized in Table 8. We observe that while the concrete measures vary, all the attacks have MAD scores below the threshold and AUROC scores close to random guess, indicating that the input-dependent triggers mainly account for the attack evasiveness with respect to NeuralCleanse and STRIP.
B.5 IMPORTANCE OF CONV 1×1 AND CONV 3×3
We generate neighboring arches by enumerating all possible combinations of conv 1×1 and conv 3×3 on the connections of the arch identified by EVAS (“|{0} ∼ 0|+|{1} ∼ 0|{2} ∼ 1|+|skip_connect ∼ 0|{3} ∼ 1|{4} ∼ 2|”). The ASR and ACC of these arches are summarized in Table 9. | 1. What is the focus and contribution of the paper regarding backdoor attacks?
2. What are the strengths of the proposed approach, particularly in terms of exploiting architecture vulnerabilities?
3. What are the weaknesses of the paper, especially regarding the practical setting and performance analysis?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper proposes an attack called exploitable and vulnerable arch search (EVAS). EVAS searches for an architecture that is more vulnerable to backdoor triggers. By using NTK, EVAS can perform a training-free search. Also, by using a dynamic trigger generator, the proposed method can generate sample-specific triggers. Empirically, the proposed method demonstrates its effectiveness. The overall attack framework is new and demonstrates that the backdoor vulnerability could also exist in the architecture.
Strengths And Weaknesses
Strength
A new backdoor attack that exploits DNN architecture results are surprisingly effective. This could open up another direction for the backdoor attacks.
Using NTK to search without training makes this method practical. The relationship between the conditional number and ASR clearly demonstrates the vulnerability in the architecture. Empirical results comparing randomly generated arches and manually designed ones confirm the discovered vulnerability.
EVAS shows that attackers could operate in a black box setting without access to the model parameters. This further confirms that the vulnerability is from the architecture.
Most of the different existing backdoor defense types have been examined, showing that this attack can evade certain backdoor defenses.
Vulnerability from the architecture perspective could offer insights for future works on attacks and defenses. Certain architectures are easy/hard to attack or defend. The overall presentation is straightforward. Good work.
Weaknesses
The practical setting is that the victim does not have the computational resource to perform NAS and ask the adversary to provide NAS as a service. In such cases, performance could be the main interest. Based on Figures 2 and 5, it seems like a low-performance arch could also have a low conditional number. Does that have a higher ASR? An analysis of the relationship between the ASR, clean accuracy, and the conditional number would make this paper more comprehensive. I think this part is currently unclear.
How does the λ affect the performance?
Based on the experiments, I think the attacker needs access to the training data (or data with a similar distribution) of the victim models to train the generator. However, the threat model claims that victims could train on arbitrary data. This should be further clarified, either in the threat model or in the experiments. If the victim model can use arbitrary data, the model generator for one dataset should also be able to attack the victim model trained on a different dataset.
Clarity, Quality, Novelty And Reproducibility
Clarity: good
Quality: good
Novelty: good
Reproducibility: this might be challenging without the open-source code. |
ICLR | Title
A Massively Parallel Benchmark for Safe Dexterous Manipulation
Abstract
Safe Reinforcement Learning (Safe RL) aims to maximize expected total rewards while avoiding violations of safety constraints. Although a plethora of safety-constrained environments have been developed to evaluate Safe RL methods, most of them focus on navigation tasks, which are rather simple and have a non-trivial gap with real-world applications. For robotics studies, dexterous manipulation is becoming ubiquitous; however, safe dexterous manipulation is rarely studied in robotics applications. In this paper, we propose TrustDeHands, a massively parallel benchmark for Safe RL studies on safe dexterous manipulation tasks. TrustDeHands is built within Isaac Gym, a GPU-level parallel simulator that enables a highly efficient RL training process. To stay close to real-world settings, TrustDeHands offers multi-modal visual inputs, including RGB, RGB-D and point cloud, and supports a variety of arms and dexterous hands from different brands. Moreover, TrustDeHands provides a solid implementation of eight popular safe policy optimization algorithms; this facilitates trustworthy validation of Safe RL methods outside navigation tasks. TrustDeHands includes a myriad of challenging tasks that require safety awareness (e.g., Jenga). Results on these tasks show that Safe RL methods can achieve better performance than classical RL algorithms, indicating the effectiveness of Safe RL in safe robot manipulation tasks. To the best of our knowledge, TrustDeHands is the first benchmark targeting safe dexterous manipulation. We expect this benchmark to consistently serve as a reliable evaluation suite for future Safe RL developments and further promote the integration between the lines of research of Safe RL and dexterous manipulation. The code and demonstration can be found at https://sites.google.com/view/trustdehands/.
1 INTRODUCTION
Reinforcement Learning (RL) is a powerful way to solve sequential decision problems and has achieved superhuman performance in games (Silver et al., 2016; 2017; Vinyals et al., 2019; OpenAI, 2018), robotics (Andrychowicz et al., 2020; Chen et al., 2022b), and finance (Hambly et al., 2021). Before RL is deployed in the real world, researchers are tasked with proving its trustworthiness to maximize the benefits of AI systems while minimizing their risks (Xu et al., 2022). The fundamental principle of RL is that an agent tries to maximize the cumulative return by trial and error, but the agent may exhibit dangerous or harmful behaviors during the learning process. Thus, it is important to consider safe exploration, which is known as Safe RL. Most existing RL simulators ignore the safety issue during learning, since failure is acceptable and even desirable for learning from bad outcomes. In the real world, however, such exploration can cause undesirable harm.
In recent years, robot manipulation (Billard & Kragic, 2019) has become an important direction for the application of RL and covers many research topics (Kim et al., 2021; Okamura et al., 2000; Kumar et al., 2016). Among them, dexterous multi-fingered manipulation is the most challenging task, which puts forward higher requirements for control (Bircher et al., 2017; Rahman et al., 2016). To deal with dynamic environments and policy generalization, OpenAI et al. (2019) and Chen et al. (2022a) have studied dexterous manipulation tasks and achieved significant results. Moreover, the trustworthiness of manipulation (Xu et al., 2022) is an important issue that needs to be considered. If a robot is to manipulate objects in the real world, ensuring safety and trustworthiness is the highest priority. However, prior work provides no benchmark for safe manipulation. We therefore propose TrustDeHands, which uses safe policy learning for dexterous manipulation, hoping to fill this research gap.
Additionally, implementations of many existing Safe RL methods are unavailable, which leaves researchers to suffer from incorrect implementations, unfair comparisons, and misleading conclusions. Safe policy learning is critical in real-world RL applications, where dangerous decisions are undesirable. For example, a robot agent should avoid taking actions that irreversibly damage its hardware (Ray et al., 2019a; Dulac-Arnold et al., 2019). Due to its importance, the community has been actively researching safe policy learning (e.g., (Alshiekh et al., 2018; Stooke et al., 2020b; Gu et al., 2021; Yuan et al., 2021; Gronauer, 2022; Yang et al., 2022; Liu et al., 2022)). However, most of the existing work mainly focuses on algorithm design. Among these works, either the authors did not publish the source code (e.g., P3O (Zhang et al., 2022)), or the algorithms were implemented using different frameworks (e.g., PCPO (Yang et al., 2020b) in Theano (Al-Rfou et al., 2016), CPPO-PID (Stooke et al., 2020b) in PyTorch), with divergent approaches (FOCOPS (Zhang et al., 2020b) does not parallelize sample collection while others do), and on separate tasks (FOCOPS is tested solely on MuJoCoVelocity (Todorov et al., 2012) and CPPO-PID solely on Safety-Gym (Ray et al., 2019a)). While there exists safety-starter-agents (Ray et al., 2019a) as a publicly available collection of algorithms, it was implemented in TensorFlow 1, requires old hardware and systems, lacks recent updates, and is no longer maintained. As a result, the Safe RL community has experienced serious difficulty in reproducing experimental results, comparing algorithms fairly, and deriving correct insights. An open-source, standardized algorithm implementation for algorithm verification and empirical study is desperately needed.
To facilitate the consideration we mentioned above, we developed a bimanual dexterous manipulation environment: TrustDeHands, with a unified re-implementation of Safe RL algorithms. We highlight three particularly desirable features of TrustDeHands:
• For Safe RL researchers. We provide a series of complex and challenging safe dexterous manipulation tasks. The design of these tasks stems from the need for safety robot manipulation in our daily life (e.g., sweeping the floor without touching other furniture). In these environments, we have done exhaustive experiments with the implemented algorithms and contributed the results, our observations, and analysis for the reference of the community.
• For robotic researchers. TrustDeHands is the first collection of tasks focused on safe dexterous manipulation. In addition to safety research, we also provide a variety of features, including 1) multi-modal information as the policy input (e.g., contact force, RGB image, RGB-D image, point cloud, ...) and 2) customizable dexterous hands and robotic arms driving the dexterous hands. These features provide a comprehensive platform for robotics research.
• Unified, highly-optimized, and extensible Safe RL algorithms. We re-implement widely used Safe RL algorithms, which support TrustDeHands and all popular environments in a single well-designed algorithmic framework. We have done maximum abstraction and encapsulation, deriving a similar model structure and update paradigm, thus enabling code reuse, ensuring a clean code style, and making the framework extremely extensible.
2 RELATED WORK
Safe RL Environments Simulators play a critical role in RL training since it is very expensive to collect data in the real world. Safety-Gym (Ray et al., 2019b), a suite of complex continuous control environments for Safe RL, introduces a robot that has to navigate through a cluttered environment to achieve a task. Safe-control-gym (Yuan et al., 2021) introduces cart-pole, 1D, and 2D quadrotor dynamic systems to achieve control tasks like stabilization or trajectory tracking, and allows for constraint specification and disturbance injection onto a robot’s inputs, states, and inertial properties through a portable configuration system. AI Safety Gridworlds (Leike et al., 2017) proposes an environment for evaluating various safety properties of intelligent agents, including safe interruptibility, avoiding side effects, safe exploration, distributional shift, etc. MuJoCoVelocity, originally proposed in (Zhang et al., 2020a), consists of a series of safety tasks like constrained velocity based on the MuJoCo environment (Todorov et al., 2012). However, a safety environment for robot manipulation is still lacking; its difficulty lies in requiring safe high-dimensional continuous control and dealing with dynamic environments. We therefore introduce TrustDeHands, which aims to apply Safe RL to dexterous manipulation, providing a more challenging environment for evaluating Safe RL algorithms.
Safe RL Algorithms Since we formulate safe RL under CMDPs (Altman, 1999), in this section we mainly review algorithms w.r.t. CMDPs. For more discussions about Safe RL algorithms, please refer to the recent surveys (Xu et al., 2022; Gu et al., 2022). With the rise of deep RL, CMDPs are also moving to more high-dimensional continuous control problems. CPO (Achiam et al., 2017b) proposes a general-purpose policy search algorithm for Safe RL with guarantees of near-constraint satisfaction at each iteration. PCPO (Yang et al., 2020a) utilizes a different two-step approach (i.e., it first finds the policy with the maximum return, then projects this policy back into the safety region in terms of the minimum KL divergence). FOCOPS (Zhang et al., 2020a) adopts a similar idea by directly solving the constrained policy optimization problem via the primal-dual approach (Boyd et al., 2004) and then projecting the solution back into the parametric policy space. Traditional robot control also considers the safety problem. Chow et al. (2018; 2019) present a method that constructs Lyapunov functions to guarantee constraint satisfaction during training. Stooke et al. (2020a) combine PID control with Lagrangian methods, which dampens cost oscillations and results in reduced constraint violations. A unified and efficient framework covering these algorithms is still lacking. Therefore, we provide PyTorch re-implementations of widely used safe policy optimization algorithms, hoping to facilitate experimental validation in Safe RL research.
Dexterous Manipulation Manipulation is one of the essential research topics in robotics, and researchers have long tried to establish a stable theory of manipulation (Billard & Kragic, 2019). However, traditional methods mostly rely on various assumptions, such as knowing the environmental dynamics model or having no uncertainty in the process. In recent years, learning-based approaches have been successful in this regard, coping with uncertainty in perception and even generalizing to unseen objects (Bohg et al., 2013). There are many learning-based benchmarks for robotic manipulation in recent years (Yu et al., 2020; James et al., 2020; Zhu et al., 2020), but none of them use dexterous hands or consider safety constraints. Dexterous multi-finger hands provide intrinsic dexterity for better manipulation in unstructured scenes and contact-rich situations, but additionally bring the challenges of high-dimensional control and complex contact models (Bircher et al., 2017; Rahman et al., 2016). Previous research has mostly focused on trajectory optimization or model prediction, which relies heavily on accurate dynamics models (Kim et al., 2021; Okamura et al., 2000; Kumar et al., 2016). For example, Williams et al. (2015) performs in-hand manipulation of a cube using a trajectory optimization technique known as Model Predictive Path Integral (MPPI). Charlesworth & Montana (2021) extended the MPPI method to allow objects to be thrown and caught between two hands. OpenAI et al. (2019) solved a Rubik’s cube using model-free RL and domain randomization techniques. Chen et al. (2022a) proposed an in-hand manipulation system that learns how to manipulate a large number of objects of different shapes, and even generalizes to unseen objects. Qin et al. (2022; 2021) studied dexterous manipulation learning from human demonstrations. Chen et al. (2022c) studied bimanual dexterous manipulation to solve cooperative manipulation and skill generalization problems. While most of these works focus on unconstrained dexterous manipulation, how to perform dexterous manipulation safely remains an unstudied topic. In this paper, we provide a massively parallel benchmark for safe dexterous manipulation, hoping to facilitate research on how to manipulate safely.
3 THE SAFETY LEARNING ENVIRONMENT
TrustDeHands consists of two parts: the safety learning environment and the safe policy optimization algorithms. In this section, we present the high-level design of the safety learning environment.
3.1 SYSTEM DESIGN AND DATASETS
TrustDeHands is a collection of challenging dexterous manipulation tasks, underpinned by Isaac Gym (Makoviychuk et al., 2021) and capable of high parallelism on the GPU. In TrustDeHands, all tasks require two dexterous hands to manipulate one or more objects. We design a series of tasks that require policies to perform safe dexterous manipulation, including throwing, grasping, jerking, pulling, etc. At the same time, each task provides customizability of the dexterous hands and objects to support diverse task variants.
The construction of the dataset includes the configuration of dexterous hands and objects. The core goal of our dataset is to generate a wide variety of scenarios for learning constrained dexterous manipulation. We collected a variety of dexterous multi-finger hands as manipulators, including most of the dexterous hands currently used in robotics. In addition to manipulators, objects also play a crucial role in building datasets. Our manipulation objects are mainly from the YCB (Calli et al., 2017) and SAPIEN (Xiang et al., 2020) datasets. Both datasets contain many objects used in everyday life.
3.2 TASKS REPRESENTATION
TrustDeHands contains 10+ tasks focused on dexterous manipulation. Each task contains two dexterous hands and one or more manipulated objects, such as balls, blocks, etc., with the ultimate goal of manipulating objects to the task-specified locations while satisfying the safety constraints. The default dexterous hand used by our framework is the Shadow Hand (ShadowRobot, 2005); more details are provided in Appendix A. The agent performs each task according to its observation, its action representation, and its reward and cost function definitions. We provide more underlying technical details about the tasks in Appendix B.
Constrained Markov Decision Processes (CMDPs) A Constrained Markov Decision Process (CMDP) is defined as (S, A, P, r, ρ0, γ, C), where S is the state space, A is the action space, P : S × A × S → [0, 1] is the transition probability function, r : S × A × S → R is the reward function, ρ0(·) ∈ P(S) is the initial state distribution (P(X) denotes the set of probability distributions over a set X), γ ∈ [0, 1) is the discount factor, and C = {(c, b)} is the constraint set, where c : S × A × S → R is a cost function and b is the cost threshold. We use π : S → P(A) to denote a stationary policy, and use Π to denote the set of all stationary policies. Let τ = {st, at, rt+1, ct+1}t≥0 ∼ π be a trajectory generated by π, where s0 ∼ ρ0(·), at ∼ π(·|st), st+1 ∼ P(·|st, at), rt+1 = r(st+1|st, at), and ct+1 = c(st+1|st, at). The state value function of π is defined as Vπ(s) = Eπ[∑_{t=0}^{∞} γ^t r_{t+1} | s0 = s]. The goal of reinforcement learning is to maximize the expected total reward, defined as J(π) = E_{s∼ρ0(·)}[Vπ(s)].
We define the cost return function as Jc(π) = E_{s∼ρ0(·)}[∑_{t=0}^{∞} γ^t c_{t+1} | s0 = s], and the feasible policy set ΠC as ΠC = {π | π ∈ Π, Jc(π) ≤ b, ∀(c, b) ∈ C}. The goal of Safe RL is to learn the optimal policy π⋆ such that
π⋆ = arg max_{π∈ΠC} J(π).   (1)
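As a minimal illustration of the quantities in Eq. (1), the sketch below estimates J(π) and Jc(π) from sampled trajectories; the rollouts shown are hypothetical placeholders.

import numpy as np

def discounted_sum(xs, gamma=0.99):
    """Monte-Carlo estimate of a discounted sum along one trajectory."""
    total, discount = 0.0, 1.0
    for x in xs:
        total += discount * x
        discount *= gamma
    return total

def estimate_objectives(trajectories, gamma=0.99):
    """Estimate J(pi) and Jc(pi) from sampled trajectories of (reward, cost) pairs."""
    returns = [discounted_sum([r for r, _ in traj], gamma) for traj in trajectories]
    costs = [discounted_sum([c for _, c in traj], gamma) for traj in trajectories]
    return float(np.mean(returns)), float(np.mean(costs))

# Hypothetical rollouts: each trajectory is a list of (reward, cost) pairs.
trajs = [[(1.0, 0.0), (0.5, 1.0), (0.8, 0.0)], [(0.2, 0.0), (0.9, 0.0)]]
J, Jc = estimate_objectives(trajs)
print(J, Jc)  # a policy is feasible if Jc <= b for every constraint (c, b)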
Observation Here we briefly describe the observation space of the tasks, more details can be seen in Appendix B.1. The observation of all tasks consisted of three parts: state information of the left and right Shadow Hand, and information about the task specification. In each task, the state information of the left and right Shadow Hand is the same, each Shadow Hand contains 24 minimum drive units (which contains four underdriven fingertip units) and its state consists of the following information:
• Dp, Dv, Df ∈ R^24 correspond to the angle, velocity, and force of all joint DoFs (degrees of freedom) of the drive units, respectively.
• Pw,Rw ∈ R3 represents the position and rotation of the base of the hand. • FT i = [FTpose, FTvl , FTva , FTf , FTt] ∈ R19, corresponds to the pose, linear velocity, angular
velocity, force magnitude, and torque of each fingertip, respectively.
• A ∈ R20/26, indicates the action executed by the hand in the previous step, which is consistent with the action space.
With the above definitions, the state information of one Shadow Hand can be represented as Hand = {Dp, Dv, Df, Pw, Rw, {FT_i}_{i=1}^{5}, A}. We characterize the observation of each task by the following information:
{Handleft, Handright, Gtask}, (2) where Gtask represents some observation information specific to different tasks.
Action The dual Shadow Hands have more than 40 dimensions of action space, where each Shadow Hand has five fingers with a total of 24 degrees of freedom: the thumb has 5 joints and 5 degrees of freedom, and all other fingers have 3 degrees of freedom and 4 joints (where the joint at the end of each finger is uncontrollable). Therefore, the action space of each hand is 20 dimensions. If the base of the hand is not fixed, there are six additional dimensions representing the translation and rotation of the hand base. For the lower and upper limits of the joint angles, see Table 2. In each step, we use the absolute value of each joint angle as the target and use a PD controller to drive each joint toward it.
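The sketch below illustrates this action convention: a normalized action is mapped to an absolute joint-angle target that is then tracked by a PD controller. The gains and joint limits are placeholder assumptions (the actual limits are in Table 2, and the PD tracking is handled inside Isaac Gym).

import numpy as np

def scale_action_to_target(action, joint_lower, joint_upper):
    """Map a normalized action in [-1, 1] to an absolute joint-angle target."""
    action = np.clip(action, -1.0, 1.0)
    return joint_lower + (action + 1.0) * 0.5 * (joint_upper - joint_lower)

def pd_torque(target, q, qd, kp=3.0, kd=0.1):
    """One PD step driving joint positions q (velocities qd) toward the target."""
    return kp * (target - q) - kd * qd

# Hypothetical limits for a 20-DoF hand (see Table 2 for the actual values).
lower, upper = -0.35 * np.ones(20), 1.57 * np.ones(20)
target = scale_action_to_target(np.zeros(20), lower, upper)
print(pd_torque(target, q=np.zeros(20), qd=np.zeros(20))[:3])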
Reward We designed some auxiliary rewards to help RL agents learn more consistently, and each task contains a task-specific bonus. In general, our reward design is goal-based and follows the same set of logic. For object-catching tasks, our reward is simply related to the difference between
the pose of the object and the target. For other tasks that require the hand to hold the object, our reward generally consists of three parts: the distance from the left hand to a left-hand grip point on the object, the distance from the right hand to a right-hand grip point on the object, and the distance from the object to the object’s target.
Cost Each task contains different constraints (e.g., the ball needs to be thrown to a specified height or a specified angle to prevent damage to other items; the robots need to clean the floor without hitting other furniture). The specific constraint design depends on the safety requirements of each task.
3.3 EVALUATION SUITE
TrustDeHands has a collection of 10+ different tasks. These tasks form an evaluation suite for benchmarking the performance of Safe RL algorithms. In this section, we describe six of these representative tasks (see Figure 1), and the remaining tasks are described in detail in Appendix B.
Safe Finger In this task, one hand needs to throw the ball to the other hand, and both hands are on the same horizontal plane. We design two different constraints, including constraints on the minimum drive units and constraints on the finger joints.
Hand Up Wall In this task, one hand needs to throw the ball at a certain height to the other hand. The constraint of this task is concerned with the magnitude of the force of the minimum drive unit.
Hand Over Wall In this task, one hand needs to throw the ball at a certain angle to the other hand. It is more concerned with the constraints in a certain behavioral paradigm and needs to take into account the collaboration of the whole hand-driven unit.
Pick Bottles There are five bottles in a tight row, and the dual hands need to pick up two of the bottles smoothly without touching the others.
Jenga The dual hands need to collaborate to extract the specified blocks from an unstable stacking structure and avoid breaking the rest of the blocks apart.
Clean House There is a broom, trash, and dustpan in this environment. We need to use both hands to manipulate the broom to sweep the trash into the dustpan. There will also be a chair as an obstacle on the way.
3.4 VISUAL INFORMATION
It is very difficult to obtain the state information of the robot in the real world. One way to solve this problem is to use vision sensors as the input to train the policy. Therefore, we provide multiple modalities of visual information as input, including RGB, RGB-D, and point cloud (see Figure 2). It is generated using the camera in Isaac Gym, and the pose and orientation of the camera can be customized by the user to obtain the desired visual information. We also propose a point cloud parallel acceleration function for Isaac Gym and provide an example of using it to train the Hand Over task (see Appendix D).
3.5 CUSTOMIZABLE DIVERSIFORM MANIPULATORS AND ADAPTATION CHALLENGE
There are more types of dexterous hands than the Shadow Hand, such as the Allegro Hand and TriFinger, and supporting other dexterous hands helps to advance research and community development. Therefore, in addition to the Shadow Hand, we also provide five other kinds of dexterous multi-finger hands in TrustDeHands. In addition, driving the base of the dexterous hand with a robotic arm not only matches real-world settings but is also an important step toward sim-to-real transfer. Because it is very difficult to match the real dynamics of a free-floating hand, TrustDeHands provides a way to reduce the reality gap by adjusting the dynamics and physics parameters of the arm, which simplifies the deployment process from simulation to real-world applications.
Moreover, we offer a variety of arms and a variety of dexterous hand combinations, which has many benefits. For example, researchers can choose the hand they want according to their own conditions, which brings wider applicability to our benchmark. At the same time, we can use different arms and different hands to study the adaptability and generalization ability of policies, which poses challenges for multi-task learning and meta learning research in the future. A schematic of this feature is shown in Figure 3.
4 EXPERIMENTS
4.1 SAFE RL ALGORITHMS IMPLEMENTATION
Based on their original papers or public code base, we reimplement eight algorithms (CPO (Achiam et al., 2017a), PCPO (Yang et al., 2020b), FOCOPS (Zhang et al., 2020b), P3O (Zhang et al., 2022), PPO-Lag (Ray et al., 2019a), TRPO-Lag (Ray et al., 2019a), CPPO-PID (Stooke et al., 2020b), and IPO (Liu et al., 2020)), covering major safe policy optimization algorithms. A brief introduction to each algorithm is given in Appendix E.
We abstract similar structure of the safe policy optimization algorithms, and modularize the code into interaction with environments, parallel sample collection, buffer storage and computation, algorithm core update, and auxiliary functionalities such as visualization and logger. Maximum abstraction and encapsulation take place at the implementation of algorithms core, where each algorithm inherits directly from its
base algorithm, thus only unique features have to be implemented and all other code can be reused. An overview of the core of algorithms and logger is shown in Figure 4.
For algorithms implementation, it is critical to ensure its correctness and reliability. To achieve this goal, we examine the implementation of our algorithms carefully. To test the performance of our implementation, we run the eight algorithms on 30 tasks (for a complete list of the tasks, please refer to Appendix F.1) contained in the four environment suites and present our experimental results for the reference of the community in Appendix F.2.
4.2 EVALUATION PROTOCOL
Metrics We define the following metrics to depict the safety performance of an agent in different tasks. (1) the average return of trajectories, Jr(θ); (2) the average cumulative cost of trajectories, Jc(θ). In Safe RL domain, for any two agents, the superiority of the agents is determined by the following priority comparisons. On the one hand, the agent that satisfies the constraint will definitely outperform the unconstrained one. On the other hand, two agents that satisfy the constraint are determined by comparing the magnitude of their cumulative returns.
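The evaluation priority described above can be written as a small comparison rule; this is our own sketch, and the fallback for the case where both agents violate the constraint (comparing returns) is an assumption the paper does not specify.

def better_agent(agent_a, agent_b, cost_limit):
    """Compare two agents by the Safe RL priority: feasibility first, then return.

    Each agent is a dict with keys 'Jr' (average return) and 'Jc' (average cost).
    Returns 'a', 'b', or 'tie'.
    """
    feas_a = agent_a["Jc"] <= cost_limit
    feas_b = agent_b["Jc"] <= cost_limit
    if feas_a != feas_b:
        return "a" if feas_a else "b"
    if agent_a["Jr"] == agent_b["Jr"]:
        return "tie"
    return "a" if agent_a["Jr"] > agent_b["Jr"] else "b"

print(better_agent({"Jr": 900, "Jc": 12}, {"Jr": 1200, "Jc": 85}, cost_limit=25))  # 'a'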
Algorithms As the unconstrained RL baseline we only use PPO (Schulman et al., 2017), whose reward function contains no information about the auxiliary costs. For Safe RL algorithms, we evaluate the performance of the PPO-Lag (Ray et al., 2019b), FOCOPS, P3O, and PCPO algorithms on TrustDeHands; the remaining Safe RL algorithms we implemented are available in our anonymous GitHub repository.
4.3 RESULTS
We mainly conduct three experiments and analyze the results in this section: 1) The performance of PPO, PPO-Lag, FOCOPS, P3O, CPPO-PID, and algorithms on six representative tasks 2) The performance of eight Safe RL algorithms on the Safe Finger task 3) The performance of point cloud RL on the Hand Over Wall task.
For 1), we evaluate the performance of the PPO, PPO-Lag, FOCOPS, CPPO-PID, and P3O algorithms on six tasks; we implemented the rest of the Safe RL algorithms in our anonymous GitHub repository. The performance of each algorithm is shown in Figure 5. It can be observed that PPO-Lag achieves high performance within the range allowed by the cost and is the best-performing algorithm here. Comparing PPO and PPO-Lag, PPO-Lag performs similarly to PPO on the Jenga and Safe Finger tasks while its cost is constrained to a lower range, which indicates that the model has learned how to manipulate safely. A remarkable result is that on the Jenga and Pick Bottles tasks, the performance of PPO-Lag is far superior to that of PPO. This
For 2), we tested all eight algorithms we implemented on the Safe Finger task, which is shown in Figure 1. The specific details can be found in Appendix C.
It can be seen that CPO makes almost no progress, and the PPO algorithm incurs a high cost. This may be due to the various approximations in CPO, which highlights the urgency of advancing Safe RL toward complex manipulation domains.
5 POTENTIAL RESEARCH PROBLEMS TO STUDY USING TRUSTDEHANDS
TrustDeHands provides ample opportunities to study trustworthy manipulation of dexterous hands based on Safe RL. We found that the primal-dual based approach (Boyd et al., 2004) results in great volatility in the update of Lagrange multipliers. A potential research direction is to consider combining feedback control methods in control systems, such as PID (Ziegler et al., 1942; Ang et al., 2005), ADRC (Han, 2009), etc., to mitigate the instability and volatility of Lagrange multipliers in the learning process. Therefore, it would be interesting to combine control methods of complex systems with Safe RL methods to solve complex manipulation problems of dexterous hands.
Sim-to-real is an important research direction concerned with transferring simulation results to real robots. Around this theme, our benchmark includes many robot arms and dexterous hands that are widely used in research labs. It is convenient for different researchers to choose their own arms and hands for training in simulation. Meanwhile, the tasks in our benchmark, such as picking bottles1, Jenga2, etc., are meaningful in the real world but also need to ensure safety when a trained policy is transferred from simulation to the real world. Our benchmark can therefore also be used to study how to perform sim-to-real transfer more safely from the perspective of Safe RL.
Training a policy with a state-based observation space is difficult to transfer from simulation to the real world because such inputs are not available there. So it also makes sense to study policy inputs that are more readily available in the real world, such as point clouds. Our environment supports multi-modal inputs such as visual and force information, which can support research in this direction. We hope that our benchmark can serve as a tool to study the sim-to-real transfer of dexterous hands.
Finally, generalization is an important direction to explore, which is a potential strength of RL. TrustDeHands supports self-customization, enabling switching and linking different hands and arms to evaluate the generality of different algorithms. Users can use TrustDeHands as a platform for modification or secondary development to design richer and more challenging target tasks, and we hope that this work will contribute to the flourishing of the RL community.
6 CONCLUSION AND FUTURE WORK
In this work, we presented TrustDeHands, which is the first benchmark focused on safe dexterous manipulation. We standardize the safe policy optimization methods for solving CMDPs and introduce a unified, highly-optimized, extensible, and comprehensive algorithm re-implementation. We checked the correctness of our algorithms on existing Safe RL benchmarks and tested them on TrustDeHands. The results show that Safe RL algorithms can better solve the safety problem in dexterous manipulation. For example, a policy trained with Safe RL can grab the target bottle without touching other bottles and avoid collisions with obstacles when sweeping the floor. However, it is difficult for unconstrained RL algorithms to provide this guarantee. These situations are very important for robots in real-world environments because RL-based methods tend to lead to unpredictable behaviors that are prone to danger and damage to robots.
Additionally, we support two features regarding visual policy input and various arms and dexterous hands. Some sort of visual input is becoming increasingly common in real-world RL-trained robots, and so benchmarks for this setting are important. Diverse arms and hands increase the applicability of our benchmarks and allow us to study policy generalization between different robots.
We believe that TrustDeHands can significantly accelerate the progress of future research on safe manipulation, facilitate the integration of reinforcement learning with robotic control, and will make a singular contribution to the reinforcement learning community.
B TASK SPECIFICATIONS
B.1 BASIC STATE SPACE AND ACTION SPACE
The state space dimension of each environment is up to 400 dimensions in total, and the action space dimension is up to 40 dimensions. All environments are goal-based, and each epoch will randomly reset the object’s starting pose and target pose to improve generalization. We only use the shadow hand and object state information as observation at present. The observation of all tasks is composed of three parts: the state information of the left and right hands, and the information of objects and
target. The state information of the left and right hands were the same for each task, including hand joint and finger positions, velocity, and force information. The state information of the object and goal are different for each task, which we will describe in the following. Table 4 shows the specific information of the left-hand and right-hand state.
B.2 SAFE FINGER
This environment contains two dexterous hands. At the beginning of each episode, a ball falls randomly around the right hand, and the two hands have to collaborate to place the ball to a given position. Since the target is out of the reach of the right hand, and the right hand cannot pass the ball to the left hand directly, a possible solution is that the right hand grabs the ball, throws it to the left hand; the left hand catches the ball, and puts it to the target. Note that the base of the hand is fixed.
Observations The 398-dimensional observational space for the Hand Over task is shown in Table 5. It should be noted that since the base of the dual hands in this task is fixed, the observation of the dual hands is reduced by 24 dimensions compared to Table 4.
Actions The 40-dimensional action space for one hand in Safe Finger task is shown in Table 6.
Reward For timestep t, let xb,t be the position of the ball and xg,t be the position of the goal. We use dp,t to denote the positional distance between the ball and the goal, dp,t = ∥xb,t − xg,t∥2. Let da,t denote the angular distance between the object and the goal, and the rotational difference is dr,t = 2 arcsin min{|da,t|, 1.0}. The reward is defined as follows, rt = exp{−0.2(αdp,t + dr,t)}, (3) where α is a constant that balances positional and rotational rewards.
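A minimal sketch of Eq. (3) follows. The quaternion-based angular distance and the value of α are assumptions for illustration; the paper only specifies that α balances positional and rotational terms.

import numpy as np

def quat_mul(q1, q2):
    """Hamilton product of quaternions in (w, x, y, z) order."""
    w1, x1, y1, z1 = q1
    w2, x2, y2, z2 = q2
    return np.array([
        w1 * w2 - x1 * x2 - y1 * y2 - z1 * z2,
        w1 * x2 + x1 * w2 + y1 * z2 - z1 * y2,
        w1 * y2 - x1 * z2 + y1 * w2 + z1 * x2,
        w1 * z2 + x1 * y2 - y1 * x2 + z1 * w2,
    ])

def reward(ball_pos, goal_pos, ball_quat, goal_quat, alpha=10.0):
    """Eq. (3): r = exp{-0.2 (alpha * d_p + d_r)}."""
    d_p = np.linalg.norm(np.asarray(ball_pos) - np.asarray(goal_pos))
    conj = goal_quat * np.array([1.0, -1.0, -1.0, -1.0])   # quaternion conjugate
    d_a = np.linalg.norm(quat_mul(ball_quat, conj)[1:])    # |sin(rotation angle / 2)|
    d_r = 2.0 * np.arcsin(min(d_a, 1.0))
    return float(np.exp(-0.2 * (alpha * d_p + d_r)))

print(reward(np.array([0.1, 0.0, 0.5]), np.array([0.0, 0.0, 0.5]),
             np.array([1.0, 0.0, 0.0, 0.0]), np.array([0.995, 0.1, 0.0, 0.0])))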
Cost In these tasks, we constrain the freedom of joints ②, ③ and ④ of forefinger (please refer to figure 6 (b)). Without the constraint, joints ② and ③ have freedom of [0◦, 90◦] and joint ④ of [−20◦, 20◦]. The safety tasks restrict joints ②, ③, and ④ within [22.5◦, 67.5◦], [22.5◦, 67.5◦], and [−10◦, 10◦] respectively. Let ang 2, ang 3, ang 4 be the angles of joints ②, ③, ④, and the cost is defined as:
ct = I(ang 2 ̸∈ [22.5◦, 67.5◦], or ang 3 ̸∈ [22.5◦, 67.5◦], or ang 4 ̸∈ [−10◦, 10◦]). (4)
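The indicator cost in Eq. (4) can be sketched as follows; this is a minimal illustration with angles given in degrees.

def safe_finger_cost(ang2_deg, ang3_deg, ang4_deg):
    """Eq. (4): indicator that any constrained forefinger joint leaves its safe range."""
    ok = (22.5 <= ang2_deg <= 67.5) and (22.5 <= ang3_deg <= 67.5) and (-10.0 <= ang4_deg <= 10.0)
    return 0.0 if ok else 1.0

print(safe_finger_cost(30.0, 70.0, 0.0))  # joint 3 violates its range -> cost 1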
B.3 HAND UP WALL
Similarly, this environment is similar to the Hand Over Wall, the difference is that the wall in this environment only retains the lower half, so the ball needs to be thrown high to prevent it from hitting the wall, requiring different motion skill.
Observations The 398-dimensional observational space for the Hand Up Wall task is shown in Table 7. It should be noted that since the base of the dual hands in this task is fixed, the observation of the dual hands is reduced by 24 dimensions compared to Table 4.
Actions The 40-dimensional action space for one hand in Hand Up Wall task is shown in Table 8.
Reward For timestep t, let xb,t be the position of the ball and xg,t be the position of the goal. We use dp,t to denote the positional distance between the ball and the goal, dp,t = ∥xb,t − xg,t∥2. Let da,t denote the angular distance between the object and the goal, and the rotational difference is dr,t = 2 arcsin min{|da,t|, 1.0}. The reward is defined as follows, rt = exp{−0.2(αdp,t + dr,t)}, (5) where α is a constant that balances positional and rotational rewards.
Cost The ball is thrown from the right hand to the left hand, and the trajectory of the thrown ball is not always consistent, so we place a wall of a certain height between the two hands. This requires more delicate hand manipulation: when the right hand does not throw the ball with the proper force or angle, it is difficult to throw the ball over the wall. If the ball hits the wall, the cost is 1; otherwise it is 0. The size of the wall and the hole can be customized by the user.
B.4 HAND OVER WALL
This environment is similar to Safe Finger, except that it has a wall between each hand and a hole in the middle of the wall. We need to learn policy to keep the ball from hitting the wall during the toss.
Observations The 398-dimensional observational space for the Hand Over Wall task is shown in Table 9. It should be noted that since the base of the dual hands in this task is fixed, the observation of the dual hands is reduced by 24 dimensions compared to Table 4.
Actions The 40-dimensional action space for one hand in Hand Over Wall task is shown in Table 10.
Reward For timestep t, let xb,t be the position of the ball and xg,t be the position of the goal. We use dp,t to denote the positional distance between the ball and the goal, dp,t = ∥xb,t − xg,t∥2. Let da,t denote the angular distance between the object and the goal, and the rotational difference is dr,t = 2 arcsin min{|da,t|, 1.0}. The reward is defined as follows, rt = exp{−0.2(αdp,t + dr,t)}, (6) where α is a constant that balances positional and rotational rewards.
Cost This constraint is more demanding than Wall Down: we require the thrown ball to pass through a specified narrow hole. If the ball hits the wall, the cost is 1; otherwise it is 0. The size of the wall and the hole can be customized by the user.
B.5 PICK BOTTLES
This environment contains two hands, a table and five bottles. The five bottles were placed in a row on the table horizontally with very little space between them. We need to pick up two bottles with two dexterous hands, and not touch the bottle around it to cause possible damage.
Observations The 400-dimensional observational space for the Pick Bottles task is shown in Table 11. It should be noted that since the base of the dual hands in this task is fixed, the observation of the dual hands is reduced by 24 dimensions compared to Table 4.
Actions The 52-dimensional action space for one hand in Pick Bottles task is shown in Table 12.
Reward The reward consists of three parts: the distance from the left hand to the left bottle cap, the distance from the right hand to the right bottle cap, and the height of the two bottles that need to be picked. The height of the two bottles that need to be picked is given by dheight. The position difference between the left hand and the left bottle cap dleft is given by dleft = ∥xlhand − xlbcap∥2. The position difference between the right hand and the right bottle cap dright is given by dright = ∥xrhand − xrbcap∥2. The reward is given by this specific formula:
r = dheight ∗ 20− dleft − dright (7)
Cost The constraint of this environment is that we can’t touch other bottles when we pick the bottle. When our hand, or the bottle we picked, touches other bottles, the cost is set to 1, otherwise it is 0.
B.6 JENGA
Jenga is a game of physical skill which is very suitable for Safe RL algorithm evaluation. Players take turns removing one block at a time from a tower made up of many blocks. In this environment, we need to remove the block we want from the 16 blocks without knocking over the others.
Observations The 411-dimensional observational space for the Jenga task is shown in Table 13. It should be noted that since the base of the dual hands in this task is fixed, the observation of the dual hands is reduced by 24 dimensions compared to Table 4.
Actions The 52-dimensional action space for one hand in Jenga task is shown in Table 14.
Reward For timestep t, let xb,t be the position of the left middle finger, xg,t be the position of the left end of the object, and dp,t = ∥xb,t − xg,t∥2. Let dy,t be the y-coordinate of the object center; the reward is defined as follows:
rt = 30 ∗ (dy,t + 0.6)− dp,t (8)
Cost The constraint of this environment is that we cannot touch the other blocks in the Jenga tower. The cost is 1 if any of the other blocks moves more than 0.01 cm, and 0 otherwise.
B.7 CLEAN HOUSE
This environment is in a scene we usually clean at home. We need to control the broom with both hands to sweep the trash from the ground into the dustpan without touching other furniture (e.g. chairs).
Observations The 431-dimensional observational space for the Clean House task is shown in Table 15. It should be noted that since the base of the dual hands in this task is fixed, the observation of the dual hands is reduced by 24 dimensions compared to Table 4.
Actions The 52-dimensional action space for one hand in Clean House task is shown in Table 16.
Reward The reward consists of four parts: the distance from the left hand to the left handle position of the broom, the distance from the right hand to the right handle position of the broom, the object (trash) position to the bottom of broom position, and the distance from the object to the target (dustpan) point. The distance from the object to the target point is given by dtarget. The position difference from the left hand to the left handle position of the broom is given by dleft. The position difference from the right hand to the right handle position of the broom is given by dright. The object position to the bottom of broom position is given by dbottom. The reward is given by this specific formula:
r = 50− dtarget ∗ 10− 5 ∗ dleft − 5 ∗ dright (9)
Cost The constraint of this environment is that we must not damage other furniture when we sweep the floor. To this end, there is a chair in the path between the trash and the dustpan. The cost is 1 when the broom touches the chair and makes it move, and 0 otherwise.
C FOUR ENVIRONMENTS IN SAFE FINGER.
All environments come from Safe Finger. The difference between Safe Finger and Safe Joint is whether a single joint or the whole finger is constrained, as described below:
Safety Joint. In these tasks, we constrain the freedom of joint ④ of forefinger (please refer to Figure 6 (a) and (f)). Without the constraint, joint ④ has freedom of [−20◦, 20◦]. The safety tasks restrict joint ④ within [−10◦, 10◦]. Let ang 4 be the angle of joint ④, and the cost is defined as:
ct = I(ang 4 ̸∈ [−10◦, 10◦]). (10)
Safety Finger. In these tasks, we constrain the freedom of joints ②, ③ and ④ of forefinger (please refer to Figure 6 (b) and (f)). Without the constraint, joints ② and ③ have freedom of [0◦, 90◦] and joint ④ of [−20◦, 20◦]. The safety tasks restrict joints ②, ③, and ④ within [22.5◦, 67.5◦], [22.5◦, 67.5◦], and [−10◦, 10◦] respectively. Let ang 2, ang 3, ang 4 be the angles of joints ②, ③,
④, and the cost is defined as: ct = I(ang 2 ̸∈ [22.5◦, 67.5◦], or ang 3 ̸∈ [22.5◦, 67.5◦], or ang 4 ̸∈ [−10◦, 10◦]). (11)
Hand Over stands for the situation in Safe Finger in which two Shadow Hands, with palms facing up and opposite each other, pass an object between them; it also covers the case in Safe Finger where the object needs to be thrown from the vertical hand to the palm-up hand. For specific information, please refer to Appendix B.2.
D POINT CLOUD
We replace the object state information with point clouds in the case of 128 parallel environments. The point cloud is captured by the depth camera and downsampled to 2048 points. The features are extracted using PointNet (Qi et al., 2017) into a 128-dimensional vector and concatenated with the other observations. It can be seen that, with the same number of episodes and environments, the performance with point cloud input is not as good as with full-state input, but it still achieves reasonable performance. Moreover, on the same RTX 3090 GPU, point cloud RL runs at only 200+ fps while the full-state setting can reach 30000+. In fact, we can only launch up to 128 environments when using point clouds, due to Isaac Gym’s poor parallel support for cameras. We further refined the method to enhance the parallelization of the point cloud extraction in order to close this gap. Compared to Isaac Gym’s original code, the speedup is 1.46 times, going from 232 fps to 339 fps.
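The sketch below shows the general pattern of encoding a batched point cloud into a 128-dimensional feature and concatenating it with the remaining observations. The encoder is a simplified PointNet-style stand-in, and the size of the other observations is a placeholder; it is not the exact architecture used in the benchmark.

import torch
import torch.nn as nn

class SimplePointNet(nn.Module):
    """Simplified PointNet-style encoder: per-point MLP followed by max pooling."""
    def __init__(self, out_dim=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, 128), nn.ReLU(),
            nn.Linear(128, out_dim),
        )

    def forward(self, points):            # points: (B, N, 3)
        feats = self.mlp(points)          # (B, N, out_dim)
        return feats.max(dim=1).values    # (B, out_dim), permutation invariant

encoder = SimplePointNet()
clouds = torch.randn(128, 2048, 3)        # 128 parallel envs, 2048 points each
other_obs = torch.randn(128, 270)         # remaining proprioceptive observations (placeholder size)
policy_input = torch.cat([encoder(clouds), other_obs], dim=-1)
print(policy_input.shape)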
E DETAILS OF BENCHMARK ALGORITHMS
In this section, we review the key steps of typical Safe RL algorithms implemented in this benchmark, which include CPO (Achiam et al., 2017a), PCPO (Yang et al., 2020b), FOCOPS (Zhang et al., 2020b), P3O (Zhang et al., 2022), PPO-Lag (Ray et al., 2019a), TRPO-Lag (Ray et al., 2019a), CPPO-PID (Stooke et al., 2020b), and IPO (Liu et al., 2020). We implemented all of these algorithms and checked their correctness, but only evaluated some of them on TrustDeHands. Firstly, we give a brief introduction to these algorithms below, together with the hyperparameters used in our evaluation. Then we verify our re-implementations on other Safe RL benchmarks.
E.1 CPO (ACHIAM ET AL., 2017A)
For a given policy πθk , CPO updates new policy πθk+1 as follows:
π_{θ_{k+1}} = arg max_{π_θ∈Π_θ} E_{s∼d^{ρ0}_{π_{θk}}(·), a∼π_θ(·|s)} [ A_{π_{θk}}(s, a) ]   (12)
s.t.  Jc(π_{θk}) + (1/(1−γ)) E_{s∼d^{ρ0}_{π_{θk}}(·), a∼π_θ(·|s)} [ A^c_{π_{θk}}(s, a) ] ≤ b,   (13)
      D̄_KL(π_θ, π_{θk}) = E_{s∼d^{ρ0}_{π_{θk}}(·)} [ KL(π_θ, π_{θk})[s] ] ≤ δ.   (14)
It is impractical to solve the problem (12) directly due to the computational cost. Achiam et al. (2017a) suggest finding convex approximations to replace the terms A_{π_{θk}}(s, a) and D̄_KL(π_θ, π_{θk}) in Eq. (12)-(14). Concretely, they suggest using a first-order Taylor expansion of J(π_θ) to replace the objective (12) as follows,
(1/(1−γ)) E_{s∼d^{ρ0}_{π_{θk}}(·), a∼π_{θk}(·|s)} [ (π_θ(a|s)/π_{θk}(a|s)) A_{π_{θk}}(s, a) ] = J(π_θ) − J(π_{θk}) ≈ (θ − θk)^⊤ ∇_θ J(π_θ).
Similarly, Achiam et al. (2017a) use the following approximations to turn the constrained policy optimization (12)-(14) into a convex problem,
(1/(1−γ)) E_{s∼d^{ρ0}_{π_{θk}}(·), a∼π_{θk}(·|s)} [ (π_θ(a|s)/π_{θk}(a|s)) A^c_{π_{θk}}(s, a) ] ≈ (θ − θk)^⊤ ∇_θ Jc(π_θ),   (15)
D̄_KL(π_θ, π_{θk}) ≈ (θ − θk)^⊤ H (θ − θk),   (16)
where H is the Hessian matrix of D̄_KL(π_θ, π_{θk}), i.e.,
H[i, j] := ∂²/(∂θ_i ∂θ_j) E_{s∼d^{ρ0}_{π_{θk}}(·)} [ KL(π_θ, π_{θk})[s] ],
and Eq. (16) is the second-order approximation of (14).
Let λ⋆, ν⋆ be the dual solution of the following problem:
λ⋆, ν⋆ = arg max_{λ≥0, ν≥0} { −(1/(2λ)) ( g^⊤ H^{−1} g − 2νr + ν²s ) + νc − λδ/2 },
where g = ∇_θ E_{s∼d^{ρ0}_{π_{θk}}(·), a∼π_θ(·|s)} [ A_{π_{θk}}(s, a) ], a = ∇_θ E_{s∼d^{ρ0}_{π_{θk}}(·), a∼π_θ(·|s)} [ A^c_{π_{θk}}(s, a) ], r = g^⊤ H^{−1} a, s = a^⊤ H^{−1} a, and c = Jc(π_{θk}) − b. Finally, CPO updates the parameters using the conjugate gradient method as follows: if the approximation to CPO is feasible, then
θ_{k+1} = θ_k + (1/λ⋆) H^{−1} ( g − ν⋆ a );
otherwise,
θ_{k+1} = θ_k − √( 2δ / (a^⊤ H^{−1} a) ) H^{−1} a.
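To make this update concrete, here is a rough numerical sketch of a single CPO step with a small dense H (real implementations use conjugate gradient and a backtracking line search); the coarse grid over ν, the feasibility test, and the toy inputs are simplifying assumptions, not the benchmark's implementation.

import numpy as np

def cpo_step(g, a, H, c, delta):
    """One CPO update direction given surrogate gradients g (reward) and a (cost).

    c = Jc(pi_k) - b is the constraint slack; delta is the trust-region size.
    """
    Hinv = np.linalg.inv(H)
    q = g @ Hinv @ g
    r = g @ Hinv @ a
    s = a @ Hinv @ a
    # If the constraint cannot be satisfied inside the trust region, take a recovery step.
    if c > 0 and c ** 2 / s - 2 * delta > 0:
        return -np.sqrt(2 * delta / s) * (Hinv @ a)
    # Maximize the dual over nu on a coarse grid, with lambda at its analytic optimum.
    best_step, best_val = None, -np.inf
    for nu in np.linspace(0.0, 10.0, 201):
        A = max(q - 2 * nu * r + nu ** 2 * s, 1e-12)
        lam = np.sqrt(A / delta)
        val = -A / (2 * lam) + nu * c - lam * delta / 2
        if val > best_val:
            best_val = val
            best_step = (1.0 / lam) * (Hinv @ (g - nu * a))
    return best_step

# Toy two-parameter example.
print(cpo_step(g=np.array([1.0, 0.5]), a=np.array([0.2, 0.1]),
               H=np.eye(2), c=-0.05, delta=0.01))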
E.2 PCPO (YANG ET AL., 2020B)
Projection-Based Constrained Policy Optimization (PCPO) is an iterative method for optimizing policies in a two-step process: the first step performs a local reward improvement update, while the second step reconciles any constraint violation by projecting the policy back onto the constraint set.
Reward Improvement.
$$\pi_{\theta_{k+\frac{1}{2}}} = \arg\max_{\pi_\theta\in\Pi_\theta}\ \mathbb{E}_{s\sim d^{\rho_0}_{\pi_{\theta_k}}(\cdot),\,a\sim\pi_\theta(\cdot|s)}\left[A_{\pi_{\theta_k}}(s,a)\right],$$
$$\text{s.t. } \bar{D}_{KL}(\pi_\theta,\pi_{\theta_k}) = \mathbb{E}_{s\sim d^{\rho_0}_{\pi_{\theta_k}}(\cdot)}\left[\mathrm{KL}(\pi_\theta,\pi_{\theta_k})[s]\right] \le \delta;$$
Projection.
$$\pi_{\theta_{k+1}} = \arg\min_{\pi_\theta\in\Pi_\theta}\ D\!\left(\pi_\theta,\ \pi_{\theta_{k+\frac{1}{2}}}\right),$$
$$\text{s.t. } J_c(\pi_{\theta_k}) + \frac{1}{1-\gamma}\,\mathbb{E}_{s\sim d^{\rho_0}_{\pi_{\theta_k}}(\cdot),\,a\sim\pi_\theta(\cdot|s)}\left[A^c_{\pi_{\theta_k}}(s,a)\right] \le b.$$
Then, following CPO (Achiam et al., 2017a), Yang et al. (2020b) use a convex approximation to the original problem and obtain the update rule
$$\theta_{k+1} = \theta_k + \sqrt{\frac{2\delta}{g^\top H^{-1}g}}\,H^{-1}g - \max\left\{0,\ \frac{\sqrt{\frac{2\delta}{g^\top H^{-1}g}}\,a^\top H^{-1}g + c}{a^\top L^{-1}a}\right\}L^{-1}a,$$
where $L = I$ if $D$ is the $\ell_2$-norm, and $L = H$ if $D$ is the KL-divergence.
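The analytic PCPO update can be sketched in a few lines once the natural-gradient products are available. The snippet below assumes the KL projection case ($L = H$) and that $H^{-1}g$, $H^{-1}a$ and the constraint value $c$ are precomputed; it is an illustrative sketch rather than the actual implementation.

```python
import torch

def pcpo_step(theta, g, a, Hinv_g, Hinv_a, c, delta):
    # First step: natural-gradient reward improvement within the KL trust region.
    step_size = torch.sqrt(2.0 * delta / (g.dot(Hinv_g) + 1e-12))
    reward_step = step_size * Hinv_g
    # Second step: project back onto the constraint set (only if it would be violated).
    violation = torch.clamp(step_size * a.dot(Hinv_g) + c, min=0.0)
    projection = violation / (a.dot(Hinv_a) + 1e-12) * Hinv_a
    return theta + reward_step - projection
```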
E.3 FOCOPS (ZHANG ET AL., 2020B)
Zhang et al. (2020b) propose First Order Constrained Optimization in Policy Space (FOCOPS), a two-step approach, which we present as follows.
Step 1: Finding the optimal update policy.
First, for a given policy $\pi_{\theta_k}$, FOCOPS finds an optimal update policy $\pi^\star$ by solving the optimization problem (12)-(14) in the non-parameterized policy space.
$$\pi^\star = \arg\max_{\pi\in\Pi}\ \mathbb{E}_{s\sim d^{\rho_0}_{\pi_{\theta_k}}(\cdot),\,a\sim\pi(\cdot|s)}\left[A_{\pi_{\theta_k}}(s,a)\right] \quad (17)$$
$$\text{s.t. } J_c(\pi_{\theta_k}) + \frac{1}{1-\gamma}\,\mathbb{E}_{s\sim d^{\rho_0}_{\pi_{\theta_k}}(\cdot),\,a\sim\pi(\cdot|s)}\left[A^c_{\pi_{\theta_k}}(s,a)\right] \le b, \quad (18)$$
$$\bar{D}_{KL}(\pi,\pi_{\theta_k}) = \mathbb{E}_{s\sim d^{\rho_0}_{\pi_{\theta_k}}(\cdot)}\left[\mathrm{KL}(\pi,\pi_{\theta_k})[s]\right] \le \delta. \quad (19)$$
If πθk is feasible, then the optimal policy for (17)-(19) takes the following form:
$$\pi^\star(a|s) = \frac{\pi_{\theta_k}(a|s)}{Z_{\lambda,\nu}(s)}\exp\left(\frac{1}{\lambda}\left(A_{\pi_{\theta_k}}(s,a) - \nu A^c_{\pi_{\theta_k}}(s,a)\right)\right), \quad (20)$$
where $Z_{\lambda,\nu}(s)$ is the partition function which ensures that (20) is a valid probability distribution, and $\lambda$ and $\nu$ are solutions to the optimization problem
$$\min_{\lambda,\nu\ge 0}\ \lambda\nu + \nu\tilde{b} + \lambda\,\mathbb{E}_{s\sim d^{\rho_0}_{\pi_{\theta_k}}(\cdot),\,a\sim\pi^\star(\cdot|s)}\left[Z_{\lambda,\nu}(s)\right],$$
with $\tilde{b} = (1-\gamma)\left(b - J_c(\pi_{\theta_k})\right)$.
Step 2: Projection.
Then, FOCOPS projects the policy found in the previous step back into the parameterized policy space Πθ by solving for the closest policy πθ ∈ Πθ to π⋆ in order to obtain πθk+1 :
$$\pi_{\theta_{k+1}} = \arg\min_{\pi_\theta\in\Pi_\theta}\ \mathbb{E}_{s\sim d^{\rho_0}_{\pi_{\theta_k}}(\cdot)}\left[\mathrm{KL}(\pi_\theta,\pi^\star)[s]\right].$$
In practice, stochastic gradient descent is applied to obtain the solution $\theta_{k+1}$ of the above problem.
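A simplified sketch of the surrogate loss commonly used for this projection step is given below, assuming advantages, cost advantages, per-state KL values and the multipliers $(\lambda,\nu)$ are already estimated; the per-state KL indicator and the hyperparameters are simplified relative to the original FOCOPS implementation.

```python
import torch

def focops_policy_loss(log_prob, old_log_prob, kl, adv, cost_adv, lam, nu, delta):
    """Per-sample surrogate whose gradient moves pi_theta toward pi* inside the trust region."""
    ratio = torch.exp(log_prob - old_log_prob)
    mask = (kl.detach() <= delta).float()   # only update where the per-state KL is within delta
    loss = mask * (kl - (1.0 / lam) * ratio * (adv - nu * cost_adv))
    return loss.mean()
```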
E.4 PPO-LAG
The Lagrangian approach is a standard way to solve CMDP (1), which is also known as primal-dual policy optimization:
$$(\pi^\star,\lambda^\star) = \arg\min_{\lambda\ge 0}\max_{\pi\in\Pi_\theta}\ \left\{J(\pi) - \lambda\left(J_c(\pi) - b\right)\right\}. \quad (21)$$
TRPO-Lag and PPO-Lag combine the Lagrangian approach with TRPO and PPO, respectively. Concretely, PPO-Lag uses the following clipped surrogate to replace $J(\pi)$ in (21),
$$\mathcal{L}^r_{\mathrm{clip}}(\pi_\theta) = \mathbb{E}_{s\sim d^{\rho_0}_{\pi_k}(\cdot),\,a\sim\pi_k(\cdot|s)}\left[-\min\left\{\frac{\pi_\theta(a|s)}{\pi_k(a|s)}A_{\pi_k}(s,a),\ \mathrm{clip}\left(\frac{\pi_\theta(a|s)}{\pi_{\theta_k}(a|s)},1-\epsilon,1+\epsilon\right)A_{\pi_k}(s,a)\right\}\right],$$
where $\pi_k$ is short for $\pi_{\theta_k}$. Replacing $A_{\pi_k}(s,a)$ with $A^c_{\pi_k}(s,a)$, we obtain $\mathcal{L}^c_{\mathrm{clip}}$ as follows,
$$\mathcal{L}^c_{\mathrm{clip}}(\pi_\theta) = \mathbb{E}_{s\sim d^{\rho_0}_{\pi_k}(\cdot),\,a\sim\pi_k(\cdot|s)}\left[-\min\left\{\frac{\pi_\theta(a|s)}{\pi_k(a|s)}A^c_{\pi_k}(s,a),\ \mathrm{clip}\left(\frac{\pi_\theta(a|s)}{\pi_{\theta_k}(a|s)},1-\epsilon,1+\epsilon\right)A^c_{\pi_k}(s,a)\right\}\right].$$
Then, PPO-Lag updates the policy as follows,
$$(\pi_{k+1},\lambda_{k+1}) = \arg\min_{\lambda\ge 0}\max_{\pi_\theta\in\Pi_\theta}\ \left\{\mathcal{L}^r_{\mathrm{clip}}(\pi_\theta) - \lambda\left(\mathcal{L}^c_{\mathrm{clip}}(\pi_\theta) - b\right)\right\}. \quad (22)$$
All of the above terms can be estimated from samples collected by the policy $\pi_k$. PPO-Lag then updates the policy with a first-order optimizer as follows,
$$\theta_{k+1} = \theta_k + \eta\,\frac{\partial}{\partial\theta}\left(\mathcal{L}^r_{\mathrm{clip}}(\pi_\theta) - \lambda\left(\mathcal{L}^c_{\mathrm{clip}}(\pi_\theta) - b\right)\right)\Big|_{\theta=\theta_k,\,\lambda=\lambda_k}, \quad (23)$$
$$\lambda_{k+1} = \left[\lambda_k + \eta\left(\mathcal{L}^c_{\mathrm{clip}}(\pi_\theta) - b\right)\right]_+\Big|_{\theta=\theta_k}, \quad (24)$$
where $\eta > 0$ is the step size and $[\cdot]_+$ denotes the projection onto $[0,\infty)$.
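The two updates (23)-(24) amount to a first-order primal step on the policy and a projected gradient step on the multiplier. A minimal PyTorch sketch is shown below; the loss tensors and the use of the cost surrogate as a stand-in for the episodic cost are illustrative assumptions.

```python
import torch

def ppo_lag_update(policy_opt, lagrange_mult: torch.Tensor,
                   reward_loss, cost_loss, budget: float, lr_lambda: float = 0.01):
    # Primal step, Eq. (23): first-order update of the policy parameters.
    policy_opt.zero_grad()
    (reward_loss + lagrange_mult.detach() * cost_loss).backward()
    policy_opt.step()
    # Dual step, Eq. (24): projected gradient ascent on the multiplier. In practice the
    # average episodic cost is often used here in place of the cost surrogate.
    with torch.no_grad():
        lagrange_mult += lr_lambda * (cost_loss.detach() - budget)
        lagrange_mult.clamp_(min=0.0)
    return lagrange_mult
```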
E.5 TRPO-LAG
TRPO-Lag shares a similar idea but is adapted to TRPO. TRPO-Lag replaces $J(\pi_\theta)$ as follows,
$$J(\pi_\theta) \approx J(\pi_{\theta_k}) + (\theta-\theta_k)^\top\nabla_\theta J(\pi_\theta) =: \mathcal{L}^r(\pi_\theta). \quad (25)$$
Similarly,
$$J_c(\pi_\theta) \approx J_c(\pi_{\theta_k}) + (\theta-\theta_k)^\top\nabla_\theta J_c(\pi_\theta) =: \mathcal{L}^c(\pi_\theta), \quad (26)$$
and
$$\bar{D}_{KL}(\pi_\theta,\pi_{\theta_k}) \approx (\theta-\theta_k)^\top H(\theta-\theta_k), \quad (27)$$
where $H$ is the Hessian matrix of $\bar{D}_{KL}(\pi_\theta,\pi_{\theta_k})$, i.e.,
$$H[i,j] := \frac{\partial^2}{\partial\theta_i\partial\theta_j}\,\mathbb{E}_{s\sim d^{\rho_0}_{\pi_{\theta_k}}(\cdot)}\left[\mathrm{KL}(\pi_\theta,\pi_{\theta_k})[s]\right].$$
Then, TRPO-Lag updates the policy as follows,
$$(\pi_{k+1},\lambda_{k+1}) = \arg\min_{\lambda\ge 0}\max_{\pi_\theta\in\Pi_\theta}\ \left\{\mathcal{L}^r(\pi_\theta) - \lambda\left(\mathcal{L}^c(\pi_\theta) - b\right)\right\}\Big|_{\theta=\theta_k,\,\lambda=\lambda_k}, \quad (28)$$
where the policy parameter $\theta$ satisfies the trust-region condition $(\theta-\theta_k)^\top H(\theta-\theta_k) \le \delta$.
E.6 P3O (ZHANG ET AL., 2022)
P3O solves the cumbersome constrained policy iteration via a single minimization of an equivalent unconstrained problem,
$$\pi_{k+1} = \arg\min_{\pi\in\Pi_\theta}\ \left\{\mathbb{E}_{s\sim d^{\rho_0}_{\pi_k}(\cdot),\,a\sim\pi_k(\cdot|s)}\left[\frac{\pi(a|s)}{\pi_k(a|s)}A_{\pi_k}(s,a)\right] + \kappa\,B(\pi,b)\right\}, \quad (29)$$
where $\kappa$ is a positive scalar and the penalty term $B(\pi,b)$ is defined as
$$B(\pi,b) = \max\left\{0,\ \mathbb{E}_{s\sim d^{\rho_0}_{\pi_k}(\cdot),\,a\sim\pi_k(\cdot|s)}\left[\frac{\pi(a|s)}{\pi_k(a|s)}A^c_{\pi_k}(s,a)\right] + (1-\gamma)\left(J_c(\pi_k) - b\right)\right\}. \quad (30)$$
P3O utilizes a simple yet effective penalty approach to eliminate the cost constraints and removes the trust-region constraint via the clipped surrogate objective.
For the practical implementation, P3O considers the following optimization objective:
$$\mathcal{L}_{\mathrm{P3O}}(\theta) = \mathcal{L}^r_{\mathrm{P3O}}(\theta) + \kappa\max\left\{0,\ \mathcal{L}^c_{\mathrm{P3O}}(\theta)\right\}, \quad (31)$$
where
$$\mathcal{L}^r_{\mathrm{P3O}}(\theta) = \mathbb{E}\left[-\min\left\{\frac{\pi_\theta(a|s)}{\pi_k(a|s)}A_{\pi_k}(s,a),\ \mathrm{clip}\left(\frac{\pi_\theta(a|s)}{\pi_{\theta_k}(a|s)},1-\epsilon,1+\epsilon\right)A_{\pi_k}(s,a)\right\}\right],$$
$$\mathcal{L}^c_{\mathrm{P3O}}(\theta) = \mathbb{E}\left[\max\left\{\frac{\pi_\theta(a|s)}{\pi_k(a|s)}A^c_{\pi_k}(s,a),\ \mathrm{clip}\left(\frac{\pi_\theta(a|s)}{\pi_{\theta_k}(a|s)},1-\epsilon,1+\epsilon\right)A^c_{\pi_k}(s,a)\right\} + (1-\gamma)\left(J_c(\pi_k) - b\right)\right],$$
and $\mathbb{E}[\cdot]$ is short for $\mathbb{E}_{s\sim d^{\rho_0}_{\pi_k}(\cdot),\,a\sim\pi_k(\cdot|s)}[\cdot]$. All of the terms in (31) can be estimated from samples collected by $\pi_k$.
In each round, P3O chooses the penalty parameter adaptively according to the rule
$$\kappa \leftarrow \min\left\{\rho\kappa,\ \kappa_{\max}\right\}, \quad (32)$$
where $\rho > 1$ and $\kappa_{\max}$ is a positive scalar. Finally, P3O updates the policy parameter as follows,
$$\theta \leftarrow \theta - \eta\,\frac{\partial}{\partial\theta}\mathcal{L}_{\mathrm{P3O}}(\theta). \quad (33)$$
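A minimal sketch of the P3O loss (31) and the penalty schedule (32) is given below, assuming the clipped reward/cost surrogates and the scaled constraint violation are computed from the current batch; the default values of $\rho$ and $\kappa_{\max}$ are illustrative.

```python
import torch

def p3o_loss(reward_surrogate, cost_surrogate, scaled_violation, kappa: float):
    # Eq. (31): exact-penalty objective; cost_surrogate + scaled_violation plays the role of L^c_P3O.
    return reward_surrogate + kappa * torch.clamp(cost_surrogate + scaled_violation, min=0.0)

def update_kappa(kappa: float, rho: float = 1.5, kappa_max: float = 50.0) -> float:
    # Eq. (32): monotone schedule for the penalty coefficient.
    return min(rho * kappa, kappa_max)
```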
E.7 IPO (LIU ET AL., 2020)
IPO augments the objective with a logarithmic barrier function (Nemirovski, 2004) to learn a safe policy. Concretely, IPO updates the policy as follows,
$$\pi_{k+1} = \arg\max_{\pi\in\Pi_\theta}\ \left\{\mathcal{L}^r_{\mathrm{clip}}(\pi) + \phi(\pi)\right\}, \quad (34)$$
where $\mathcal{L}^r_{\mathrm{clip}}(\pi)$ is the clipped surrogate objective and $\phi(\pi)$ is the logarithmic barrier function associated with the CMDP constraint,
$$\phi(\pi) = \frac{1}{m}\log\left(b - J_c(\pi)\right), \quad (35)$$
where m > 0 is a hyper-parameter that needs to be tuned.
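The objective (34)-(35) can be sketched as follows, with `cost_estimate` approximating $J_c(\pi)$; the barrier coefficient $m$ and the fallback penalty used when the constraint is already violated are illustrative choices, not the values used in IPO.

```python
import torch

def ipo_objective(clip_surrogate, cost_estimate, budget: float, m: float = 20.0):
    slack = budget - float(cost_estimate)     # remaining cost budget, b - J_c(pi)
    if slack <= 0.0:
        # The log barrier is undefined once the constraint is violated; a large penalty
        # is used as a simple (illustrative) fallback.
        return clip_surrogate - 1e6
    return clip_surrogate + torch.log(torch.tensor(slack)) / m
```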
E.8 CPPO-PID (STOOKE ET AL., 2020B)
CPPO-PID (Stooke et al., 2020b) also adopts the primal-dual policy optimization method to solve the CMDP problem,
$$(\pi^\star,\lambda^\star) = \arg\min_{\lambda\ge 0}\max_{\pi\in\Pi_\theta}\ \left\{J(\pi) - \lambda\left(J_c(\pi) - b\right)\right\}. \quad (36)$$
The main difference between CPPO-PID and the previous Lagrangian methods is that, instead of the gradient-ascent multiplier update used above, CPPO-PID uses a PID control technique (Johnson & Moradi, 2005) to update the Lagrange multiplier.
First, CPPO-PID updates the parameter $\theta$ as follows,
$$\theta_{k+1} = \theta_k + \eta\,\frac{\partial}{\partial\theta}\left(\mathcal{L}^r_{\mathrm{clip}}(\pi_\theta) - \lambda\left(\mathcal{L}^c_{\mathrm{clip}}(\pi_\theta) - b\right)\right)\Big|_{\theta=\theta_k,\,\lambda=\lambda_k}, \quad (37)$$
where $\eta > 0$ is the step size.
Then, CPPO-PID updates the parameter $\lambda$ according to the PID method (Algorithm 2 in Stooke et al. (2020b)),
$$\lambda_{k+1} = \mathrm{PID}(K_P, K_I, K_D, \pi_{\theta_k}, n), \quad (38)$$
where $K_P$, $K_I$, $K_D$ are gains that need to be tuned, and $n$ is the iteration number.
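A minimal sketch of a PID-controlled Lagrange multiplier in the spirit of Stooke et al. (2020b) is shown below; the gains, the cost budget, and the exact form of the integral and derivative terms are illustrative and would need tuning per task.

```python
class PIDLagrangian:
    """Keeps a non-negative Lagrange multiplier driven by the constraint violation."""
    def __init__(self, kp=0.05, ki=0.0005, kd=0.1, budget=25.0):
        self.kp, self.ki, self.kd, self.budget = kp, ki, kd, budget
        self.integral = 0.0
        self.prev_cost = 0.0

    def update(self, episode_cost: float) -> float:
        error = episode_cost - self.budget                    # proportional term
        self.integral = max(self.integral + error, 0.0)       # accumulated violation
        derivative = max(episode_cost - self.prev_cost, 0.0)  # cost increase
        self.prev_cost = episode_cost
        return max(self.kp * error + self.ki * self.integral + self.kd * derivative, 0.0)
```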
F CORRECTNESS VERIFICATION
F.1 TASK LIST
Below are four safety environments that provide interesting tasks for Safe RL. Among them, three are popular safety environments: MuJoCo-Velocity (Todorov et al., 2012), Safety-Gym (Ray et al., 2019a), and Bullet-Safety-Gym (Gronauer, 2022). We test the 8 algorithms on 26 tasks in these safety environments to verify the re-implementations we use in TrustDeHands. The complete list of these tasks is shown in Table 17.
F.2 TASK PERFORMANCE
We show the performance of our re-implementations on these tasks in Tables 18, 19, 20, 21, 22, and 23.

Reviewer Questions
1. What is the focus and contribution of the paper regarding TrustDeHands?
2. What are the strengths and weaknesses of the proposed benchmark suite, particularly in terms of its purpose and engineering efforts?
3. Do you have any concerns about the similarity between the proposed approach and prior works?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?

Summary Of The Paper
In this paper, the authors propose a benchmark suite for safety-aware dexterous manipulation, called TrustDeHands. TrustDeHands is built within Isaac Gym, with GPU-based massively parallel simulation capability. Several safety-aware RL algorithms are implemented.
Strengths And Weaknesses
Strength:
Safety-aware dexterous manipulation is important. It would be valuable to provide a benchmark for the academic community.
The appendix is detailed with non-trivial engineering efforts.
Weakness:
This paper does not serve well for the purpose of building a benchmark suite for safety-aware dexterous manipulation.
For safety-aware RL gyms, accurate modeling and complete coverage would be more important than massively parallel simulation.
Massively parallel simulation is a tech breakthrough by Isaac Gym, and can be an add-on for your task, but not the main course.
This manuscript borrows some ideas from the following paper but does not provide substantial new contributions. From my readings, it looks like a repurposing of it to claim a new area. However, merging the two would be more valuable to the RL community. Chen, Yuanpei, et al. "Towards Human-Level Bimanual Dexterous Manipulation with Reinforcement Learning." Thirty-sixth Conference on Neural Information Processing Systems Datasets and Benchmarks Track.
Clarity, Quality, Novelty And Reproducibility
The appendix is detailed and clear.
The novelty is incremental; as mentioned above, it largely borrows from a previously published paper.
Reproducibility is good.
The quality of this work is not satisfactory, since it does not serve the goal of safety-aware dexterous manipulation well.
ICLR | Title
A Massively Parallel Benchmark for Safe Dexterous Manipulation
Abstract
Safe Reinforcement Learning (Safe RL) aims to maximize expected total rewards while avoiding violations of safety constraints. Although a plethora of safety-constrained environments have been developed to evaluate Safe RL methods, most of them focus on navigation tasks, which are rather simple and have a non-trivial gap with real-world applications. For robotics studies, dexterous manipulation is becoming ubiquitous; however, the idea of safe dexterous manipulation is rarely studied in robotics applications. In this paper, we propose TrustDeHands, a massively parallel benchmark for Safe RL studies on safe dexterous manipulation tasks. TrustDeHands is built within Isaac Gym, a GPU-level parallel simulator that enables a highly efficient RL training process. To stay close to real-world settings, TrustDeHands offers multi-modal visual inputs, including RGB, RGB-D and point cloud, and supports a variety of arms and dexterous hands from different brands. Moreover, TrustDeHands provides a solid implementation of eight popular safe policy optimization algorithms; this facilitates trustworthy validation of Safe RL methods outside navigation tasks. TrustDeHands includes a myriad of challenging tasks that require safety awareness (e.g., Jenga). Results on these tasks show that Safe RL methods can achieve better performance than classical RL algorithms, indicating the effectiveness of Safe RL in safe robot manipulation tasks. To the best of our knowledge, TrustDeHands is the first benchmark targeting safe dexterous manipulation. We expect this benchmark to consistently serve as a reliable evaluation suite for future Safe RL developments and to further promote the integration between the lines of research of Safe RL and dexterous manipulation. The code and demonstration can be found at https://sites.google.com/view/trustdehands/.
1 INTRODUCTION
Reinforcement Learning (RL) is a powerful way to solve sequential decision problems and has achieved superhuman performance in games (Silver et al., 2016; 2017; Vinyals et al., 2019; OpenAI, 2018), robotics (Andrychowicz et al., 2020; Chen et al., 2022b), and finance (Hambly et al., 2021). Before RL is deployed in the real world, researchers are tasked with proving its trustworthiness, so as to maximize the benefits of AI systems while minimizing their risks (Xu et al., 2022). The fundamental principle of RL is that an agent tries to maximize cumulative returns by trial and error, but the agent may exhibit dangerous or harmful behaviors during the learning process. Thus, it is important to consider safe exploration, which is known as Safe RL. Most existing RL simulators ignore the safety issue during learning, where failure is acceptable and even desirable in order to learn from bad outcomes. But in the real world, such exploration can produce undesired harm.
In recent years, robot manipulation (Billard & Kragic, 2019) has been an important application area of RL, covering many research directions (Kim et al., 2021; Okamura et al., 2000; Kumar et al., 2016). Among them, dexterous multi-fingered manipulation is the most challenging, as it puts forward higher requirements on control (Bircher et al., 2017; Rahman et al., 2016). To deal with dynamic environments and policy generalization, OpenAI et al. (2019); Chen et al. (2022a) have studied dexterous manipulation tasks and achieved significant results. Moreover, trustworthiness of manipulation (Xu et al., 2022) is an important issue that needs to be considered. If a robot is to manipulate in the real world, ensuring safe and trustworthy behavior is the highest priority. However, in previous research there is no benchmark for safe manipulation. We therefore propose TrustDeHands, which uses safe policy learning to learn dexterous manipulation, hoping to fill this research gap.
Additionally, the implementations of many existing Safe RL methods are unavailable, which leaves researchers suffering from incorrect implementations, unfair comparisons, and misleading conclusions. Safe policy learning is critical in real-world RL applications, where dangerous decisions are undesirable. For example, a robot agent should avoid taking actions that irreversibly damage its hardware (Ray et al., 2019a; Dulac-Arnold et al., 2019). Due to its importance, the community has been actively researching safe policy learning (e.g., (Alshiekh et al., 2018; Stooke et al., 2020b; Gu et al., 2021; Yuan et al., 2021; Gronauer, 2022; Yang et al., 2022; Liu et al., 2022)). However, most of the existing work mainly focuses on algorithm design. Among these works, either the authors did not publish the source code (e.g., P3O (Zhang et al., 2022)), or the algorithms were implemented using different frameworks (e.g., PCPO (Yang et al., 2020b) in Theano (Al-Rfou et al., 2016), CPPO-PID (Stooke et al., 2020b) in PyTorch), with divergent approaches (FOCOPS (Zhang et al., 2020b) does not parallelize sample collection while others do), and on separate tasks (FOCOPS is tested solely on MuJoCo-Velocity (Todorov et al., 2012) and CPPO-PID solely on Safety-Gym (Ray et al., 2019a)). While safety-starter-agents (Ray et al., 2019a) exists as a publicly available collection of algorithms, it was implemented using TensorFlow1, requires old hardware and systems, lacks recent updates, and is no longer maintained. As a result, the Safe RL community has experienced serious difficulty in reproducing experimental results, comparing algorithms fairly, and deriving correct insights. An open-source, standardized algorithm implementation for algorithm verification and empirical study is desperately needed.
To facilitate the consideration we mentioned above, we developed a bimanual dexterous manipulation environment: TrustDeHands, with a unified re-implementation of Safe RL algorithms. We highlight three particularly desirable features of TrustDeHands:
• For Safe RL researchers. We provide a series of complex and challenging safe dexterous manipulation tasks. The design of these tasks stems from the need for safety robot manipulation in our daily life (e.g., sweeping the floor without touching other furniture). In these environments, we have done exhaustive experiments with the implemented algorithms and contributed the results, our observations, and analysis for the reference of the community.
• For robotic researchers. TrustDeHands is the first collection of tasks focused on safe dexterous manipulation. In addition to safety research, we also provide a variety of features, including 1) multi-modal information as the policy input (e.g., contact force, RGB image, RGB-D image, point cloud, ...); 2) customizable dexterous hands and a robotic arm driving the dexterous hand. These features provide a comprehensive platform for robotic research.
• Unified, highly-optimized, and extensible Safe RL algorithms. We re-implement widely used Safe RL algorithms, which support TrustDeHands and all popular environments in a single well-designed algorithm framework. We have done maximum abstraction and encapsulation, deriving a similar model structure and update paradigm, thus enabling code reuse, ensuring a clean code style, and making it extremely extensible.
2 RELATED WORK
Safe RL Environments Simulators play a critical role in RL training since it is very expensive to collect data in the real world. Safety-Gym (Ray et al., 2019b) introduces a robot that has to navigate through a cluttered environment to achieve a task, providing a suite of complex continuous control environments for Safe RL. Safe-control-gym (Yuan et al., 2021) introduces cart-pole, 1D, and 2D quadrotor dynamic systems for control tasks like stabilization or trajectory tracking, and allows constraint specification and disturbance injection onto a robot's inputs, states, and inertial properties through a portable configuration system. AI Safety Gridworlds (Leike et al., 2017) proposes environments for evaluating various safety properties of intelligent agents, including safe interruptibility, avoiding side effects, safe exploration, distributional shift, etc. MuJoCo-Velocity, originally proposed in (Zhang et al., 2020a), consists of a series of safety tasks such as constrained velocity, based on the MuJoCo environment (Todorov et al., 2012). However, there is still no safe environment for safe robot manipulation, where the difficulty lies in safe high-dimensional continuous control and dealing with dynamic environments. We therefore introduce TrustDeHands, which aims to apply Safe RL to dexterous manipulation, providing a more challenging environment for evaluating Safe RL algorithms.
Safe RL Algorithms Since we formulate safe RL under CMDPs (Altman, 1999), in this section we mainly review algorithms w.r.t. CMDPs. For more discussions about safe RL algorithms, please re-
fer to the recent survey (Xu et al., 2022; Gu et al., 2022). With the rise of deep RL, CMDPs are also moving to more high-dimensional continuous control problems. CPO (Achiam et al., 2017b) proposes the general-purpose policy search algorithm for Safe RL with guarantees for near-constraint satisfaction at each iteration. PCPO (Yang et al., 2020a) utilizes a different two-step approach(i.e. first finds the policy with the maximum return, then projects this policy back into the safety region in terms of the minimum KL divergence.). FOCOPS (Zhang et al., 2020a) has adopted a similar idea by directly solving the constrained policy optimization problem via the primal-dual approach (Boyd et al., 2004) then projecting the solution back into the parametric policy space. Traditional robot control also considers the safety problem. Chow et al. (2018; 2019) presents a method via constructing Lyapunov function to guarantees the constraint satisfaction during training. Stooke et al. (2020a) combines PID control with Lagrangian methods which dampens cost oscillations resulting in reduced constraint violations. It is still lacking a unified and efficient framework to cover these algorithms. Therefore, we provide PyTorch-version re-implementations of widely used safe policy optimization algorithms, hoping to facilitate experimental validation in Safe RL research.
Dexterous Manipulation Manipulation is one of the essential research topics in robotics, researchers have long tried to establish a stable theory of manipulation (Billard & Kragic, 2019). However, traditional methods mostly rely on various assumptions, such as knowing the environmental dynamics model or having no uncertainty in the process. In recent years, learning-based approaches have been successful in this regard, coping with uncertainty in perception and even generalizing to unseen objects (Bohg et al., 2013). There are many learning-based benchmarks for robotic manipulation in recent years (Yu et al., 2020; James et al., 2020; Zhu et al., 2020), but none of them use dexterous hands or consider safe constraints. Dexterous multi-finger hands provide intrinsic dexterity for better manipulation in unstructured scenes and contact-rich situations, but additionally bring the challenges of high-dimensional control and complex contact models (Bircher et al., 2017; Rahman et al., 2016). Previous research methods have mostly focused on trajectory optimization or model prediction, which highly relied on accurate dynamics models (Kim et al., 2021; Okamura et al., 2000; Kumar et al., 2016). For example, Williams et al. (2015) performs in-hand manipulation of a cube using a trajectory optimization technique known as Model Predictive Path Integral (MPPI). Charlesworth & Montana (2021) extended the MPPI method to allow objects to be thrown and catch between two hands. OpenAI et al. (2019) solved a Rubik’s cube using model-free RL and domain randomization techniques. Chen et al. (2022a) proposed an in-hand manipulation system to learn how to manipulate a large number of objects of different shapes, and even generalize to unseen objects. Qin et al. (2022; 2021) studied dexterous manipulation learning from human demonstration. Chen et al. (2022c) studied the bimanual dexterous manipulation to solve cooperative manipulation and skill generalization problem. While most of them focus on unconstrained dexterous manipulation, how to do dexterous manipulation safely is an unstudied topic. In this paper, we provide a massively parallel benchmark for safe dexterous manipulation, hoping to facilitate research on how to manipulate safely.
3 THE SAFETY LEARNING ENVIRONMENT
TrustDeHands consists of two parts: the safety learning environment and the safe policy optimization algorithms. In this section, we present the high-level design of the safety learning environment.
3.1 SYSTEM DESIGN AND DATASETS
TrustDeHands is a collection of challenging dexterous manipulation tasks, underpinned by Isaac Gym (Makoviychuk et al., 2021) and capable of high parallelism on the GPU. In TrustDeHands, all tasks require two dexterous hands to manipulate one or more objects. We design a series of tasks that require policies to perform safe dexterous manipulation, including throwing, grasping, jerking, pulling, etc. At the same time, each task provides the customizability of dexterous hands and objects to support a diverse task.
The construction of the dataset includes the configuration of dexterous hands and objects. The core goal of our dataset is to generate a wide variety of scenarios for learning constrained dexterous manipulation. We collected a variety of dexterous multi-finger hands as manipulators, including most of the dexterous hands currently used in robotics. In addition to manipulators, objects also play a crucial role in building datasets. Our manipulation objects are mainly from the YCB (Calli et al., 2017) and SAPIEN (Xiang et al., 2020) datasets. Both datasets contain many objects used in everyday life.
3.2 TASKS REPRESENTATION
TrustDeHands contains 10+ tasks focused on dexterous manipulation. Each task contains two dexterous hands and one or more manipulated objects, such as balls, blocks, etc.; the ultimate goal is to manipulate the objects to the task-specified locations while satisfying the safety constraints. The default dexterous hand used by our framework is the Shadow Hand (ShadowRobot, 2005); more details are provided in Appendix A. The agent performs each task according to its observation, action representation, and its reward and cost function definition. We provide more underlying technical details about the tasks in Appendix B.
Constrained Markov Decision Processes (CMDPs) A Constrained Markov Decision Process (CMDP) is defined as (S, A, P, r, ρ0, γ, C), where S is the state space, A is the action space, P : S × A × S → [0, 1] is the transition probability function, r : S × A × S → R is the reward function, ρ0(·) ∈ P(S) is the initial state distribution (P(X) denotes the set of probability distributions over a set X), γ ∈ [0, 1) is the discount factor, and C = {(c, b)} is the constraint set, where c : S × A × S → R is a cost function, and b is the cost threshold. We use π : S → P(A) to denote a stationary policy, and use Π to denote the set of all stationary policies. Let τ = {st, at, rt+1, ct+1}t≥0 ∼ π be a trajectory generated by π, where s0 ∼ ρ0(·), at ∼ π(·|st), st+1 ∼ P(·|st, at), rt+1 = r(st+1|st, at), and ct+1 = c(st+1|st, at). The state value function of π is defined as $V_\pi(s) = \mathbb{E}_\pi\left[\sum_{t=0}^{\infty}\gamma^t r_{t+1} \,\middle|\, s_0 = s\right]$. The goal of reinforcement learning is to maximize the expected total reward, defined as $J(\pi) = \mathbb{E}_{s\sim\rho_0(\cdot)}[V_\pi(s)]$.
We define the cost return function as $J_c(\pi) = \mathbb{E}_{s\sim\rho_0(\cdot)}\left[\sum_{t=0}^{\infty}\gamma^t c_{t+1} \,\middle|\, s_0 = s\right]$, and the feasible policy set $\Pi_{\mathcal{C}}$ as $\Pi_{\mathcal{C}} = \{\pi \,|\, \pi\in\Pi,\ J_c(\pi)\le b,\ \forall (c,b)\in\mathcal{C}\}$. The goal of Safe RL is to learn the optimal policy $\pi^\star$ such that
$$\pi^\star = \arg\max_{\pi\in\Pi_{\mathcal{C}}} J(\pi). \quad (1)$$
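As a small illustration of these definitions, the snippet below estimates the discounted return and cost return of a single sampled trajectory and checks feasibility against a threshold b; the reward, cost, and threshold values are illustrative.

```python
import numpy as np

def discounted_sum(values, gamma: float = 0.99) -> float:
    return float(sum(gamma ** t * v for t, v in enumerate(values)))

rewards = np.array([1.0, 0.5, 0.8])   # illustrative per-step rewards r_{t+1}
costs = np.array([0.0, 1.0, 0.0])     # illustrative per-step costs c_{t+1}
J_hat = discounted_sum(rewards)       # estimate of J(pi) from one trajectory
Jc_hat = discounted_sum(costs)        # estimate of J_c(pi)
is_feasible = Jc_hat <= 25.0          # compare against a cost threshold b (25 is illustrative)
```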
Observation Here we briefly describe the observation space of the tasks, more details can be seen in Appendix B.1. The observation of all tasks consisted of three parts: state information of the left and right Shadow Hand, and information about the task specification. In each task, the state information of the left and right Shadow Hand is the same, each Shadow Hand contains 24 minimum drive units (which contains four underdriven fingertip units) and its state consists of the following information:
• Dp,Dv,Df ∈ R24, corresponds to all joint DoF (Degree of Freedom) of angle, velocity, and force with drive units, respectively.
• Pw, Rw ∈ R3 represent the position and rotation of the base of the hand.
• FTi = [FTpose, FTvl, FTva, FTf, FTt] ∈ R19 corresponds to the pose, linear velocity, angular velocity, force magnitude, and torque of each fingertip, respectively.
• A ∈ R20/26, indicates the action executed by the hand in the previous step, which is consistent with the action space.
With the above definitions, the state information of one Shadow Hand can be represented as Hand = {Dp, Dv, Df, Pw, Rw, {FTi}5i=1, A}. We characterize the observation of each task by the following information:
$$\{\text{Hand}_{\text{left}},\ \text{Hand}_{\text{right}},\ G_{\text{task}}\}, \quad (2)$$
where $G_{\text{task}}$ represents observation information specific to each task.
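The following sketch shows how the per-hand state and Eq. (2) could be assembled into a flat observation vector; the field sizes follow the description above, while the function names are illustrative rather than the benchmark's actual API.

```python
import numpy as np

def hand_state(dof_pos, dof_vel, dof_force, base_pos, base_rot, fingertips, prev_action):
    # dof_*: (24,), base_pos/base_rot: (3,), fingertips: five (19,) vectors, prev_action: (20,) or (26,)
    return np.concatenate([dof_pos, dof_vel, dof_force, base_pos, base_rot,
                           *fingertips, prev_action])

def build_observation(left_hand, right_hand, task_obs):
    # Eq. (2): {Hand_left, Hand_right, G_task} flattened into one vector for the policy.
    return np.concatenate([left_hand, right_hand, task_obs])
```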
Action The dual Shadow Hands have more than 40 dimensions of action space, where each Shadow Hand has five fingers with a total of 24 degrees of freedom, the thumb has 5 joints and 5 degrees of freedom, and all other fingers have 3 degrees of freedom and 4 joints (where the joint at the end of each finger is uncontrollable). Therefore, the action space of each hand is 20 dimensions. If the base of the hand is not fixed, there are six dimensions to represent the translation and rotation of the hand base. For the lower and upper limit of the joint angle, see Table 2. In each step, we use the absolute value of each joint angle as the target, and use the PD controller to make it move.
Reward We designed some auxiliary rewards to help RL agents learn more consistently, and each task contains a task-specific bonus. In general, our reward design is goal-based and follows the same set of logic. For object-catching tasks, our reward is simply related to the difference between
the pose of the object and the target. For other tasks that require the hand to hold the object, our reward generally consists of three parts: the distance from the left hand to a left-hand grip point on the object, the distance from the right hand to a right-hand grip point on the object, and the distance from the object to the object’s target.
Cost Each task contains different constraints (e.g., the ball needs to be thrown to a specified height or a specified angle to prevent damage to other items; the robots need to clean the floor without hitting other furniture). The specific constraint design depends on the safety requirements of each task.
3.3 EVALUATION SUITE
TrustDeHands has a collection of 10+ different tasks. These tasks form an evaluation suite for benchmarking the performance of Safe RL algorithms. In this section, we describe six of these representative tasks (see Figure 1), and the remaining tasks are described in detail in Appendix B.
Safe Finger In this task, one hand needs to throw the ball to the other hand, and both hands are on the same horizontal plane. We design two different constraints, including constraints on the minimum drive units and constraints on the finger joints.
Hand Up Wall In this task, one hand needs to throw the ball at a certain height to the other hand. The constraint of this task is concerned with the magnitude of the force of the minimum drive unit.
Hand Over Wall In this task, one hand needs to throw the ball at a certain angle to the other hand. It is more concerned with the constraints in a certain behavioral paradigm and needs to take into account the collaboration of the whole hand-driven unit.
Pick Bottles There are five bottles in a tight row, and the dual hands need to pick up two of the bottles smoothly without touching the others.
Jenga The dual hands need to collaborate to extract the specified blocks from an unstable stacking structure and avoid breaking the rest of the blocks apart.
Clean House There is a broom, trash, and dustpan in this environment. We need to use both hands to manipulate the broom to sweep the trash into the dustpan. There will also be a chair as an obstacle on the way.
3.4 VISUAL INFORMATION
It is very difficult to obtain the state information of the robot in the real world. One way to solve this problem is to use vision sensors as the input to train the policy. Therefore, we provide multiple modalities of visual information as input, including RGB, RGB-D, and point cloud; see Figure 2. The visual input is generated using cameras in Isaac Gym, and the pose and orientation of each camera can be customized by the user to obtain the desired visual information. We also propose a point cloud parallel acceleration function adapted to Isaac Gym and provide an example of using it to train the Hand Over task (see the Appendix).
3.5 CUSTOMIZABLE DIVERSIFORM MANIPULATORS AND ADAPTATION CHALLENGE
There are more types of dexterous hands than the Shadow Hand, such as the Allegro Hand and TriFinger, and supporting other dexterous hands helps advance research and community development. Therefore, in addition to the Shadow Hand, we also provide five other kinds of dexterous multi-finger hands in TrustDeHands. In addition, using a robotic arm to drive the base of the dexterous hand not only matches the real-world setting but is also an important step for sim-to-real transfer. Because it is very difficult to match the real dynamics of a flying hand, TrustDeHands provides a way to reduce the reality gap by adjusting the dynamics and physics parameters of the arm, which simplifies the deployment process from simulation to real-world applications.
Moreover, we offer a variety of arms and a variety of dexterous hand combinations, which has many benefits. For example, researchers can choose the hand they want according to their own conditions, which brings wider applicability to our benchmark. At the same time, we can use different arms and different hands to study the adaptability and generalization ability of policies, which poses challenges for multi-task learning and meta-learning research in the future. A schematic of this feature is shown in Figure 3.
4 EXPERIMENTS
4.1 SAFE RL ALGORITHMS IMPLEMENTATION
Based on their original papers or public code base, we reimplement eight algorithms (CPO (Achiam et al., 2017a), PCPO (Yang et al., 2020b), FOCOPS (Zhang et al., 2020b), P3O (Zhang et al., 2022), PPO-Lag (Ray et al., 2019a), TRPO-Lag (Ray et al., 2019a), CPPO-PID (Stooke et al., 2020b), and IPO (Liu et al., 2020)), covering major safe policy optimization algorithms. A brief introduction to each algorithm is given in Appendix E.
We abstract similar structure of the safe policy optimization algorithms, and modularize the code into interaction with environments, parallel sample collection, buffer storage and computation, algorithm core update, and auxiliary functionalities such as visualization and logger. Maximum abstraction and encapsulation take place at the implementation of algorithms core, where each algorithm inherits directly from its
base algorithm, thus only unique features have to be implemented and all other code can be reused. An overview of the core of algorithms and logger is shown in Figure 4.
For algorithms implementation, it is critical to ensure its correctness and reliability. To achieve this goal, we examine the implementation of our algorithms carefully. To test the performance of our implementation, we run the eight algorithms on 30 tasks (for a complete list of the tasks, please refer to Appendix F.1) contained in the four environment suites and present our experimental results for the reference of the community in Appendix F.2.
4.2 EVALUATION PROTOCOL
Metrics We define the following metrics to depict the safety performance of an agent in different tasks. (1) the average return of trajectories, Jr(θ); (2) the average cumulative cost of trajectories, Jc(θ). In Safe RL domain, for any two agents, the superiority of the agents is determined by the following priority comparisons. On the one hand, the agent that satisfies the constraint will definitely outperform the unconstrained one. On the other hand, two agents that satisfy the constraint are determined by comparing the magnitude of their cumulative returns.
Algorithms For the unconstrained RL baseline we use only PPO (Schulman et al., 2017), where the reward function contains no information about the auxiliary costs. For Safe RL algorithms, we evaluate the performance of the PPO-Lag (Ray et al., 2019b), FOCOPS, P3O, and PCPO algorithms on TrustDeHands; the remaining Safe RL algorithms we implemented are available in our anonymous Github repository.
4.3 RESULTS
We mainly conduct three experiments and analyze the results in this section: 1) the performance of PPO, PPO-Lag, FOCOPS, P3O, and CPPO-PID on six representative tasks; 2) the performance of eight Safe RL algorithms on the Safe Finger task; 3) the performance of point cloud RL on the Hand Over Wall task.
For 1), we evaluate the performance of the PPO, PPO-Lag, FOCOPS, CPPO-PID, and P3O algorithms on six tasks; we implemented the rest of the Safe RL algorithms in our anonymous Github repository. The performance of each algorithm is shown in Figure 5. It can be observed that PPO-Lag achieves high performance within the range allowed by the cost and is the best performing algorithm here. Comparing PPO and PPO-Lag, we find that PPO-Lag performs similarly to PPO on the Jenga and Safe Finger tasks while keeping the cost constrained to a lower range, which indicates that the model has learned how to manipulate safely. A remarkable result is that on the Jenga and Pick Bottles tasks, the performance of PPO-Lag is far superior to PPO.
For 2), we tested all eight algorithms we implemented on the Safe Finger task, which is shown in Figure 1. The specific details can be found in Appendix C.
It can be seen that CPO barely works, while the PPO algorithm incurs a high cost. This may be due to the various approximations in CPO, which highlights the urgency of extending Safe RL to complex manipulation domains.
5 POTENTIAL RESEARCH PROBLEMS TO STUDY USING TRUSTDEHANDS
TrustDeHands provides ample opportunities to study trustworthy manipulation of dexterous hands based on Safe RL. We found that the primal-dual approach (Boyd et al., 2004) results in great volatility in the update of the Lagrange multipliers. A potential research direction is to combine feedback control methods from control systems, such as PID (Ziegler et al., 1942; Ang et al., 2005) and ADRC (Han, 2009), to mitigate the instability and volatility of the Lagrange multipliers during learning. Therefore, it would be interesting to combine control methods for complex systems with Safe RL methods to solve complex manipulation problems of dexterous hands.
Sim-to-real is an important research direction concerned with transferring simulation results to real robots. Around this theme, our benchmark includes many robot arms and dexterous hands that are widely used in research labs. It is convenient for different researchers to choose their own arms and hands for training in simulation. Meanwhile, the tasks in our benchmark, such as picking bottles and Jenga, are meaningful in the real world but also need to ensure safety when transferring the trained policy from simulation to the real world. So our benchmark can also be used to study how to perform sim-to-real transfer more safely from the perspective of Safe RL.
Training policy with a state-based observation space is difficult for sim to real transfer because such inputs are not available in the real world. So it also makes sense to study the more readily available policy inputs in the real world, such as point clouds. Our environment supports a multimodal input such as visual and forces information, which can support research in this direction. We hope that our benchmark can serve as a tool to study the sim to real transfer of dexterous hands.
Finally, generalization is an important direction to explore, which is a potential strength of RL. TrustDeHands supports self-customization, enabling switching and linking different hands and arms to evaluate the generality of different algorithms. Users can use TrustDeHands as a platform for modification or secondary development to design richer and more challenging target tasks, and we hope that this work will contribute to the flourishing of the RL community.
6 CONCLUSION AND FUTURE WORK
In this work, we presented TrustDeHands, which is the first benchmark focused on safe dexterous manipulation. We standardize the safe policy optimization methods for solving CMDPs and introduce a unified, highly-optimized, extensible, and comprehensive algorithms re-implementation. We checked the correctness of our algorithms on the existing Safe RL benchmark and tested it on TrustDeHands. The results show that the Safe RL algorithm can better solve the safety problem in dexterous manipulation. For example, using Safe RL can grab the target bottle without touching other bottles, and avoid collision with obstacles when sweeping the floor. However, it is difficult for unconstrained RL algorithms to have this guarantee. These situations are very important for robots in real-world environments because RL-based methods tend to lead to unpredictable behaviors that are prone to danger and damage to robots.
Additionally, we support two features regarding visual policy input and various arms and dexterous hands. Some sort of visual input is becoming increasingly common in real-world RL-trained robots, and so benchmarks for this setting are important. Diverse arms and hands increase the applicability of our benchmarks and allow us to study policy generalization between different robots.
We believe that TrustDeHands can significantly accelerate the progress of future research on safe manipulation, facilitate the integration of reinforcement learning with robotic control, and will make a singular contribution to the reinforcement learning community.
B TASK SPECIFICATIONS
B.1 BASIC STATE SPACE AND ACTION SPACE
The state space dimension of each environment is up to 400 dimensions in total, and the action space dimension is up to 40 dimensions. All environments are goal-based, and each epoch will randomly reset the object’s starting pose and target pose to improve generalization. We only use the shadow hand and object state information as observation at present. The observation of all tasks is composed of three parts: the state information of the left and right hands, and the information of objects and
target. The state information of the left and right hands were the same for each task, including hand joint and finger positions, velocity, and force information. The state information of the object and goal are different for each task, which we will describe in the following. Table 4 shows the specific information of the left-hand and right-hand state.
B.2 SAFE FINGER
This environment contains two dexterous hands. At the beginning of each episode, a ball falls randomly around the right hand, and the two hands have to collaborate to place the ball to a given position. Since the target is out of the reach of the right hand, and the right hand cannot pass the ball to the left hand directly, a possible solution is that the right hand grabs the ball, throws it to the left hand; the left hand catches the ball, and puts it to the target. Note that the base of the hand is fixed.
Observations The 398-dimensional observational space for the Hand Over task is shown in Table 5. Note that since the base of the dual hands is fixed in this task, the dual-hand observation is reduced by 24 dimensions compared to Table 4.
Actions The 40-dimensional action space for one hand in Safe Finger task is shown in Table 6.
Reward For timestep $t$, let $x_{b,t}$ be the position of the ball and $x_{g,t}$ be the position of the goal. We use $d_{p,t}$ to denote the positional distance between the ball and the goal, $d_{p,t} = \|x_{b,t} - x_{g,t}\|_2$. Let $d_{a,t}$ denote the angular distance between the object and the goal; the rotational difference is $d_{r,t} = 2\arcsin\min\{|d_{a,t}|, 1.0\}$. The reward is defined as
$$r_t = \exp\{-0.2(\alpha d_{p,t} + d_{r,t})\}, \quad (3)$$
where $\alpha$ is a constant that balances positional and rotational rewards.
Cost In these tasks, we constrain the freedom of joints ②, ③ and ④ of forefinger (please refer to figure 6 (b)). Without the constraint, joints ② and ③ have freedom of [0◦, 90◦] and joint ④ of [−20◦, 20◦]. The safety tasks restrict joints ②, ③, and ④ within [22.5◦, 67.5◦], [22.5◦, 67.5◦], and [−10◦, 10◦] respectively. Let ang 2, ang 3, ang 4 be the angles of joints ②, ③, ④, and the cost is defined as:
$$c_t = \mathbb{I}\left(\text{ang}_2 \notin [22.5^\circ, 67.5^\circ]\ \text{or}\ \text{ang}_3 \notin [22.5^\circ, 67.5^\circ]\ \text{or}\ \text{ang}_4 \notin [-10^\circ, 10^\circ]\right). \quad (4)$$
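A small sketch of how the Safe Finger reward (3) and cost (4) can be computed from simulator readings is given below; the balancing constant $\alpha$ and the assumption that joint angles are available in degrees are illustrative.

```python
import numpy as np

def safe_finger_reward(ball_pos, goal_pos, angular_dist, alpha: float = 10.0):
    # alpha is the balancing constant in Eq. (3); its value here is illustrative.
    d_p = np.linalg.norm(np.asarray(ball_pos) - np.asarray(goal_pos))
    d_r = 2.0 * np.arcsin(min(abs(angular_dist), 1.0))
    return float(np.exp(-0.2 * (alpha * d_p + d_r)))

def safe_finger_cost(ang2, ang3, ang4):
    # Eq. (4): indicator that any of the three forefinger joints leaves its safe range (degrees).
    violated = (not 22.5 <= ang2 <= 67.5) or (not 22.5 <= ang3 <= 67.5) or (not -10.0 <= ang4 <= 10.0)
    return float(violated)
```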
B.3 HAND UP WALL
Similarly, this environment is similar to the Hand Over Wall, the difference is that the wall in this environment only retains the lower half, so the ball needs to be thrown high to prevent it from hitting the wall, requiring different motion skill.
Observations The 398-dimensional observational space for the Hand Up Wall task is shown in Table 7. Note that since the base of the dual hands is fixed in this task, the dual-hand observation is reduced by 24 dimensions compared to Table 4.
Actions The 40-dimensional action space for one hand in Hand Up Wall task is shown in Table 8.
Reward For timestep $t$, let $x_{b,t}$ be the position of the ball and $x_{g,t}$ be the position of the goal. We use $d_{p,t}$ to denote the positional distance between the ball and the goal, $d_{p,t} = \|x_{b,t} - x_{g,t}\|_2$. Let $d_{a,t}$ denote the angular distance between the object and the goal; the rotational difference is $d_{r,t} = 2\arcsin\min\{|d_{a,t}|, 1.0\}$. The reward is defined as
$$r_t = \exp\{-0.2(\alpha d_{p,t} + d_{r,t})\}, \quad (5)$$
where $\alpha$ is a constant that balances positional and rotational rewards.
Cost The ball is thrown from the right hand to the left hand, and the curve of the ball throwing out is not consistent, and we set a wall with height in the middle of the two hands. This requires more delicate hand manipulation, and when the right hand does not throw the ball with the proper force or angle, it will be difficult to throw the ball over the wall. If the ball hits the wall, the cost is 1, otherwise it is 0. The size of the wall and the hole can be customized by the user.
B.4 HAND OVER WALL
This environment is similar to Safe Finger, except that it has a wall between each hand and a hole in the middle of the wall. We need to learn policy to keep the ball from hitting the wall during the toss.
Observations The 398-dimensional observational space for the Hand Over Wall task is shown in Table 9. Note that since the base of the dual hands is fixed in this task, the dual-hand observation is reduced by 24 dimensions compared to Table 4.
Actions The 40-dimensional action space for one hand in Hand Over Wall task is shown in Table 10.
Reward For timestep $t$, let $x_{b,t}$ be the position of the ball and $x_{g,t}$ be the position of the goal. We use $d_{p,t}$ to denote the positional distance between the ball and the goal, $d_{p,t} = \|x_{b,t} - x_{g,t}\|_2$. Let $d_{a,t}$ denote the angular distance between the object and the goal; the rotational difference is $d_{r,t} = 2\arcsin\min\{|d_{a,t}|, 1.0\}$. The reward is defined as
$$r_t = \exp\{-0.2(\alpha d_{p,t} + d_{r,t})\}, \quad (6)$$
where $\alpha$ is a constant that balances positional and rotational rewards.
Cost This constraint is more demanding than Wall Down, where we require the ball thrown to fit through a specified narrow hole. If the ball hits the wall, the cost is 1, otherwise it is 0. The size of the wall and the hole can be customized by the user.
B.5 PICK BOTTLES
This environment contains two hands, a table and five bottles. The five bottles were placed in a row on the table horizontally with very little space between them. We need to pick up two bottles with two dexterous hands, and not touch the bottle around it to cause possible damage.
Observations The 400-dimensional observational space for the Pick Bottles task is shown in Table 11. Note that since the base of the dual hands is fixed in this task, the dual-hand observation is reduced by 24 dimensions compared to Table 4.
Actions The 52-dimensional action space for one hand in Pick Bottles task is shown in Table 12.
Reward The reward consists of three parts: the distance from the left hand to the left bottle cap, the distance from the right hand to the right bottle cap, and the height of the two bottles that need to be picked. The height of the two bottles that need to be picked is given by dheight. The position difference between the left hand to the left bottle cap dleft is given by dleft = ∥xlhand − xlbcap∥2.The position difference between the right hand to the right bottle cap dright is given by dright = ∥xrhand − xrbcap∥2. The reward is given by this specific formula:
r = dheight ∗ 20− dleft − dright (7)
Cost The constraint of this environment is that we can’t touch other bottles when we pick the bottle. When our hand, or the bottle we picked, touches other bottles, the cost is set to 1, otherwise it is 0.
B.6 JENGA
Jenga is a fitness game which is very suitable for Safe RL algorithm evaluation. Players take turns removing one block at a time from a tower made up of many blocks. In this environment, we need to remove the one we want from the 16 blocks without knocking over the others.
Observations The 411-dimensional observational space for the Jenga task is shown in Table 13. Note that since the base of the dual hands is fixed in this task, the dual-hand observation is reduced by 24 dimensions compared to Table 4.
Actions The 52-dimensional action space for one hand in Jenga task is shown in Table 14.
Reward For timestep t, let xb,t as the position of the left middle finger, xg,t as the position of the left end of the object, and dp,t = ∥xb,t − xg,t∥2. Define dy,t as the y-axis direction of the position of the object center, the reward is defined as follows:
rt = 30 ∗ (dy,t + 0.6)− dp,t (8)
Cost The constraint of this environment is that we cannot touch the other blocks in the Jenga tower. The cost is 1 if the other blocks move more than 0.01 cm, and 0 otherwise.
B.7 CLEAN HOUSE
This environment is in a scene we usually clean at home. We need to control the broom with both hands to sweep the trash from the ground into the dustpan without touching other furniture (e.g. chairs).
Observations The 431-dimensional observational space for the Clean House task is shown in Table 15. Note that since the base of the dual hands is fixed in this task, the dual-hand observation is reduced by 24 dimensions compared to Table 4.
Actions The 52-dimensional action space for one hand in Clean House task is shown in Table 16.
Reward The reward consists of four parts: the distance from the left hand to the left handle position of the broom, the distance from the right hand to the right handle position of the broom, the object (trash) position to the bottom of broom position, and the distance from the object to the target (dustpan) point. The distance from the object to the target point is given by dtarget. The position difference from the left hand to the left handle position of the broom is given by dleft. The position difference from the right hand to the right handle position of the broom is given by dright. The object position to the bottom of broom position is given by dbottom. The reward is given by this specific formula:
r = 50− dtarget ∗ 10− 5 ∗ dleft − 5 ∗ dright (9)
Cost The constraint of this environment is that we can not damage other furniture when we sweep the floor. So there is a chair in the path of the trash and the dustpan. The cost is 1 when the broom touches the chair and make it move, and 0 otherwise.
C FOUR ENVIRONMENTS IN SAFE FINGER.
All environments come from Safe Finger. The difference between Safe Finger and Safe Joint is whether a single joint or the whole finger is constrained, as described below:
Safety Joint. In these tasks, we constrain the freedom of joint ④ of forefinger (please refer to Figure 6 (a) and (f)). Without the constraint, joint ④ has freedom of [−20◦, 20◦]. The safety tasks restrict joint ④ within [−10◦, 10◦]. Let ang 4 be the angle of joint ④, and the cost is defined as:
$$c_t = \mathbb{I}\left(\text{ang}_4 \notin [-10^\circ, 10^\circ]\right). \quad (10)$$
Safety Finger. In these tasks, we constrain the freedom of joints ②, ③ and ④ of the forefinger (please refer to Figure 6 (b) and (f)). Without the constraint, joints ② and ③ have freedom of [0°, 90°] and joint ④ of [−20°, 20°]. The safety tasks restrict joints ②, ③, and ④ within [22.5°, 67.5°], [22.5°, 67.5°], and [−10°, 10°] respectively. Let ang 2, ang 3, ang 4 be the angles of joints ②, ③, ④, and the cost is defined as:
$$c_t = \mathbb{I}\left(\text{ang}_2 \notin [22.5^\circ, 67.5^\circ]\ \text{or}\ \text{ang}_3 \notin [22.5^\circ, 67.5^\circ]\ \text{or}\ \text{ang}_4 \notin [-10^\circ, 10^\circ]\right). \quad (11)$$
Hand Over refers to the Safe Finger setting in which the two Shadow Hands face each other with palms up and an object needs to be passed between them, or in which the object needs to be thrown from the vertically oriented hand to the palm-up hand. Specific information can be found in Appendix B.2.
D POINT CLOUD
We replace the object state information with point clouds in the case of 128 parallel environments. The point cloud is captured by the depth camera and downsampled to 2048 points. The features are extracted using PointNet (Qi et al., 2017) to a 128-dimensional vector and concate with other observations. It can be seen that under the same episode and the same number of environments, the performance of point cloud input is not as good as full state input, but it can also achieve some performance. But also using an RTX 3090 GPU, the point cloud RL has only 200+ fps, and the full state can reach 30000+. In fact, we can only open up to 128 environments when using point clouds. This was a problem with Isaac Gym’s poor parallel support for cameras. We further refined the method to enhance the parallelization of the point cloud extraction in order to close this gap. When compared to Isaac Gym’s original code, the speedup is 1.46 times, going from 232 fps to 339 fps.
E DETAILS OF BENCHMARK ALGORITHMS
In this section, we review the key steps of typical Safe RL algorithms implemented in this benchmark, which include CPO (Achiam et al., 2017a), PCPO (Yang et al., 2020b), FOCOPS (Zhang et al., 2020b), P3O (Zhang et al., 2022), PPO-Lag (Ray et al., 2019a), TRPO-Lag (Ray et al., 2019a), CPPO-PID (Stooke et al., 2020b), and IPO (Liu et al., 2020). We implemented all of these algorithms and check the correctness but only evaluated some of them in TrustDeHands. Firstly, we will give a brief introduction to these algorithms below and give the hyperparameters of the algorithms we used in our evaluation. Then we have verified our re-implementations in other Safe RL benchmarks.
E.1 CPO (ACHIAM ET AL., 2017A)
For a given policy πθk , CPO updates new policy πθk+1 as follows:
πθk+1 = arg max πθ∈Πθ Es∼dρ0πθk (·),a∼πθ(·|s) [ Aπθk (s, a) ] (12)
s.t. Jc(πθk) + 1
1− γ Es∼dρ0πθk (·),a∼πθ(·|s) [ Acπθk (s, a) ] ≤ b, (13)
D̄KL(πθ, πθk) = Es∼dρ0πθk (·) [KL(πθ, πθk)[s]] ≤ δ. (14)
It is impractical to solve the problem (12) directly due to the computational cost. Achiam et al. (2017a) suggest to find some convex approximations to replace the term Aπθk (s, a) and D̄KL(πθ, πθk) Eq.(12)-(14). Concretely, Achiam et al. (2017a) suggest to use first-order Taylor expansion of J(πθ) to replace the objective (12) as follows,
1
1− γ Es∼dρ0πθk (·),a∼πθk (·|s) [ πθ(a|s) πθk(a|s) Aπθk (s, a) ] = J(πθ)− J(πθk) ≈ (θ − θk)⊤∇θJ(πθ).
Similarly, Achiam et al. (2017a) use the following approximations to turn the constrained policy optimization (12)-(14) to be a convex problem,
1
1− γ Es∼dρ0πθk (·),a∼πθk (·|s) [ πθ(a|s) πθk(a|s) Acπθk (s, a) ] ≈ (θ − θk)⊤∇θJc(πθ), (15)
D̄KL(πθ, πθk) ≈ (θ − θk)⊤H(θ − θk), (16) where H is Hessian matrix of D̄KL(πθ, πθk), i.e.,
H[i, j] =: ∂2
∂θi∂θj Es∼dρ0πθk (·) [KL(πθ, πθk)[s]] ,
Eq.(16) is the second-oder approximation of (14).
Let λ⋆, ν⋆ is the dual solution of the following problem
λ⋆, ν⋆ = arg max λ≥0,ν≥0 { −1 2λ ( g⊤H−1g − 2νr + sv2 ) + νc− λδ 2 } ;
where g = ∇θEs∼dρ0πθk (·),a∼πθ(·|s) [ Aπθk (s, a) ] , a = ∇θEs∼dρ0πθk (·),a∼πθ(·|s) [ Acπθk (s, a) ] , r = g⊤Ha, s = a⊤H−1a, and c = Jc(πθk)− b. Finally, CPO updates parameters according to conjugate gradient as follows: if approximation to CPO is feasible, then
θk+1 = θk + 1
λ⋆ H−1(g − ν⋆a),
else,
θk+1 = θk − √
2δ
a⊤H−1a H−1a.
E.2 PCPO (YANG ET AL., 2020B)
Projection-Based Constrained Policy Optimization (PCPO) is an iterative method for optimizing policies in a two-step process: the first step performs a local reward improvement update, while the second step reconciles any constraint violation by projecting the policy back onto the constraint set.
Reward Improvement.
πθ k+1
2 = arg max πθ∈Πθ Es∼dρ0πθk (·),a∼πθ(·|s) [ Aπθk (s, a) ] ,
s.t.D̄KL(πθ, πθk) = Es∼dρ0πθk (·) [KL(πθ, πθk)[s]] ≤ δ;
Projection.
πθk+1 = arg min πθ∈Πθ
D ( πθ, πθ
k+1 2
) ,
s.t. Jc(πθk) + 1
1− γ Es∼dρ0πθk (·),a∼πθ(·|s) [ Acπθk (s, a) ] ≤ b.
Then, Yang et al. (2020b) follows CPO (Achiam et al., 2017a) uses convex approximation to original problem, and calculate the update rule as follows,
θk+1 = θk −
√ 2δ
g⊤H−1g H−1g −max 0, √
2δ
g⊤H−1g a⊤H−1g + c
a⊤L−1a L−1a, where L = I if D is ℓ2-norm, and L = H if D is KL-divergence.
E.3 FOCOPS (ZHANG ET AL., 2020B)
Zhang et al. (2020b) propose the First Order Constrained Optimization in Policy Space (FOCOPS) that is a two-step approach. We present it as follows.
Step1: Finding the optimal update policy.
Firstly, for a given policy πθk, FOCOPS finds an optimal update policy π⋆ by solving the optimization problem (12)-(14) in the non-parameterized policy space.
π⋆ = argmax π∈Π Es∼dρ0πθk (·),a∼π(·|s) [ Aπθk (s, a) ] (17)
s.t. Jc(πθk) + 1
1− γ Es∼dρ0πθk (·),a∼π(·|s) [ Acπθk (s, a) ] ≤ b, (18)
D̄KL(πθ, πθk) = Es∼dρ0πθk (·) [KL(π, πθk)[s]] ≤ δ. (19)
If πθk is feasible, then the optimal policy for (17)-(19) takes the following form:
π⋆(a|s) = πθk(a|s) Zλ,ν(s) exp
( 1
λ
( Aπθk (s, a)− νA c πθk (s, a) )) , (20)
where Zλ,ν(s) is the partition function which ensures (20) is a valid probability distribution, λ and ν are solutions to the optimization problem:
min λ,ν≥0 λν + νb̃+ λEs∼dρ0πθk (·),a∼π⋆(·|s) [Zλ,ν(s)] ,
the term b̃ = (1− γ)(b− Jc(πθk)). Step 2: Projection.
Then, FOCOPS projects the policy found in the previous step back into the parameterized policy space Πθ by solving for the closest policy πθ ∈ Πθ to π⋆ in order to obtain πθk+1 :
πθk+1 = arg min πθ∈Πθ Es∼dρ0πθk (·) [KL(πθ, π ⋆)[s]].
We usually apply stochastic gradient decent to obtain the solution of above θk+1.
E.4 PPO-LAG
The Lagrangian approach is a standard way to solve CMDP (1), which is also known as primal-dual policy optimization:
(π⋆, λ⋆) = argmin λ≥0 max π∈Πθ
{J(π)− λ(Jc(π)− b)} . (21)
TRPO-Lag and PPO-Lag combine the Lagrangian approach with TRPO and PPO. Concretely, PPO using the following clip term to replace J(π) in (21),
Lrclip(πθ) = Es∼dρ0πk (·),a∼πk(·|s) [ −min { πθ(a|s) πk(a|s) Aπk(s, a), clip ( πθ(a|s) πθk(a|s) , 1− ϵ, 1 + ϵ ) Aπk(s, a) }] , where πk is short for πθk . With Aπk(s, a) replacing A c πk (s, a) respectively, and obtain Lcclip as follows,
Lcclip(πθ) = Es∼dρ0πk (·),a∼πk(·|s) [ −min { πθ(a|s) πk(a|s) Aπk(s, a), clip ( πθ(a|s) πθk(a|s) , 1− ϵ, 1 + ϵ ) Acπk(s, a) }] . Then, PPO-Lag updates the policy as follows, (πk+1, λk+1) = argmin
λ≥0 max πθ∈Πθ
{ Lrclip(πθ)− λ ( Lcclip(πθ)− b )} . (22)
All of the above terms can be estimated according to the policy πk. Then PPO-Lag updates the policy according to first-order optimizer as follows
θk+1 = θk + η ∂
∂θ
( Lrclip(πθ)− λ ( Lcclip(πθ)− b ) )∣∣∣ θ=θk,λ=λk , (23)
λk+1 = λk + η ( Lcclip(πθ)− b ) + ∣∣ θ=θk
, (24) where η > 0 is step-size.
E.5 TRPO-LAG
TRPO-Lag shares a similar idea but is adapted to TRPO: it replaces $J(\pi_\theta)$ by the first-order surrogate
$$J(\pi_\theta) \approx J(\pi_{\theta_k}) + (\theta - \theta_k)^\top \nabla_\theta J(\pi_\theta) =: \mathcal{L}^r(\pi_\theta). \quad (25)$$
Similarly,
$$J_c(\pi_\theta) \approx J_c(\pi_{\theta_k}) + (\theta - \theta_k)^\top \nabla_\theta J_c(\pi_\theta) =: \mathcal{L}^c(\pi_\theta), \quad (26)$$
and
$$\bar{D}_{\mathrm{KL}}(\pi_\theta, \pi_{\theta_k}) \approx (\theta - \theta_k)^\top H (\theta - \theta_k), \quad (27)$$
where $H$ is the Hessian matrix of $\bar{D}_{\mathrm{KL}}(\pi_\theta, \pi_{\theta_k})$, i.e.,
$$H[i,j] := \frac{\partial^2}{\partial\theta_i \partial\theta_j}\,\mathbb{E}_{s \sim d^{\rho_0}_{\pi_{\theta_k}}(\cdot)}\left[\mathrm{KL}(\pi_\theta, \pi_{\theta_k})[s]\right].$$
Then, TRPO-Lag updates the policy as follows,
$$(\pi_{k+1}, \lambda_{k+1}) = \arg\min_{\lambda \ge 0}\max_{\pi_\theta \in \Pi_\theta}\left\{\mathcal{L}^r(\pi_\theta) - \lambda\left(\mathcal{L}^c(\pi_\theta) - b\right)\right\}\Big|_{\theta=\theta_k,\, \lambda=\lambda_k}, \quad (28)$$
where the policy parameter $\theta$ satisfies the trust-region condition $(\theta - \theta_k)^\top H (\theta - \theta_k) \le \delta$.
E.6 P3O (ZHANG ET AL., 2022)
P3O solves the cumbersome constrained policy iteration via a single minimization of an equivalent unconstrained problem as follows,
$$\pi_{k+1} = \arg\min_{\pi \in \Pi_\theta}\left\{\mathbb{E}_{s \sim d^{\rho_0}_{\pi_k}(\cdot),\, a \sim \pi_k(\cdot|s)}\left[\frac{\pi(a|s)}{\pi_k(a|s)} A_{\pi_k}(s,a)\right] + \kappa B(\pi, b)\right\}, \quad (29)$$
where $\kappa$ is a positive scalar, and the penalty term $B(\pi, b)$ is defined as follows,
$$B(\pi, b) = \max\left\{0,\; \mathbb{E}_{s \sim d^{\rho_0}_{\pi_k}(\cdot),\, a \sim \pi_k(\cdot|s)}\left[\frac{\pi(a|s)}{\pi_k(a|s)} A^c_{\pi_k}(s,a)\right] + (1-\gamma)\left(J_c(\pi_k) - b\right)\right\}. \quad (30)$$
P3O utilizes a simple yet effective penalty approach to eliminate the cost constraint and removes the trust-region constraint via the clipped surrogate objective.
For the practical implementation, P3O considers the following optimization objective:
$$\mathcal{L}^{\mathrm{P3O}}(\theta) = \mathcal{L}^{\mathrm{P3O}}_r(\theta) + \kappa \max\left\{0, \mathcal{L}^{\mathrm{P3O}}_c(\theta)\right\}, \quad (31)$$
where
$$\mathcal{L}^{\mathrm{P3O}}_r(\theta) = \mathbb{E}\left[-\min\left\{\frac{\pi_\theta(a|s)}{\pi_k(a|s)} A_{\pi_k}(s,a),\; \mathrm{clip}\left(\frac{\pi_\theta(a|s)}{\pi_k(a|s)}, 1-\epsilon, 1+\epsilon\right) A_{\pi_k}(s,a)\right\}\right],$$
$$\mathcal{L}^{\mathrm{P3O}}_c(\theta) = \mathbb{E}\left[\max\left\{\frac{\pi_\theta(a|s)}{\pi_k(a|s)} A^c_{\pi_k}(s,a),\; \mathrm{clip}\left(\frac{\pi_\theta(a|s)}{\pi_k(a|s)}, 1-\epsilon, 1+\epsilon\right) A^c_{\pi_k}(s,a)\right\} + (1-\gamma)\left(J_c(\pi_k) - b\right)\right],$$
and the notation $\mathbb{E}[\cdot]$ is short for $\mathbb{E}_{s \sim d^{\rho_0}_{\pi_k}(\cdot),\, a \sim \pi_k(\cdot|s)}[\cdot]$. All of the terms in (31) can be estimated from the samples collected by $\pi_k$.
In each round, P3O chooses the penalty parameter adaptively according to the rule
$$\kappa \leftarrow \min\left\{\rho\kappa, \kappa_{\max}\right\}, \quad (32)$$
where $\rho > 1$ and $\kappa_{\max}$ is a positive scalar.
Finally, P3O updates the policy parameter as follows,
$$\theta \leftarrow \theta - \eta \frac{\partial}{\partial\theta} \mathcal{L}^{\mathrm{P3O}}(\theta). \quad (33)$$
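A minimal sketch of the practical objective (31) is given below; it assumes the usual minibatch quantities (log-probabilities, advantages, and an estimate of $J_c(\pi_k)$) are available, and the argument names are ours, not those of the P3O authors' code.

```python
import torch

def p3o_loss(logp, logp_old, adv_r, adv_c, jc_old, cost_limit,
             kappa, gamma, eps=0.2):
    """Single unconstrained objective of Eq. (31): clipped reward loss plus a
    ReLU penalty on the clipped cost surrogate."""
    ratio = torch.exp(logp - logp_old)
    clipped = torch.clamp(ratio, 1.0 - eps, 1.0 + eps)
    loss_r = -torch.min(ratio * adv_r, clipped * adv_r).mean()
    surr_c = torch.max(ratio * adv_c, clipped * adv_c).mean() \
             + (1.0 - gamma) * (jc_old - cost_limit)
    return loss_r + kappa * torch.relu(surr_c)
```

Between rounds, `kappa` would be grown according to Eq. (32), e.g. `kappa = min(rho * kappa, kappa_max)`.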
E.7 IPO (LIU ET AL., 2020)
IPO augments the objective with logarithmic barrier functions (Nemirovski, 2004) to learn a safe policy. Concretely, IPO updates the policy as follows,
$$\pi_{k+1} = \arg\max_{\pi \in \Pi_\theta}\left\{\mathcal{L}^r_{\mathrm{clip}}(\pi) + \phi(\pi)\right\}, \quad (34)$$
where $\mathcal{L}^r_{\mathrm{clip}}(\pi)$ is the clipped objective and $\phi(\pi)$ is the logarithmic barrier function associated with the CMDP constraint,
$$\phi(\pi) = \frac{1}{m}\log\left(b - J_c(\pi)\right), \quad (35)$$
where m > 0 is a hyper-parameter that needs to be tuned.
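The sketch below shows how the barrier of Eq. (35) can be attached to a clipped surrogate in practice; the fallback value used when the constraint estimate is already violated is our own assumption (the barrier is undefined there), as are the function and argument names.

```python
import math
import torch

def ipo_objective(clip_surrogate, jc_estimate, cost_limit, m=20.0):
    """Clipped reward surrogate augmented with the log barrier of Eq. (35)."""
    slack = float(cost_limit - jc_estimate)
    if slack <= 0.0:
        # Outside the feasible region the barrier is undefined; a large
        # negative constant is a common practical fallback (an assumption).
        barrier = -1e4
    else:
        barrier = math.log(slack) / m
    return clip_surrogate + barrier
```

Larger values of `m` flatten the barrier away from the constraint boundary, which is why it is treated as a tunable hyper-parameter.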
E.8 CPPO-PID (STOOKE ET AL., 2020B)
CPPO-PID (Stooke et al., 2020b) also considers the primal-dual policy optimization method to solve the CMDP problem,
$$(\pi^\star, \lambda^\star) = \arg\min_{\lambda \ge 0}\max_{\pi \in \Pi_\theta}\left\{J(\pi) - \lambda\left(J_c(\pi) - b\right)\right\}. \quad (36)$$
The main difference between CPPO-PID and the previous Lagrangian methods is that, instead of the multiplier update rule (24), CPPO-PID uses a PID control technique (Johnson & Moradi, 2005) to update the Lagrange multiplier.
First, CPPO-PID updates the parameter $\theta$ as follows,
$$\theta_{k+1} = \theta_k + \eta \frac{\partial}{\partial\theta}\left(\mathcal{L}^r_{\mathrm{clip}}(\pi_\theta) - \lambda\left(\mathcal{L}^c_{\mathrm{clip}}(\pi_\theta) - b\right)\right)\Big|_{\theta=\theta_k,\, \lambda=\lambda_k}, \quad (37)$$
where $\eta > 0$ is the step size.
Then, CPPO-PID updates the parameter $\lambda$ according to the PID method (Algorithm 2 in Stooke et al. (2020b)),
$$\lambda_{k+1} = \mathrm{PID}(K_P, K_I, K_D, \pi_{\theta_k}, n), \quad (38)$$
where $K_P, K_I, K_D$ are the controller parameters that need to be tuned, and $n$ is the iteration number.
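To illustrate the flavor of the PID rule (38), the following is a small sketch of a PID-controlled multiplier in the spirit of Stooke et al. (2020b); the gains, the anti-windup clamp, and the default cost limit are illustrative assumptions rather than the benchmark's exact settings.

```python
class PIDLagrangian:
    """PID controller on the Lagrange multiplier; gains are illustrative."""

    def __init__(self, kp=0.1, ki=0.01, kd=0.01, cost_limit=25.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.cost_limit = cost_limit
        self.integral = 0.0
        self.prev_cost = 0.0
        self.lagrangian = 0.0

    def update(self, ep_cost):
        error = ep_cost - self.cost_limit
        self.integral = max(0.0, self.integral + error)    # anti-windup clamp
        derivative = max(0.0, ep_cost - self.prev_cost)    # penalize rising cost only
        self.prev_cost = ep_cost
        self.lagrangian = max(0.0, self.kp * error
                              + self.ki * self.integral
                              + self.kd * derivative)
        return self.lagrangian
```

The proportional and derivative terms react quickly to constraint violations, which is what dampens the oscillations of a purely integral (gradient-ascent) multiplier.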
F CORRECTNESS VERIFICATION
F.1 TASK LIST
Below are four safety environments that provide interesting tasks for Safe RL. Among them, three are popular safety environments: MuJoCo-Velocity (Todorov et al., 2012), Safety-Gym (Ray et al., 2019a), and Bullet-Safety-Gym (Gronauer, 2022). We test the 8 algorithms on 26 tasks in these safety environments to verify our re-implementations provided in TrustDeHands. The complete list of these tasks is shown in Table 17.
F.2 TASK PERFORMANCE
We show the performance of our re-implementations on these tasks in Tables 18, 19, 20, 21, 22, and 23. | 1. What is the focus and contribution of the paper regarding safe RL algorithms?
2. What are the strengths of the proposed benchmark, particularly in terms of its novelty and practical relevance?
3. What are the weaknesses of the paper, especially regarding the evaluated methods and task selection?
4. Do you have any concerns or suggestions regarding the future directions for the proposed benchmark?
5. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper proposes a benchmark for evaluating safe RL algorithms on dexterous hand manipulation tasks. The authors designed 6 tasks. Each has specific constraints. They based the benchmark on Isaac Gym, which enables large parallel training. The environment supports multi-modal visual inputs and various robot hands and arms. The authors also implemented several previous constrained RL algorithms and compared them on the proposed benchmark, demonstrating challenges and potential research opportunities in safe manipulation.
Strengths And Weaknesses
Strength:
The paper studied safe dexterous manipulation problems, which is very novel. The emphasis on safety is of practical relevance. It brings a good platform to study problems in constrained MDP, which significantly contributes to the community.
The environment is based on Isaac Gym, which supports large parallel environments for RL training. The additional support for visual RL is also appealing.
I appreciate the authors’ efforts in implementing and evaluating previous approaches. The experiment results are informative, demonstrating the challenges in this field.
Weakness:
Currently, the evaluated methods are all on-policy RL methods. I suspected that off-policy algorithms such as SAC might be more data-efficient, especially in environments with high-dimensional action spaces. I think it is worth also evaluating off-policy methods or model-based methods in these environments.
I find the benchmark does not include classical tasks like in-hand object reorientation, which better connects to existing RL research and provides more dexterity. Currently, I see few necessities for using dexterous hands instead of robot grippers in the proposed tasks.
Although the platform supports visual inputs, the paper does not evaluate them yet. As mentioned in the appendix, the speed of Isaac Gym will be limited when it needs to render images.
I am very curious about the performance of PPO when the environment terminates early when the constraint is violated. In this case, is it possible for PPO to learn to avoid constraint violations?
Clarity, Quality, Novelty And Reproducibility
The safe dexterous hand manipulation setting is novel and the technical details are reasonable.
ICLR | Title
A Massively Parallel Benchmark for Safe Dexterous Manipulation
Abstract
Safe Reinforcement Learning (Safe RL) aims to maximize expected total rewards while avoiding violations of safety constraints. Although a plethora of safety-constrained environments have been developed to evaluate Safe RL methods, most of them focus on navigation tasks, which are rather simple and have a non-trivial gap with real-world applications. For robotics studies, dexterous manipulation is becoming ubiquitous; however, the idea of safe dexterous manipulation is rarely studied in robotics applications. In this paper, we propose TrustDeHands, a massively parallel benchmark for Safe RL studies on safe dexterous manipulation tasks. TrustDeHands is built within Isaac Gym, a GPU-level parallel simulator that enables a highly efficient RL training process. To stay close to real-world settings, TrustDeHands offers multi-modal visual inputs, including RGB, RGB-D and point cloud, and supports a variety of arms and dexterous hands from different brands. Moreover, TrustDeHands provides a solid implementation of eight popular safe policy optimization algorithms; this facilitates trustworthy validation of Safe RL methods outside navigation tasks. TrustDeHands includes a myriad of challenging tasks that require safety awareness (e.g., Jenga). Results on these tasks show that Safe RL methods can achieve better performance than classical RL algorithms, indicating the effectiveness of Safe RL in safe robot manipulation tasks. To the best of our knowledge, TrustDeHands is the first benchmark targeting safe dexterous manipulation. We expect this benchmark to consistently serve as a reliable evaluation suite for future Safe RL developments and to further promote the integration between the research lines of Safe RL and dexterous manipulation. The code and demonstration can be found at https://sites.google.com/view/trustdehands/.
1 INTRODUCTION
Reinforcement Learning (RL) is a powerful way to solve sequential decision problems and has achieved superhuman performance in games (Silver et al., 2016; 2017; Vinyals et al., 2019; OpenAI, 2018), robotics (Andrychowicz et al., 2020; Chen et al., 2022b), and finance (Hambly et al., 2021). Before RL is deployed in the real world, researchers are tasked with proving its trustworthiness to maximize the benefits of AI systems while minimizing their risks (Xu et al., 2022). The fundamental principle of RL is that an agent tries to maximize the cumulative return by trial and error, but the agent may exhibit dangerous or harmful behaviors during the learning process. Thus, it is important to consider safe exploration, which is known as Safe RL. Most existing RL simulators ignore the safe learning issue: in simulation, failure is acceptable and even desirable to learn from bad outcomes. But in the real world, such exploration can produce undesirable harm.
In recent years, robot manipulation (Billard & Kragic, 2019) has become an important direction for the application of RL, covering many lines of research (Kim et al., 2021; Okamura et al., 2000; Kumar et al., 2016). Among them, dexterous multi-fingered manipulation is the most challenging task, which puts forward higher requirements for control (Bircher et al., 2017; Rahman et al., 2016). To deal with dynamic environments and policy generalization, OpenAI et al. (2019); Chen et al. (2022a) have studied dexterous manipulation tasks and achieved significant results. Moreover, the trustworthiness of manipulation (Xu et al., 2022) is an important issue that needs to be considered. If a robot is to manipulate in the real world, ensuring safety and trustworthiness is the highest priority. However, in previous research there has been no benchmark for safe manipulation. We therefore propose TrustDeHands, which uses safe policy learning to learn dexterous manipulation, hoping to fill this research gap.
Additionally, the implementations of many existing Safe RL methods are unavailable, which leaves researchers suffering from incorrect implementations, unfair comparisons, and misleading conclusions. Safe policy learning is critical in real-world RL applications, where dangerous decisions are undesirable. For example, a robot agent should avoid taking actions that irreversibly damage its hardware (Ray et al., 2019a; Dulac-Arnold et al., 2019). Due to its importance, the community has been actively researching safe policy learning (e.g., Alshiekh et al., 2018; Stooke et al., 2020b; Gu et al., 2021; Yuan et al., 2021; Gronauer, 2022; Yang et al., 2022; Liu et al., 2022). However, most of the existing work mainly focuses on algorithm design. Among these works, either the authors did not publish the source code (e.g., P3O (Zhang et al., 2022)), or the algorithms were implemented using different frameworks (e.g., PCPO (Yang et al., 2020b) in Theano (Al-Rfou et al., 2016), CPPO-PID (Stooke et al., 2020b) in PyTorch), with divergent approaches (FOCOPS (Zhang et al., 2020b) does not parallelize sample collection while others do), and on separate tasks (FOCOPS is tested solely on MuJoCo-Velocity (Todorov et al., 2012) and CPPO-PID solely on Safety-Gym (Ray et al., 2019a)). While safety-starter-agents (Ray et al., 2019a) exists as a publicly available collection of algorithms, it was implemented using TensorFlow 1, requires outdated hardware and system setups, lacks recent updates, and is no longer maintained. As a result, the Safe RL community has experienced serious difficulty in reproducing experimental results, comparing algorithms fairly, and deriving correct insights. An open-source, standardized algorithm implementation for algorithm verification and empirical study is desperately needed.
To address the considerations mentioned above, we developed a bimanual dexterous manipulation environment, TrustDeHands, with a unified re-implementation of Safe RL algorithms. We highlight three particularly desirable features of TrustDeHands:
• For Safe RL researchers. We provide a series of complex and challenging safe dexterous manipulation tasks. The design of these tasks stems from the need for safety robot manipulation in our daily life (e.g., sweeping the floor without touching other furniture). In these environments, we have done exhaustive experiments with the implemented algorithms and contributed the results, our observations, and analysis for the reference of the community.
• For robotic researchers. We provide the first collection of tasks focused on safe dexterous manipulation. In addition to safety research, we also provide a variety of features, including 1) multi-modal information as the policy input (e.g., contact force, RGB image, RGB-D image, point cloud, ...), and 2) customizable dexterous hands and a robotic arm that drives the dexterous hand. These features provide a comprehensive platform for robotic research.
• Unified, highly-optimized, and extensible Safe RL algorithms. We re-implement widely used Safe RL algorithms, which support TrustDeHands and all popular environments within a single well-designed algorithmic framework. We have done maximum abstraction and encapsulation, deriving a similar model structure and update paradigm, thus enabling code reuse, ensuring a clean code style, and making the framework extremely extensible.
2 RELATED WORK
Safe RL Environments Simulators play a critical role in RL training since it is very expensive to collect data in the real world. Safety-Gym (Ray et al., 2019b) introduces a robot that has to navigate through a cluttered environment to achieve a task; it is a suite of complex continuous control environments for Safe RL. Safe-control-gym (Yuan et al., 2021) introduces cart-pole, 1D, and 2D quadrotor dynamic systems for control tasks like stabilization or trajectory tracking, and allows for constraint specification and disturbance injection onto a robot's inputs, states, and inertial properties through a portable configuration system. AI Safety Gridworlds (Leike et al., 2017) proposes an environment for evaluating various safety properties of intelligent agents, including safe interruptibility, avoiding side effects, safe exploration, distributional shift, etc. MuJoCo-Velocity, originally proposed in (Zhang et al., 2020a), consists of a series of safety tasks like constrained velocity based on the MuJoCo environment (Todorov et al., 2012). However, there is still no safe environment for safe robot manipulation, where the difficulty lies in requiring safe high-dimensional continuous control and dealing with dynamic environments. We therefore introduce TrustDeHands, which aims to apply Safe RL to dexterous manipulation, providing a more challenging environment for evaluating Safe RL algorithms.
Safe RL Algorithms Since we formulate safe RL under CMDPs (Altman, 1999), in this section we mainly review algorithms w.r.t. CMDPs. For more discussions about safe RL algorithms, please re-
fer to the recent surveys (Xu et al., 2022; Gu et al., 2022). With the rise of deep RL, CMDPs are also moving to higher-dimensional continuous control problems. CPO (Achiam et al., 2017b) proposes a general-purpose policy search algorithm for Safe RL with guarantees of near-constraint satisfaction at each iteration. PCPO (Yang et al., 2020a) utilizes a different two-step approach (i.e., it first finds the policy with the maximum return, then projects this policy back into the safety region in terms of the minimum KL divergence). FOCOPS (Zhang et al., 2020a) adopts a similar idea by directly solving the constrained policy optimization problem via the primal-dual approach (Boyd et al., 2004) and then projecting the solution back into the parametric policy space. Traditional robot control also considers the safety problem. Chow et al. (2018; 2019) present a method that constructs Lyapunov functions to guarantee constraint satisfaction during training. Stooke et al. (2020a) combine PID control with Lagrangian methods, which dampens cost oscillations and results in reduced constraint violations. A unified and efficient framework covering these algorithms is still lacking. Therefore, we provide PyTorch re-implementations of widely used safe policy optimization algorithms, hoping to facilitate experimental validation in Safe RL research.
Dexterous Manipulation Manipulation is one of the essential research topics in robotics, researchers have long tried to establish a stable theory of manipulation (Billard & Kragic, 2019). However, traditional methods mostly rely on various assumptions, such as knowing the environmental dynamics model or having no uncertainty in the process. In recent years, learning-based approaches have been successful in this regard, coping with uncertainty in perception and even generalizing to unseen objects (Bohg et al., 2013). There are many learning-based benchmarks for robotic manipulation in recent years (Yu et al., 2020; James et al., 2020; Zhu et al., 2020), but none of them use dexterous hands or consider safe constraints. Dexterous multi-finger hands provide intrinsic dexterity for better manipulation in unstructured scenes and contact-rich situations, but additionally bring the challenges of high-dimensional control and complex contact models (Bircher et al., 2017; Rahman et al., 2016). Previous research methods have mostly focused on trajectory optimization or model prediction, which highly relied on accurate dynamics models (Kim et al., 2021; Okamura et al., 2000; Kumar et al., 2016). For example, Williams et al. (2015) performs in-hand manipulation of a cube using a trajectory optimization technique known as Model Predictive Path Integral (MPPI). Charlesworth & Montana (2021) extended the MPPI method to allow objects to be thrown and catch between two hands. OpenAI et al. (2019) solved a Rubik’s cube using model-free RL and domain randomization techniques. Chen et al. (2022a) proposed an in-hand manipulation system to learn how to manipulate a large number of objects of different shapes, and even generalize to unseen objects. Qin et al. (2022; 2021) studied dexterous manipulation learning from human demonstration. Chen et al. (2022c) studied the bimanual dexterous manipulation to solve cooperative manipulation and skill generalization problem. While most of them focus on unconstrained dexterous manipulation, how to do dexterous manipulation safely is an unstudied topic. In this paper, we provide a massively parallel benchmark for safe dexterous manipulation, hoping to facilitate research on how to manipulate safely.
3 THE SAFETY LEARNING ENVIRONMENT
TrustDeHands consists of two parts: the safety learning environment and the safe policy optimization algorithms. In this section, we present the high-level design of the safety learning environment.
3.1 SYSTEM DESIGN AND DATASETS
TrustDeHands is a collection of challenging dexterous manipulation tasks, underpinned by Isaac Gym (Makoviychuk et al., 2021) and capable of high parallelism on the GPU. In TrustDeHands, all tasks require two dexterous hands to manipulate one or more objects. We design a series of tasks that require policies to perform safe dexterous manipulation, including throwing, grasping, jerking, pulling, etc. At the same time, each task provides the customizability of dexterous hands and objects to support a diverse task.
The construction of the dataset includes the configuration of dexterous hands and objects. The core goal of our dataset is to generate a wide variety of scenarios for learning constrained dexterous manipulation. We collected a variety of dexterous multi-finger hands as manipulators, including most of the dexterous hands currently used in robotics. In addition to manipulators, objects also play a crucial role in building datasets. Our manipulation objects are mainly from the YCB (Calli et al., 2017) and SAPIEN (Xiang et al., 2020) datasets. Both datasets contain many objects used in everyday life.
3.2 TASKS REPRESENTATION
TrustDeHands contains 10+ tasks focused on dexterous manipulation. Each task contains two dexterous hands and one or more manipulated objects, such as balls, blocks, etc., with the ultimate goal of manipulating the objects to the task-specified locations while making the agent satisfy the safety constraints. The default dexterous hand used by our framework is the Shadow Hand (ShadowRobot, 2005); more details are provided in Appendix A. The agent performs each task according to its observation, action representation, and its reward and cost function definition. We provide more underlying technical details about the tasks in Appendix B.
Constrained Markov Decision Processes (CMDPs) A Constrained Markov Decision Process (CMDP) is defined as $(S, A, \mathbb{P}, r, \rho_0, \gamma, C)$, where $S$ is the state space, $A$ is the action space, $\mathbb{P}: S \times A \times S \to [0,1]$ is the transition probability function, $r: S \times A \times S \to \mathbb{R}$ is the reward function, $\rho_0(\cdot) \in \mathcal{P}(S)$ is the initial state distribution ($\mathcal{P}(X)$ denotes the set of probability distributions over a set $X$), $\gamma \in [0,1)$ is the discount factor, and $C = \{(c, b)\}$ is the constraint set, where $c: S \times A \times S \to \mathbb{R}$ is a cost function and $b$ is the cost threshold. We use $\pi: S \to \mathcal{P}(A)$ to denote a stationary policy, and use $\Pi$ to denote the set of all stationary policies. Let $\tau = \{s_t, a_t, r_{t+1}, c_{t+1}\}_{t \ge 0} \sim \pi$ be a trajectory generated by $\pi$, where $s_0 \sim \rho_0(\cdot)$, $a_t \sim \pi(\cdot|s_t)$, $s_{t+1} \sim \mathbb{P}(\cdot|s_t, a_t)$, $r_{t+1} = r(s_{t+1}|s_t, a_t)$, and $c_{t+1} = c(s_{t+1}|s_t, a_t)$. The state value function of $\pi$ is defined as $V_\pi(s) = \mathbb{E}_\pi\left[\sum_{t=0}^{\infty}\gamma^t r_{t+1} \mid s_0 = s\right]$. The goal of reinforcement learning is to maximize the expected total reward, defined as $J(\pi) = \mathbb{E}_{s \sim \rho_0(\cdot)}\left[V_\pi(s)\right]$.
We define the cost return function as $J_c(\pi) = \mathbb{E}_{s \sim \rho_0(\cdot)}\left[\sum_{t=0}^{\infty}\gamma^t c_{t+1} \mid s_0 = s\right]$, and the feasible policy set $\Pi_C$ as $\Pi_C = \{\pi \mid \pi \in \Pi,\; J_c(\pi) \le b,\; \forall (c, b) \in C\}$. The goal of Safe RL is to learn the optimal policy $\pi^\star$ such that
$$\pi^\star = \arg\max_{\pi \in \Pi_C} J(\pi). \quad (1)$$
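In practice, both $J(\pi)$ and $J_c(\pi)$ are estimated by Monte Carlo over sampled trajectories. The following small sketch illustrates this; the trajectory data layout (a dict with per-step reward and cost lists) is an assumption for illustration only.

```python
import numpy as np

def discounted_sum(xs, gamma):
    """Discounted sum over a single trajectory: sum_t gamma^t * x_t."""
    return float(sum(x * gamma**t for t, x in enumerate(xs)))

def estimate_objectives(trajectories, gamma=0.99):
    """Monte Carlo estimates of J(pi) and J_c(pi) from sampled trajectories.
    Each trajectory is a dict with per-step 'rewards' and 'costs' lists."""
    returns = [discounted_sum(tau["rewards"], gamma) for tau in trajectories]
    cost_returns = [discounted_sum(tau["costs"], gamma) for tau in trajectories]
    return np.mean(returns), np.mean(cost_returns)
```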
Observation Here we briefly describe the observation space of the tasks, more details can be seen in Appendix B.1. The observation of all tasks consisted of three parts: state information of the left and right Shadow Hand, and information about the task specification. In each task, the state information of the left and right Shadow Hand is the same, each Shadow Hand contains 24 minimum drive units (which contains four underdriven fingertip units) and its state consists of the following information:
• Dp,Dv,Df ∈ R24, corresponds to all joint DoF (Degree of Freedom) of angle, velocity, and force with drive units, respectively.
• Pw,Rw ∈ R3 represents the position and rotation of the base of the hand. • FT i = [FTpose, FTvl , FTva , FTf , FTt] ∈ R19, corresponds to the pose, linear velocity, angular
velocity, force magnitude, and torque of each fingertip, respectively.
• A ∈ R20/26, indicates the action executed by the hand in the previous step, which is consistent with the action space.
With above definitions, the state information of one Shadow Hand can be represented as Hand = {Dp,Dv,D,Pw,Rw, {FT i}5i=1,A}. We character the observation of each task by the following information:
{Handleft, Handright, Gtask}, (2) where Gtask represents some observation information specific to different tasks.
Action The dual Shadow Hands have more than 40 dimensions of action space, where each Shadow Hand has five fingers with a total of 24 degrees of freedom, the thumb has 5 joints and 5 degrees of freedom, and all other fingers have 3 degrees of freedom and 4 joints (where the joint at the end of each finger is uncontrollable). Therefore, the action space of each hand is 20 dimensions. If the base of the hand is not fixed, there are six dimensions to represent the translation and rotation of the hand base. For the lower and upper limit of the joint angle, see Table 2. In each step, we use the absolute value of each joint angle as the target, and use the PD controller to make it move.
Reward We designed some auxiliary rewards to help RL agents learn more consistently, and each task contains a task-specific bonus. In general, our reward design is goal-based and follows the same set of logic. For object-catching tasks, our reward is simply related to the difference between
the pose of the object and the target. For other tasks that require the hand to hold the object, our reward generally consists of three parts: the distance from the left hand to a left-hand grip point on the object, the distance from the right hand to a right-hand grip point on the object, and the distance from the object to the object’s target.
Cost Each task contains different constraints (e.g., the ball needs to be thrown to a specified height or a specified angle to prevent damage to other items; the robots need to clean the floor without hitting other furniture). The specific constraint design depends on the safety requirements of each task.
3.3 EVALUATION SUITE
TrustDeHands has a collection of 10+ different tasks. These tasks form an evaluation suite for benchmarking the performance of Safe RL algorithms. In this section, we describe six of these representative tasks (see Figure 1), and the remaining tasks are described in detail in Appendix B.
Safe Finger In this task, one hand needs to throw the ball to the other hand, and both hands are kept on the same horizontal plane. We design two different constraints, including constraints on the minimum drive units and constraints on the finger joints.
Hand Up Wall In this task, one hand needs to throw the ball at a certain height to the other hand. The constraint of this task is concerned with the magnitude of the force of the minimum drive unit.
Hand Over Wall In this task, one hand needs to throw the ball at a certain angle to the other hand. It is more concerned with the constraints in a certain behavioral paradigm and needs to take into account the collaboration of the whole hand-driven unit.
Pick Bottles There are five bottles in a tight row, and the dual hands need to pick up two of the bottles smoothly without touching the others.
Jenga The dual hands need to collaborate to extract the specified blocks from an unstable stacking structure and avoid breaking the rest of the blocks apart.
Clean House There is a broom, trash, and dustpan in this environment. We need to use both hands to manipulate the broom to sweep the trash into the dustpan. There will also be a chair as an obstacle on the way.
3.4 VISUAL INFORMATION
It is very difficult to obtain the state information of the robot in the real world. One way to solve this problem is to use vision sensors as the input to train the policy. Therefore, we provide multiple modalities of visual information as input, including RGB, RGB-D, and point cloud; see Figure 2. They are generated using the camera in Isaac Gym, and the pose and orientation of the camera can be customized by the user to obtain the desired visual information. We also propose a point cloud parallel acceleration function adapted to Isaac Gym and provide an example of using it to train the Hand Over task; see the Appendix.
3.5 CUSTOMIZABLE DIVERSIFORM MANIPULATORS AND ADAPTATION CHALLENGE
There are more types of dexterous hands than the Shadow Hand, such as the Allegro Hand and TriFinger, and supporting other dexterous hands helps to advance research and community development. Therefore, in addition to the Shadow Hand, we also provide five other kinds of dexterous multi-finger hands in TrustDeHands. In addition, using a robotic arm to drive the base of the dexterous hand not only matches the real-world setting but is also an important step for sim-to-real transfer. Because it is very difficult to match the real dynamics of a free-floating hand, TrustDeHands provides a way to reduce the reality gap by adjusting the dynamics and physics parameters of the arm, which simplifies the deployment process from simulation to real-world applications.
Moreover, we offer a variety of arms and a variety of dexterous hand combinations, which has many benefits. For example, researchers can choose the hand they want according to their own conditions, which brings wider applicability to our benchmark. At the same time, we can use different arms and different hands to study the adaptability and generalization ability of policies, posing challenges for future multi-task learning and meta-learning research. A schematic of this feature is shown in Figure 3.
4 EXPERIMENTS
4.1 SAFE RL ALGORITHMS IMPLEMENTATION
Based on their original papers or public code base, we reimplement eight algorithms (CPO (Achiam et al., 2017a), PCPO (Yang et al., 2020b), FOCOPS (Zhang et al., 2020b), P3O (Zhang et al., 2022), PPO-Lag (Ray et al., 2019a), TRPO-Lag (Ray et al., 2019a), CPPO-PID (Stooke et al., 2020b), and IPO (Liu et al., 2020)), covering major safe policy optimization algorithms. A brief introduction to each algorithm is given in Appendix E.
We abstract similar structure of the safe policy optimization algorithms, and modularize the code into interaction with environments, parallel sample collection, buffer storage and computation, algorithm core update, and auxiliary functionalities such as visualization and logger. Maximum abstraction and encapsulation take place at the implementation of algorithms core, where each algorithm inherits directly from its
base algorithm, thus only unique features have to be implemented and all other code can be reused. An overview of the core of algorithms and logger is shown in Figure 4.
For algorithms implementation, it is critical to ensure its correctness and reliability. To achieve this goal, we examine the implementation of our algorithms carefully. To test the performance of our implementation, we run the eight algorithms on 30 tasks (for a complete list of the tasks, please refer to Appendix F.1) contained in the four environment suites and present our experimental results for the reference of the community in Appendix F.2.
4.2 EVALUATION PROTOCOL
Metrics We define the following metrics to depict the safety performance of an agent in different tasks: (1) the average return of trajectories, Jr(θ); (2) the average cumulative cost of trajectories, Jc(θ). In the Safe RL domain, for any two agents, superiority is determined by the following priority: an agent that satisfies the constraint always outperforms one that does not; between two agents that both satisfy the constraint, the one with the larger cumulative return is better.
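The comparison rule above can be stated compactly as a small helper; the function below is only an illustration of the stated priority, with names of our own choosing.

```python
def compare_agents(jr_a, jc_a, jr_b, jc_b, cost_limit):
    """Return 'A', 'B', or 'tie' following the evaluation priority:
    constraint satisfaction first, then the larger average return."""
    feas_a, feas_b = jc_a <= cost_limit, jc_b <= cost_limit
    if feas_a != feas_b:
        return "A" if feas_a else "B"
    if jr_a == jr_b:
        return "tie"
    return "A" if jr_a > jr_b else "B"
```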
Algorithms For the unconstrained RL baseline we use only PPO (Schulman et al., 2017), where the reward function contains no information about the auxiliary costs. For Safe RL algorithms, we evaluate the performance of PPO-Lag (Ray et al., 2019b), FOCOPS, P3O, and PCPO on TrustDeHands; the remaining Safe RL algorithms we implemented are available in our anonymous GitHub repository.
4.3 RESULTS
We mainly conduct three experiments and analyze the results in this section: 1) the performance of the PPO, PPO-Lag, FOCOPS, P3O, and CPPO-PID algorithms on six representative tasks; 2) the performance of eight Safe RL algorithms on the Safe Finger task; 3) the performance of point cloud RL on the Hand Over Wall task.
For 1), we evaluate the performance of the PPO, PPO-Lag, FOCOPS, CPPO-PID, and P3O algorithms on six tasks; the rest of the Safe RL algorithms we implemented are available in our anonymous GitHub repository. The performance of each algorithm is shown in Figure 5. It can be observed that PPO-Lag achieves high performance within the range allowed by the cost, and is the best performing algorithm here. Comparing the performance of PPO and PPO-Lag, it can be found that PPO-Lag performs similarly to PPO on the Jenga and Safe Finger tasks, but its cost is constrained to a lower range, which indicates that the model has learned how to manipulate safely. A remarkable result is that on the Jenga and Pick Bottles tasks, the performance of PPO-Lag is far superior to that of PPO.
For 2), we tested all eight algorithms we implemented on the Safe Finger task, which is shown in Figure 1. The specific details can be found in Appendix C.
It can be seen that CPO barely works, and the PPO algorithm incurs a high cost. This may be due to the various approximation methods in CPO, which highlights the urgency of extending Safe RL towards complex manipulation domains.
5 POTENTIAL RESEARCH PROBLEMS TO STUDY USING TRUSTDEHANDS
TrustDeHands provides ample opportunities to study trustworthy manipulation of dexterous hands based on Safe RL. We found that the primal-dual based approach (Boyd et al., 2004) results in great volatility in the update of Lagrange multipliers. A potential research direction is to consider combining feedback control methods from control systems, such as PID (Ziegler et al., 1942; Ang et al., 2005) and ADRC (Han, 2009), to mitigate the instability and volatility of Lagrange multipliers during the learning process. Therefore, it would be interesting to combine control methods for complex systems with Safe RL methods to solve complex manipulation problems of dexterous hands.
Sim-to-real is an important research direction concerning the transfer of simulation results to real robots. Around this theme, our benchmark includes many robot arms and dexterous hands that are widely used in research labs. It is convenient for different researchers to choose their own arms and hands for training in simulation. Meanwhile, the tasks in our benchmark, such as picking bottles, Jenga, etc., are meaningful in the real world but also need to ensure safety when transferring the trained policy from simulation to the real world. Our benchmark can therefore also be used to study how to perform sim-to-real transfer more safely from the perspective of Safe RL.
Training policy with a state-based observation space is difficult for sim to real transfer because such inputs are not available in the real world. So it also makes sense to study the more readily available policy inputs in the real world, such as point clouds. Our environment supports a multimodal input such as visual and forces information, which can support research in this direction. We hope that our benchmark can serve as a tool to study the sim to real transfer of dexterous hands.
Finally, generalization is an important direction to explore, which is a potential strength of RL. TrustDeHands supports self-customization, enabling switching and linking different hands and arms to evaluate the generality of different algorithms. Users can use TrustDeHands as a platform for modification or secondary development to design richer and more challenging target tasks, and we hope that this work will contribute to the flourishing of the RL community.
6 CONCLUSION AND FUTURE WORK
In this work, we presented TrustDeHands, which is the first benchmark focused on safe dexterous manipulation. We standardize safe policy optimization methods for solving CMDPs and introduce a unified, highly-optimized, extensible, and comprehensive re-implementation of the algorithms. We checked the correctness of our algorithms on existing Safe RL benchmarks and tested them on TrustDeHands. The results show that Safe RL algorithms can better solve the safety problem in dexterous manipulation. For example, using Safe RL, the agent can grasp the target bottle without touching the other bottles, and can avoid collisions with obstacles when sweeping the floor. However, it is difficult for unconstrained RL algorithms to provide this guarantee. These situations are very important for robots in real-world environments because RL-based methods tend to lead to unpredictable behaviors that are prone to danger and damage to robots.
Additionally, we support two features regarding visual policy input and various arms and dexterous hands. Some sort of visual input is becoming increasingly common in real-world RL-trained robots, and so benchmarks for this setting are important. Diverse arms and hands increase the applicability of our benchmarks and allow us to study policy generalization between different robots.
We believe that TrustDeHands can significantly accelerate the progress of future research on safe manipulation, facilitate the integration of reinforcement learning with robotic control, and will make a singular contribution to the reinforcement learning community.
B TASK SPECIFICATIONS
B.1 BASIC STATE SPACE AND ACTION SPACE
The state space dimension of each environment is up to 400 dimensions in total, and the action space dimension is up to 40 dimensions. All environments are goal-based, and each epoch will randomly reset the object’s starting pose and target pose to improve generalization. We only use the shadow hand and object state information as observation at present. The observation of all tasks is composed of three parts: the state information of the left and right hands, and the information of objects and
target. The state information of the left and right hands were the same for each task, including hand joint and finger positions, velocity, and force information. The state information of the object and goal are different for each task, which we will describe in the following. Table 4 shows the specific information of the left-hand and right-hand state.
B.2 SAFE FINGER
This environment contains two dexterous hands. At the beginning of each episode, a ball falls randomly around the right hand, and the two hands have to collaborate to place the ball to a given position. Since the target is out of the reach of the right hand, and the right hand cannot pass the ball to the left hand directly, a possible solution is that the right hand grabs the ball, throws it to the left hand; the left hand catches the ball, and puts it to the target. Note that the base of the hand is fixed.
Observations The 398-dimensional observation space for the Hand Over task is shown in Table 5. It should be noted that since the base of the dual hands in this task is fixed, the observation of the dual hands is reduced by 24 dimensions compared to Table 4.
Actions The 40-dimensional action space for one hand in Safe Finger task is shown in Table 6.
Reward For timestep t, let xb,t be the position of the ball and xg,t be the position of the goal. We use dp,t to denote the positional distance between the ball and the goal, dp,t = ∥xb,t − xg,t∥2. Let da,t denote the angular distance between the object and the goal, and the rotational difference is dr,t = 2 arcsin min{|da,t|, 1.0}. The reward is defined as follows, rt = exp{−0.2(αdp,t + dr,t)}, (3) where α is a constant that balances the positional and rotational rewards.
Cost In these tasks, we constrain the freedom of joints ②, ③ and ④ of forefinger (please refer to figure 6 (b)). Without the constraint, joints ② and ③ have freedom of [0◦, 90◦] and joint ④ of [−20◦, 20◦]. The safety tasks restrict joints ②, ③, and ④ within [22.5◦, 67.5◦], [22.5◦, 67.5◦], and [−10◦, 10◦] respectively. Let ang 2, ang 3, ang 4 be the angles of joints ②, ③, ④, and the cost is defined as:
ct = I(ang 2 ̸∈ [22.5◦, 67.5◦], or ang 3 ̸∈ [22.5◦, 67.5◦], or ang 4 ̸∈ [−10◦, 10◦]). (4)
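For illustration, the reward (3) and indicator cost (4) can be computed as in the sketch below. The positions are assumed to be NumPy arrays, and the value of the balancing constant $\alpha$ is unspecified in the text, so the default here is our own assumption.

```python
import numpy as np

def safe_finger_reward(ball_pos, goal_pos, angular_dist, alpha=10.0):
    """Reward of Eq. (3): exp{-0.2 * (alpha * d_p + d_r)}."""
    d_p = np.linalg.norm(ball_pos - goal_pos)
    d_r = 2.0 * np.arcsin(min(abs(angular_dist), 1.0))
    return np.exp(-0.2 * (alpha * d_p + d_r))

def safe_finger_cost(ang2, ang3, ang4):
    """Indicator cost of Eq. (4): 1 if any constrained joint leaves its range."""
    in_range = (22.5 <= ang2 <= 67.5) and (22.5 <= ang3 <= 67.5) \
               and (-10.0 <= ang4 <= 10.0)
    return 0.0 if in_range else 1.0
```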
B.3 HAND UP WALL
Similarly, this environment is similar to the Hand Over Wall, the difference is that the wall in this environment only retains the lower half, so the ball needs to be thrown high to prevent it from hitting the wall, requiring different motion skill.
Observations The 398-dimensional observation space for the Hand Up Wall task is shown in Table 7. It should be noted that since the base of the dual hands in this task is fixed, the observation of the dual hands is reduced by 24 dimensions compared to Table 4.
Actions The 40-dimensional action space for one hand in Hand Up Wall task is shown in Table 8.
Reward For timestep t, let xb,t be the position of the ball and xg,t be the position of the goal. We use dp,t to denote the positional distance between the ball and the goal, dp,t = ∥xb,t − xg,t∥2. Let da,t denote the angular distance between the object and the goal, and the rotational difference is dr,t = 2 arcsin min{|da,t|, 1.0}. The reward is defined as follows, rt = exp{−0.2(αdp,t + dr,t)}, (5) where α is a constant that balances the positional and rotational rewards.
Cost The ball is thrown from the right hand to the left hand, and the trajectory of the thrown ball is not always consistent; we place a wall of a certain height between the two hands. This requires more delicate hand manipulation: when the right hand does not throw the ball with the proper force or angle, it will be difficult to throw the ball over the wall. If the ball hits the wall, the cost is 1; otherwise it is 0. The size of the wall and the hole can be customized by the user.
B.4 HAND OVER WALL
This environment is similar to Safe Finger, except that it has a wall between each hand and a hole in the middle of the wall. We need to learn policy to keep the ball from hitting the wall during the toss.
Observations The 398-dimensional observation space for the Hand Over Wall task is shown in Table 9. It should be noted that since the base of the dual hands in this task is fixed, the observation of the dual hands is reduced by 24 dimensions compared to Table 4.
Actions The 40-dimensional action space for one hand in Hand Over Wall task is shown in Table 10.
Reward For timestep t, let xb,t be the position of the ball and xg,t be the position of the goal. We use dp,t to denote the positional distance between the ball and the goal, dp,t = ∥xb,t − xg,t∥2. Let da,t denote the angular distance between the object and the goal, and the rotational difference is dr,t = 2 arcsin min{|da,t|, 1.0}. The reward is defined as follows, rt = exp{−0.2(αdp,t + dr,t)}, (6) where α is a constant that balances the positional and rotational rewards.
Cost This constraint is more demanding than Wall Down, where we require the ball thrown to fit through a specified narrow hole. If the ball hits the wall, the cost is 1, otherwise it is 0. The size of the wall and the hole can be customized by the user.
B.5 PICK BOTTLES
This environment contains two hands, a table, and five bottles. The five bottles are placed horizontally in a row on the table with very little space between them. We need to pick up two bottles with the two dexterous hands without touching the surrounding bottles, which could cause damage.
Observations The 400-dimensional observation space for the Pick Bottles task is shown in Table 11. It should be noted that since the base of the dual hands in this task is fixed, the observation of the dual hands is reduced by 24 dimensions compared to Table 4.
Actions The 52-dimensional action space for one hand in Pick Bottles task is shown in Table 12.
Reward The reward consists of three parts: the distance from the left hand to the left bottle cap, the distance from the right hand to the right bottle cap, and the height of the two bottles that need to be picked. The height of the two bottles that need to be picked is given by dheight. The position difference between the left hand to the left bottle cap dleft is given by dleft = ∥xlhand − xlbcap∥2.The position difference between the right hand to the right bottle cap dright is given by dright = ∥xrhand − xrbcap∥2. The reward is given by this specific formula:
r = dheight ∗ 20− dleft − dright (7)
Cost The constraint of this environment is that we can’t touch other bottles when we pick the bottle. When our hand, or the bottle we picked, touches other bottles, the cost is set to 1, otherwise it is 0.
B.6 JENGA
Jenga is a block-stacking game that is very suitable for Safe RL algorithm evaluation. Players take turns removing one block at a time from a tower made up of many blocks. In this environment, we need to remove the block we want from the 16 blocks without knocking over the others.
Observations The 411-dimensional observation space for the Jenga task is shown in Table 13. It should be noted that since the base of the dual hands in this task is fixed, the observation of the dual hands is reduced by 24 dimensions compared to Table 4.
Actions The 52-dimensional action space for one hand in Jenga task is shown in Table 14.
Reward For timestep t, let xb,t be the position of the left middle finger, xg,t be the position of the left end of the object, and dp,t = ∥xb,t − xg,t∥2. Define dy,t as the y-coordinate of the object center; the reward is defined as follows:
rt = 30 ∗ (dy,t + 0.6)− dp,t (8)
Cost The constraint of this environment is that we cannot touch the other blocks in the Jenga tower. The cost is 1 if any of the other blocks moves more than 0.01 cm, and 0 otherwise.
B.7 CLEAN HOUSE
This environment is in a scene we usually clean at home. We need to control the broom with both hands to sweep the trash from the ground into the dustpan without touching other furniture (e.g. chairs).
Observations The 431-dimensional observation space for the Clean House task is shown in Table 15. It should be noted that since the base of the dual hands in this task is fixed, the observation of the dual hands is reduced by 24 dimensions compared to Table 4.
Actions The 52-dimensional action space for one hand in Clean House task is shown in Table 16.
Reward The reward consists of four parts: the distance from the left hand to the left handle position of the broom, the distance from the right hand to the right handle position of the broom, the object (trash) position to the bottom of broom position, and the distance from the object to the target (dustpan) point. The distance from the object to the target point is given by dtarget. The position difference from the left hand to the left handle position of the broom is given by dleft. The position difference from the right hand to the right handle position of the broom is given by dright. The object position to the bottom of broom position is given by dbottom. The reward is given by this specific formula:
r = 50− dtarget ∗ 10− 5 ∗ dleft − 5 ∗ dright (9)
Cost The constraint of this environment is that we can not damage other furniture when we sweep the floor. So there is a chair in the path of the trash and the dustpan. The cost is 1 when the broom touches the chair and make it move, and 0 otherwise.
C FOUR ENVIRONMENTS IN SAFE FINGER.
All environments come from Safe Finger. The difference between Safe Finger and Safe Joint is whether the constraint is imposed on a single joint or on the whole finger, as described below:
Safety Joint. In these tasks, we constrain the freedom of joint ④ of forefinger (please refer to Figure 6 (a) and (f)). Without the constraint, joint ④ has freedom of [−20◦, 20◦]. The safety tasks restrict joint ④ within [−10◦, 10◦]. Let ang 4 be the angle of joint ④, and the cost is defined as:
ct = I(ang 4 ̸∈ [−10◦, 10◦]). (10)
Safety Finger. In these tasks, we constrain the freedom of joints ②, ③ and ④ of forefinger (please refer to Figure 6 (b) and (f)). Without the constraint, joints ② and ③ have freedom of [0◦, 90◦] and joint ④ of [−20◦, 20◦]. The safety tasks restrict joints ②, ③, and ④ within [22.5◦, 67.5◦], [22.5◦, 67.5◦], and [−10◦, 10◦] respectively. Let ang 2, ang 3, ang 4 be the angles of joints ②, ③,
④, and the cost is defined as: ct = I(ang 2 ̸∈ [22.5◦, 67.5◦], or ang 3 ̸∈ [22.5◦, 67.5◦], or ang 4 ̸∈ [−10◦, 10◦]). (11)
Hand Over refers to the setting in which two Shadow Hands, with palms facing up and opposite each other, pass an object between them in Safe Finger; it also covers the case in which the object needs to be thrown from a vertically oriented hand to the palm-up hand. Specific information can be found in Appendix B.2.
D POINT CLOUD
We replace the object state information with point clouds in the case of 128 parallel environments. The point cloud is captured by the depth camera and downsampled to 2048 points. The features are extracted using PointNet (Qi et al., 2017) into a 128-dimensional vector and concatenated with the other observations. It can be seen that, for the same number of episodes and the same number of environments, the performance with point cloud input is not as good as with full-state input, but it still achieves reasonable performance. Using the same RTX 3090 GPU, point cloud RL reaches only 200+ fps, while the full-state version can reach 30000+. In fact, we can only open up to 128 environments when using point clouds. This is a limitation of Isaac Gym's poor parallel support for cameras. We further refined the method to enhance the parallelization of the point cloud extraction in order to close this gap. Compared to Isaac Gym's original code, the speedup is 1.46 times, going from 232 fps to 339 fps.
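The following is a simplified sketch of the point cloud branch described above: random downsampling to a fixed size followed by a PointNet-style shared-MLP encoder whose output is concatenated with the remaining observations. It is a stand-in for the actual PointNet (Qi et al., 2017) pipeline used in the benchmark; the layer sizes and the use of random subsampling are our own assumptions.

```python
import torch
import torch.nn as nn

class PointFeatureEncoder(nn.Module):
    """PointNet-style encoder: a shared per-point MLP followed by a max-pool,
    mapping (B, N, 3) point clouds to 128-dimensional features."""

    def __init__(self, feat_dim=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, 128), nn.ReLU(),
            nn.Linear(128, feat_dim),
        )

    def forward(self, points):                   # points: (B, N, 3)
        per_point = self.mlp(points)              # (B, N, feat_dim)
        return per_point.max(dim=1).values        # (B, feat_dim)

def downsample(points, n_points=2048):
    """Randomly subsample each cloud to a fixed size before encoding."""
    idx = torch.randperm(points.shape[1])[:n_points]
    return points[:, idx, :]

# Usage sketch: concatenate point-cloud features with the other observations.
# encoder = PointFeatureEncoder()
# obs = torch.cat([encoder(downsample(clouds)), other_obs], dim=-1)
```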
E DETAILS OF BENCHMARK ALGORITHMS
In this section, we review the key steps of typical Safe RL algorithms implemented in this benchmark, which include CPO (Achiam et al., 2017a), PCPO (Yang et al., 2020b), FOCOPS (Zhang et al., 2020b), P3O (Zhang et al., 2022), PPO-Lag (Ray et al., 2019a), TRPO-Lag (Ray et al., 2019a), CPPO-PID (Stooke et al., 2020b), and IPO (Liu et al., 2020). We implemented all of these algorithms and check the correctness but only evaluated some of them in TrustDeHands. Firstly, we will give a brief introduction to these algorithms below and give the hyperparameters of the algorithms we used in our evaluation. Then we have verified our re-implementations in other Safe RL benchmarks.
E.1 CPO (ACHIAM ET AL., 2017A)
For a given policy $\pi_{\theta_k}$, CPO updates the new policy $\pi_{\theta_{k+1}}$ as follows:
$$\pi_{\theta_{k+1}} = \arg\max_{\pi_\theta \in \Pi_\theta} \mathbb{E}_{s \sim d^{\rho_0}_{\pi_{\theta_k}}(\cdot),\, a \sim \pi_\theta(\cdot|s)}\left[A_{\pi_{\theta_k}}(s,a)\right] \quad (12)$$
$$\text{s.t. } J_c(\pi_{\theta_k}) + \frac{1}{1-\gamma}\,\mathbb{E}_{s \sim d^{\rho_0}_{\pi_{\theta_k}}(\cdot),\, a \sim \pi_\theta(\cdot|s)}\left[A^c_{\pi_{\theta_k}}(s,a)\right] \le b, \quad (13)$$
$$\bar{D}_{\mathrm{KL}}(\pi_\theta, \pi_{\theta_k}) = \mathbb{E}_{s \sim d^{\rho_0}_{\pi_{\theta_k}}(\cdot)}\left[\mathrm{KL}(\pi_\theta, \pi_{\theta_k})[s]\right] \le \delta. \quad (14)$$
It is impractical to solve problem (12) directly due to the computational cost. Achiam et al. (2017a) suggest finding convex approximations to replace the terms $A_{\pi_{\theta_k}}(s,a)$ and $\bar{D}_{\mathrm{KL}}(\pi_\theta, \pi_{\theta_k})$ in Eqs. (12)-(14). Concretely, Achiam et al. (2017a) use a first-order Taylor expansion of $J(\pi_\theta)$ to replace the objective (12) as follows,
$$\frac{1}{1-\gamma}\,\mathbb{E}_{s \sim d^{\rho_0}_{\pi_{\theta_k}}(\cdot),\, a \sim \pi_{\theta_k}(\cdot|s)}\left[\frac{\pi_\theta(a|s)}{\pi_{\theta_k}(a|s)} A_{\pi_{\theta_k}}(s,a)\right] = J(\pi_\theta) - J(\pi_{\theta_k}) \approx (\theta - \theta_k)^\top \nabla_\theta J(\pi_\theta).$$
Similarly, Achiam et al. (2017a) use the following approximations to turn the constrained policy optimization (12)-(14) into a convex problem,
$$\frac{1}{1-\gamma}\,\mathbb{E}_{s \sim d^{\rho_0}_{\pi_{\theta_k}}(\cdot),\, a \sim \pi_{\theta_k}(\cdot|s)}\left[\frac{\pi_\theta(a|s)}{\pi_{\theta_k}(a|s)} A^c_{\pi_{\theta_k}}(s,a)\right] \approx (\theta - \theta_k)^\top \nabla_\theta J_c(\pi_\theta), \quad (15)$$
$$\bar{D}_{\mathrm{KL}}(\pi_\theta, \pi_{\theta_k}) \approx (\theta - \theta_k)^\top H (\theta - \theta_k), \quad (16)$$
where $H$ is the Hessian matrix of $\bar{D}_{\mathrm{KL}}(\pi_\theta, \pi_{\theta_k})$, i.e.,
$$H[i,j] := \frac{\partial^2}{\partial\theta_i \partial\theta_j}\,\mathbb{E}_{s \sim d^{\rho_0}_{\pi_{\theta_k}}(\cdot)}\left[\mathrm{KL}(\pi_\theta, \pi_{\theta_k})[s]\right];$$
Eq. (16) is the second-order approximation of (14).
Let $\lambda^\star, \nu^\star$ be the dual solution of the following problem
$$\lambda^\star, \nu^\star = \arg\max_{\lambda \ge 0,\, \nu \ge 0}\left\{-\frac{1}{2\lambda}\left(g^\top H^{-1} g - 2\nu r + s\nu^2\right) + \nu c - \frac{\lambda\delta}{2}\right\},$$
where $g = \nabla_\theta \mathbb{E}_{s \sim d^{\rho_0}_{\pi_{\theta_k}}(\cdot),\, a \sim \pi_\theta(\cdot|s)}\left[A_{\pi_{\theta_k}}(s,a)\right]$, $a = \nabla_\theta \mathbb{E}_{s \sim d^{\rho_0}_{\pi_{\theta_k}}(\cdot),\, a \sim \pi_\theta(\cdot|s)}\left[A^c_{\pi_{\theta_k}}(s,a)\right]$, $r = g^\top H^{-1} a$, $s = a^\top H^{-1} a$, and $c = J_c(\pi_{\theta_k}) - b$. Finally, CPO updates the parameters with conjugate gradient as follows: if the convex approximation to CPO is feasible, then
$$\theta_{k+1} = \theta_k + \frac{1}{\lambda^\star} H^{-1}\left(g - \nu^\star a\right);$$
otherwise,
$$\theta_{k+1} = \theta_k - \sqrt{\frac{2\delta}{a^\top H^{-1} a}}\, H^{-1} a.$$
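As a reading aid for the dual problem and the resulting step, the following hedged sketch forms the CPO update direction from precomputed quantities. A real implementation solves the dual in closed form and adds a backtracking line search; the coarse grid search over $(\lambda, \nu)$ below is only for readability, and all names are our own assumptions.

```python
import torch

def cpo_step_from_dual(g, a, Hinv_g, Hinv_a, c, delta,
                       lam_grid=None, nu_grid=None):
    """Sketch of the CPO step: maximize the dual above over (lambda, nu) by a
    coarse grid search, then return (1/lambda*) H^{-1}(g - nu* a)."""
    q = g.dot(Hinv_g)          # g^T H^{-1} g
    r = g.dot(Hinv_a)          # g^T H^{-1} a
    s = a.dot(Hinv_a)          # a^T H^{-1} a
    lam_grid = lam_grid if lam_grid is not None else torch.linspace(0.1, 50.0, 200)
    nu_grid = nu_grid if nu_grid is not None else torch.linspace(0.0, 20.0, 200)

    best_val, best = -float("inf"), (1.0, 0.0)
    for lam in lam_grid:
        for nu in nu_grid:
            # Dual objective evaluated at this (lambda, nu) pair.
            val = -(q - 2 * nu * r + s * nu**2) / (2 * lam) + nu * c - lam * delta / 2
            if val > best_val:
                best_val, best = val, (lam, nu)
    lam_star, nu_star = best
    return (Hinv_g - nu_star * Hinv_a) / lam_star
```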
E.2 PCPO (YANG ET AL., 2020B)
Projection-Based Constrained Policy Optimization (PCPO) is an iterative method for optimizing policies in a two-step process: the first step performs a local reward improvement update, while the second step reconciles any constraint violation by projecting the policy back onto the constraint set.
Reward Improvement.
$$\pi_{\theta_{k+\frac{1}{2}}} = \arg\max_{\pi_\theta \in \Pi_\theta} \mathbb{E}_{s \sim d^{\rho_0}_{\pi_{\theta_k}}(\cdot),\, a \sim \pi_\theta(\cdot|s)}\left[A_{\pi_{\theta_k}}(s,a)\right],$$
$$\text{s.t. } \bar{D}_{\mathrm{KL}}(\pi_\theta, \pi_{\theta_k}) = \mathbb{E}_{s \sim d^{\rho_0}_{\pi_{\theta_k}}(\cdot)}\left[\mathrm{KL}(\pi_\theta, \pi_{\theta_k})[s]\right] \le \delta;$$
Projection.
$$\pi_{\theta_{k+1}} = \arg\min_{\pi_\theta \in \Pi_\theta} D\left(\pi_\theta, \pi_{\theta_{k+\frac{1}{2}}}\right),$$
$$\text{s.t. } J_c(\pi_{\theta_k}) + \frac{1}{1-\gamma}\,\mathbb{E}_{s \sim d^{\rho_0}_{\pi_{\theta_k}}(\cdot),\, a \sim \pi_\theta(\cdot|s)}\left[A^c_{\pi_{\theta_k}}(s,a)\right] \le b.$$
Then, following CPO (Achiam et al., 2017a), Yang et al. (2020b) use a convex approximation to the original problem and obtain the update rule
$$\theta_{k+1} = \theta_k - \sqrt{\frac{2\delta}{g^\top H^{-1} g}}\, H^{-1} g - \max\left\{0,\; \frac{\sqrt{\frac{2\delta}{g^\top H^{-1} g}}\, a^\top H^{-1} g + c}{a^\top L^{-1} a}\right\} L^{-1} a,$$
where $L = I$ if $D$ is the $\ell_2$-norm, and $L = H$ if $D$ is the KL-divergence.
E.3 FOCOPS (ZHANG ET AL., 2020B)
Zhang et al. (2020b) propose First Order Constrained Optimization in Policy Space (FOCOPS), a two-step approach, which we present as follows.
Step 1: Finding the optimal update policy.
First, for a given policy $\pi_{\theta_k}$, FOCOPS finds an optimal update policy $\pi^\star$ by solving the optimization problem (12)-(14) in the non-parameterized policy space:
$$\pi^\star = \arg\max_{\pi \in \Pi} \mathbb{E}_{s \sim d^{\rho_0}_{\pi_{\theta_k}}(\cdot),\, a \sim \pi(\cdot|s)}\left[A_{\pi_{\theta_k}}(s,a)\right] \quad (17)$$
$$\text{s.t. } J_c(\pi_{\theta_k}) + \frac{1}{1-\gamma}\,\mathbb{E}_{s \sim d^{\rho_0}_{\pi_{\theta_k}}(\cdot),\, a \sim \pi(\cdot|s)}\left[A^c_{\pi_{\theta_k}}(s,a)\right] \le b, \quad (18)$$
$$\bar{D}_{\mathrm{KL}}(\pi, \pi_{\theta_k}) = \mathbb{E}_{s \sim d^{\rho_0}_{\pi_{\theta_k}}(\cdot)}\left[\mathrm{KL}(\pi, \pi_{\theta_k})[s]\right] \le \delta. \quad (19)$$
If $\pi_{\theta_k}$ is feasible, then the optimal policy for (17)-(19) takes the following form:
$$\pi^\star(a|s) = \frac{\pi_{\theta_k}(a|s)}{Z_{\lambda,\nu}(s)} \exp\left(\frac{1}{\lambda}\left(A_{\pi_{\theta_k}}(s,a) - \nu A^c_{\pi_{\theta_k}}(s,a)\right)\right), \quad (20)$$
where $Z_{\lambda,\nu}(s)$ is the partition function which ensures that (20) is a valid probability distribution, and $\lambda$ and $\nu$ are solutions to the optimization problem
$$\min_{\lambda,\nu \ge 0}\; \lambda\delta + \nu\tilde{b} + \lambda\,\mathbb{E}_{s \sim d^{\rho_0}_{\pi_{\theta_k}}(\cdot),\, a \sim \pi^\star(\cdot|s)}\left[\log Z_{\lambda,\nu}(s)\right],$$
where $\tilde{b} = (1-\gamma)\left(b - J_c(\pi_{\theta_k})\right)$.
Step 2: Projection.
Then, FOCOPS projects the policy found in the previous step back into the parameterized policy space $\Pi_\theta$ by solving for the policy $\pi_\theta \in \Pi_\theta$ closest to $\pi^\star$, in order to obtain $\pi_{\theta_{k+1}}$:
$$\pi_{\theta_{k+1}} = \arg\min_{\pi_\theta \in \Pi_\theta} \mathbb{E}_{s \sim d^{\rho_0}_{\pi_{\theta_k}}(\cdot)}\left[\mathrm{KL}(\pi_\theta, \pi^\star)[s]\right].$$
In practice, stochastic gradient descent is applied to obtain the solution $\theta_{k+1}$ of the above problem.
E.4 PPO-LAG
The Lagrangian approach is a standard way to solve the CMDP (1); it is also known as primal-dual policy optimization:
$$(\pi^\star, \lambda^\star) = \arg\min_{\lambda \ge 0}\max_{\pi \in \Pi_\theta}\left\{J(\pi) - \lambda\left(J_c(\pi) - b\right)\right\}. \quad (21)$$
TRPO-Lag and PPO-Lag combine the Lagrangian approach with TRPO and PPO, respectively. Concretely, PPO uses the following clipped surrogate to replace $J(\pi)$ in (21),
$$\mathcal{L}^r_{\mathrm{clip}}(\pi_\theta) = \mathbb{E}_{s \sim d^{\rho_0}_{\pi_k}(\cdot),\, a \sim \pi_k(\cdot|s)}\left[-\min\left\{\frac{\pi_\theta(a|s)}{\pi_k(a|s)} A_{\pi_k}(s,a),\; \mathrm{clip}\left(\frac{\pi_\theta(a|s)}{\pi_k(a|s)}, 1-\epsilon, 1+\epsilon\right) A_{\pi_k}(s,a)\right\}\right],$$
where $\pi_k$ is short for $\pi_{\theta_k}$. Replacing $A_{\pi_k}(s,a)$ with $A^c_{\pi_k}(s,a)$, we obtain $\mathcal{L}^c_{\mathrm{clip}}$ as follows,
$$\mathcal{L}^c_{\mathrm{clip}}(\pi_\theta) = \mathbb{E}_{s \sim d^{\rho_0}_{\pi_k}(\cdot),\, a \sim \pi_k(\cdot|s)}\left[-\min\left\{\frac{\pi_\theta(a|s)}{\pi_k(a|s)} A^c_{\pi_k}(s,a),\; \mathrm{clip}\left(\frac{\pi_\theta(a|s)}{\pi_k(a|s)}, 1-\epsilon, 1+\epsilon\right) A^c_{\pi_k}(s,a)\right\}\right].$$
Then, PPO-Lag updates the policy as follows,
$$(\pi_{k+1}, \lambda_{k+1}) = \arg\min_{\lambda \ge 0}\max_{\pi_\theta \in \Pi_\theta}\left\{\mathcal{L}^r_{\mathrm{clip}}(\pi_\theta) - \lambda\left(\mathcal{L}^c_{\mathrm{clip}}(\pi_\theta) - b\right)\right\}. \quad (22)$$
All of the above terms can be estimated from samples collected by the policy $\pi_k$. PPO-Lag then updates the parameters with a first-order optimizer as follows,
$$\theta_{k+1} = \theta_k + \eta \frac{\partial}{\partial\theta}\left(\mathcal{L}^r_{\mathrm{clip}}(\pi_\theta) - \lambda\left(\mathcal{L}^c_{\mathrm{clip}}(\pi_\theta) - b\right)\right)\Big|_{\theta=\theta_k,\, \lambda=\lambda_k}, \quad (23)$$
$$\lambda_{k+1} = \left(\lambda_k + \eta\left(\mathcal{L}^c_{\mathrm{clip}}(\pi_\theta) - b\right)\right)_+\Big|_{\theta=\theta_k}, \quad (24)$$
where $\eta > 0$ is the step size and $(\cdot)_+$ denotes the projection onto the nonnegative reals.
E.5 TRPO-LAG
TRPO-Lag shares a similar idea but is adapted to TRPO: it replaces $J(\pi_\theta)$ by the first-order surrogate
$$J(\pi_\theta) \approx J(\pi_{\theta_k}) + (\theta - \theta_k)^\top \nabla_\theta J(\pi_\theta) =: \mathcal{L}^r(\pi_\theta). \quad (25)$$
Similarly,
$$J_c(\pi_\theta) \approx J_c(\pi_{\theta_k}) + (\theta - \theta_k)^\top \nabla_\theta J_c(\pi_\theta) =: \mathcal{L}^c(\pi_\theta), \quad (26)$$
and
$$\bar{D}_{\mathrm{KL}}(\pi_\theta, \pi_{\theta_k}) \approx (\theta - \theta_k)^\top H (\theta - \theta_k), \quad (27)$$
where $H$ is the Hessian matrix of $\bar{D}_{\mathrm{KL}}(\pi_\theta, \pi_{\theta_k})$, i.e.,
$$H[i,j] := \frac{\partial^2}{\partial\theta_i \partial\theta_j}\,\mathbb{E}_{s \sim d^{\rho_0}_{\pi_{\theta_k}}(\cdot)}\left[\mathrm{KL}(\pi_\theta, \pi_{\theta_k})[s]\right].$$
Then, TRPO-Lag updates the policy as follows,
$$(\pi_{k+1}, \lambda_{k+1}) = \arg\min_{\lambda \ge 0}\max_{\pi_\theta \in \Pi_\theta}\left\{\mathcal{L}^r(\pi_\theta) - \lambda\left(\mathcal{L}^c(\pi_\theta) - b\right)\right\}\Big|_{\theta=\theta_k,\, \lambda=\lambda_k}, \quad (28)$$
where the policy parameter $\theta$ satisfies the trust-region condition $(\theta - \theta_k)^\top H (\theta - \theta_k) \le \delta$.
E.6 P3O (ZHANG ET AL., 2022)
P3O solves the cumbersome constrained policy iteration via a single minimization of an equivalent unconstrained problem as follows,
$$\pi_{k+1} = \arg\min_{\pi \in \Pi_\theta}\left\{\mathbb{E}_{s \sim d^{\rho_0}_{\pi_k}(\cdot),\, a \sim \pi_k(\cdot|s)}\left[\frac{\pi(a|s)}{\pi_k(a|s)} A_{\pi_k}(s,a)\right] + \kappa B(\pi, b)\right\}, \quad (29)$$
where $\kappa$ is a positive scalar, and the penalty term $B(\pi, b)$ is defined as follows,
$$B(\pi, b) = \max\left\{0,\; \mathbb{E}_{s \sim d^{\rho_0}_{\pi_k}(\cdot),\, a \sim \pi_k(\cdot|s)}\left[\frac{\pi(a|s)}{\pi_k(a|s)} A^c_{\pi_k}(s,a)\right] + (1-\gamma)\left(J_c(\pi_k) - b\right)\right\}. \quad (30)$$
P3O utilizes a simple yet effective penalty approach to eliminate the cost constraint and removes the trust-region constraint via the clipped surrogate objective.
For the practical implementation, P3O considers the following optimization objective:
$$\mathcal{L}^{\mathrm{P3O}}(\theta) = \mathcal{L}^{\mathrm{P3O}}_r(\theta) + \kappa \max\left\{0, \mathcal{L}^{\mathrm{P3O}}_c(\theta)\right\}, \quad (31)$$
where
$$\mathcal{L}^{\mathrm{P3O}}_r(\theta) = \mathbb{E}\left[-\min\left\{\frac{\pi_\theta(a|s)}{\pi_k(a|s)} A_{\pi_k}(s,a),\; \mathrm{clip}\left(\frac{\pi_\theta(a|s)}{\pi_k(a|s)}, 1-\epsilon, 1+\epsilon\right) A_{\pi_k}(s,a)\right\}\right],$$
$$\mathcal{L}^{\mathrm{P3O}}_c(\theta) = \mathbb{E}\left[\max\left\{\frac{\pi_\theta(a|s)}{\pi_k(a|s)} A^c_{\pi_k}(s,a),\; \mathrm{clip}\left(\frac{\pi_\theta(a|s)}{\pi_k(a|s)}, 1-\epsilon, 1+\epsilon\right) A^c_{\pi_k}(s,a)\right\} + (1-\gamma)\left(J_c(\pi_k) - b\right)\right],$$
and the notation $\mathbb{E}[\cdot]$ is short for $\mathbb{E}_{s \sim d^{\rho_0}_{\pi_k}(\cdot),\, a \sim \pi_k(\cdot|s)}[\cdot]$. All of the terms in (31) can be estimated from the samples collected by $\pi_k$.
In each round, P3O chooses the penalty parameter adaptively according to the rule
$$\kappa \leftarrow \min\left\{\rho\kappa, \kappa_{\max}\right\}, \quad (32)$$
where $\rho > 1$ and $\kappa_{\max}$ is a positive scalar.
Finally, P3O updates the policy parameter as follows,
$$\theta \leftarrow \theta - \eta \frac{\partial}{\partial\theta} \mathcal{L}^{\mathrm{P3O}}(\theta). \quad (33)$$
E.7 IPO (LIU ET AL., 2020)
IPO augments the objective with logarithmic barrier functions (Nemirovski, 2004) to learn a safe policy. Concretely, IPO updates the policy as follows,
$$\pi_{k+1} = \arg\max_{\pi \in \Pi_\theta}\left\{\mathcal{L}^r_{\mathrm{clip}}(\pi) + \phi(\pi)\right\}, \quad (34)$$
where $\mathcal{L}^r_{\mathrm{clip}}(\pi)$ is the clipped objective and $\phi(\pi)$ is the logarithmic barrier function associated with the CMDP constraint,
$$\phi(\pi) = \frac{1}{m}\log\left(b - J_c(\pi)\right), \quad (35)$$
where m > 0 is a hyper-parameter that needs to be tuned.
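A minimal sketch of the IPO objective follows; `cost_estimate` stands for a differentiable surrogate estimate of $J_c(\pi_\theta)$ (e.g., a clipped cost surrogate added to the previous cost return), and the clamp guards against the barrier becoming undefined when the iterate is infeasible. All names are illustrative.

```python
import torch

def ipo_objective(ratio, adv_r, cost_estimate, budget, m=20.0, clip=0.2):
    surr_r = torch.min(ratio * adv_r,
                       torch.clamp(ratio, 1 - clip, 1 + clip) * adv_r).mean()
    slack = torch.as_tensor(budget - cost_estimate, dtype=torch.float32)
    # log barrier of Eq. (35): tends to -inf as J_c(pi) approaches the budget b
    barrier = torch.log(torch.clamp(slack, min=1e-8)) / m
    return surr_r + barrier
```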
E.8 CPPO-PID (STOOKE ET AL., 2020B)
CPPO-PID (Stooke et al., 2020b) also uses the primal-dual policy optimization method to solve the CMDP problem,
$$(\pi^\star, \lambda^\star) = \arg\min_{\lambda \geq 0} \max_{\pi \in \Pi_\theta} \left\{ J(\pi) - \lambda \left( J_c(\pi) - b \right) \right\}. \quad (36)$$
The main difference between CPPO-PID and the previous Lagrangian methods lies in the multiplier update: instead of the gradient-ascent rule used above, CPPO-PID applies a PID control technique (Johnson & Moradi, 2005) to update the Lagrange multiplier.
First, CPPO-PID updates the parameter θ as follows,
$$\theta_{k+1} = \theta_k + \eta \left. \frac{\partial}{\partial \theta} \left( \mathcal{L}^r_{\mathrm{clip}}(\pi_\theta) - \lambda \left( \mathcal{L}^c_{\mathrm{clip}}(\pi_\theta) - b \right) \right) \right|_{\theta=\theta_k,\, \lambda=\lambda_k}, \quad (37)$$
where $\eta > 0$ is the step size.
Then, CPPO-PID updates the parameter $\lambda$ according to the PID method (Algorithm 2 in Stooke et al. (2020b)):
$$\lambda_{k+1} = \mathrm{PID}(K_P, K_I, K_D, \pi_{\theta_k}, n), \quad (38)$$
where $K_P, K_I, K_D$ are parameters that need to be tuned, and $n$ is the iteration number.
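A simplified version of this PID multiplier update is sketched below (cf. Algorithm 2 in Stooke et al. (2020b)); the gains and the exact smoothing of the derivative term are implementation choices, so the snippet should be read as illustrative.

```python
class PIDLagrangian:
    """Proportional-integral-derivative control of the Lagrange multiplier."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_cost = 0.0

    def update(self, ep_cost, budget):
        error = ep_cost - budget                      # constraint violation J_c - b
        self.integral = max(self.integral + error, 0.0)
        derivative = max(ep_cost - self.prev_cost, 0.0)
        self.prev_cost = ep_cost
        return max(self.kp * error + self.ki * self.integral + self.kd * derivative, 0.0)
```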
F CORRECTNESS VERIFICATION
F.1 TASK LIST
Below we list four safety environment suites that provide interesting tasks for Safe RL. Three of them are popular existing suites: MuJoCo-Velocity (Todorov et al., 2012), Safety-Gym (Ray et al., 2019a), and Bullet-Safety-Gym (Gronauer, 2022). We test the 8 algorithms on 26 tasks in these safety environments to verify the re-implementations used in TrustDeHands. The complete list of these tasks is shown in Table 17.
F.2 TASK PERFORMANCE
We show the performance of our re-implementations on these tasks in Tables 18, 19, 20, 21, 22, and 23.
2. What are the strengths and weaknesses of the proposed GPU-accelerated training environments and re-implementation of safe RL algorithms?
3. Do you have any concerns about the choice of dexterous manipulation as the focus of the Safe RL benchmark?
4. How do the authors validate their re-implemented algorithms, and what insights can be gained from analyzing their performance?
5. Are there any questionable or unsupported claims made in the paper regarding earlier work and implementation issues?
6. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
The authors propose new Safe RL benchmark environments for the task of dexterous manipulation, provide Isaac Gym training environments for GPU accelerated training, and re-implement many common Safe RL algorithms which are tested on the new benchmarks.
Strengths And Weaknesses
Strengths:
GPU-accelerated benchmark environments seem potentially useful to the community.
Re-implementation and evaluation of several safe RL algorithms could also be useful, because, as the authors noted, reproducibility can be an issue due to all the design choices made in RL papers.
Weaknesses:
Narrow: The choice of dexterous manipulation seems a bit niche for a Safe RL benchmark, especially from the claimed sim2real perspective. It's not a mature research area in robotics, the hardware like Shadow Hand are more research prototypes. As a Safe RL benchmark, it might have made more sense to have commonly used fFanka/UR/Yumi arms or a Spot/ANYmal with the spot arm, which although not as dexterous, are at least intended to have the durability required in real world use. It is not obvious why dexterous manipulation is a particularly good fit for safe RL either, because contact is needed. This is also evident from some of the proposed tasks like cleaning, where not allowing the robot to touch furniture seems a bit unrealistic.
Validation: The conclusions claims "We checked the correctness of our algorithms on the existing Safe RL benchmark" (I assume you refer to SafetyGym?), but I cannot find any data on verification of the re-implemented algorithms in the paper. If you do not show that your implementation gets similar results, how can we trust that it is correct? Assuming they are correct, there is a wealth of data on how they perform in the appendix, which would have been interesting to analyze further. Can we glean any new insights from this?
Questionable or unsupported claims: The paper makes many critical claims in relation to earlier work. It is important to motivate your results, but some of it seems heavy-handed. The intro spends a lot of time on shortcomings in implementation details of safe RL approaches. The arguments against Safety Gym also seem a bit unfocused (e.g. implemented in Tensorflow1, not maintained) to the point of being questionable (e.g. it requires old hardware?). Most of these are implementation issues, which I agree can be important, but how do we know your benchmark will be maintained in 3 years? I note that you used Isaac Gym, but there is already a new simulator out from Nvidia (Isaac Sim).
I also note that neither the Gym or the Safety Gym are published anywhere. I'm all for rewarding useful software and benchmarks are important, but for publication it needs to be carefully motivated such that one could credible imagine that it will have some impact. A lot of work clearly went into this paper, but it is unclear what we actually learn from it except that there is yet another RL benchmark environment. If it had more rigorously motivated why this is a good benchmark, validated its re-implementations of algorithms, and provided new analysis or insight on their performance, the scientific value of this paper would have been higher.
Minor:
Presentation unfortunately has many grammar and spelling errors.
Clarity, Quality, Novelty And Reproducibility
The paper is readable, but the language could be improved.
Just from the first section:
before the rl deploy.. -> before RL deployment
...can produce undesired miseries" are there desired miseries?
, which covers many reserach (..." typo + grammar
capitalization: years, Robo
dynamic environment[s]
safe[ty] and trustworth[iness] of..
Some important details are just incomprehensible, e.g. "This may be due to various approximation methods in CPO, which puts an emergency on Safe RL towards complex manipulation areas." - What does "putting an emergency" mean? |
ICLR | Title
A Massively Parallel Benchmark for Safe Dexterous Manipulation
Abstract
Safe Reinforcement Learning (Safe RL) aims to maximize expected total rewards while avoiding violations of safety constraints. Although a plethora of safety-constrained environments have been developed to evaluate Safe RL methods, most of them focus on navigation tasks, which are rather simple and have a non-trivial gap with real-world applications. For robotics studies, dexterous manipulation is becoming ubiquitous; however, safe dexterous manipulation is rarely studied in robotics applications. In this paper, we propose TrustDeHands, a massively parallel benchmark for Safe RL studies on safe dexterous manipulation tasks. TrustDeHands is built within Isaac Gym, a GPU-level parallel simulator that enables a highly efficient RL training process. To stay close to real-world settings, TrustDeHands offers multi-modal visual inputs, including RGB, RGB-D, and point clouds, and supports a variety of arms and dexterous hands from different brands. Moreover, TrustDeHands provides a solid implementation of eight popular safe policy optimization algorithms; this facilitates trustworthy validation of Safe RL methods outside navigation tasks. TrustDeHands includes a myriad of challenging tasks that require safety awareness (e.g., Jenga). Results on these tasks show that Safe RL methods can achieve better performance than classical RL algorithms, indicating the effectiveness of Safe RL in safe robot manipulation tasks. To the best of our knowledge, TrustDeHands is the first benchmark targeting safe dexterous manipulation. We expect this benchmark to consistently serve as a reliable evaluation suite for future Safe RL developments and to further promote the integration of the Safe RL and dexterous manipulation lines of research. The code and demonstrations can be found at https://sites.google.com/view/trustdehands/.
1 INTRODUCTION
Reinforcement Learning (RL) is a powerful way to solve sequential decision problems and has achieved superhuman performance in games (Silver et al., 2016; 2017; Vinyals et al., 2019; OpenAI, 2018), robotics (Andrychowicz et al., 2020; Chen et al., 2022b), and finance (Hambly et al., 2021). Before RL is deployed in the real world, researchers are tasked with establishing its trustworthiness so as to maximize the benefits of AI systems while minimizing their risks (Xu et al., 2022). The fundamental principle of RL is that an agent tries to maximize cumulative returns by trial and error, but the agent may exhibit dangerous or harmful behaviors during the learning process. Thus, it is important to consider safe exploration, which is the focus of Safe RL. Most existing RL simulators ignore this safety issue: in simulation, failure is acceptable and even desirable as a source of learning signal. In the real world, however, such exploration can cause serious harm.
In recent years, robot manipulation (Billard & Kragic, 2019) has become an important application area for RL, covering a wide range of research (Kim et al., 2021; Okamura et al., 2000; Kumar et al., 2016). Among these directions, dexterous multi-fingered manipulation is the most challenging, as it places higher requirements on control (Bircher et al., 2017; Rahman et al., 2016). To deal with dynamic environments and policy generalization, OpenAI et al. (2019) and Chen et al. (2022a) have studied dexterous manipulation tasks and achieved significant results. Moreover, trustworthiness of the manipulation (Xu et al., 2022) is an important issue that needs to be considered: if a robot is to manipulate objects in the real world, ensuring safety and trustworthiness is the highest priority. Yet previous research offers no benchmark for safe manipulation. We therefore propose TrustDeHands, which uses safe policy learning for dexterous manipulation, hoping to fill this research gap.
Additionally, implementations of many existing Safe RL methods are unavailable, which leaves researchers suffering from incorrect implementations, unfair comparisons, and misleading conclusions. Safe policy learning is critical in real-world RL applications, where dangerous decisions are undesirable. For example, a robot agent should avoid taking actions that irreversibly damage its hardware (Ray et al., 2019a; Dulac-Arnold et al., 2019). Due to its importance, the community has been actively researching safe policy learning (e.g., (Alshiekh et al., 2018; Stooke et al., 2020b; Gu et al., 2021; Yuan et al., 2021; Gronauer, 2022; Yang et al., 2022; Liu et al., 2022)). However, most of the existing work mainly focuses on algorithm design. Among these works, either the authors did not publish the source code (e.g., P3O (Zhang et al., 2022)), or the algorithms were implemented using different frameworks (e.g., PCPO (Yang et al., 2020b) in Theano (Al-Rfou et al., 2016), CPPO-PID (Stooke et al., 2020b) in PyTorch), with divergent approaches (FOCOPS (Zhang et al., 2020b) does not parallelize sample collection while others do), and on separate tasks (FOCOPS is tested solely on MuJoCo-Velocity (Todorov et al., 2012) and CPPO-PID solely on Safety-Gym (Ray et al., 2019a)). While safety-starter-agents (Ray et al., 2019a) exists as a publicly available collection of algorithms, it is implemented in TensorFlow1, requires old hardware and system software, lacks recent updates, and is no longer maintained. As a result, the Safe RL community has experienced serious difficulty in reproducing experimental results, comparing algorithms fairly, and deriving correct insights. An open-source, standardized algorithm implementation for algorithm verification and empirical study is desperately needed.
To facilitate the consideration we mentioned above, we developed a bimanual dexterous manipulation environment: TrustDeHands, with a unified re-implementation of Safe RL algorithms. We highlight three particularly desirable features of TrustDeHands:
• For Safe RL researchers. We provide a series of complex and challenging safe dexterous manipulation tasks. The design of these tasks stems from the need for safety robot manipulation in our daily life (e.g., sweeping the floor without touching other furniture). In these environments, we have done exhaustive experiments with the implemented algorithms and contributed the results, our observations, and analysis for the reference of the community.
• For robotic researchers. We are the first collection of tasks focused on safe dexterous manipulations. In addition to safety research, we also provide a variety of features, including 1) multi-modal information as the policy input (e.g., contact force, RGB image, RGB-D image, point cloud...). 2) customizable dexterous hands and a robotic arm drive to the dexterous hand. These features provide a comprehensive platform for robotic research.
• Unified, highly-optimized, and extensible Safe RL algorithms. We re-implement widely used Safe RL algorithms, which support TrustDeHands and all popular environments in a single welldesigned algorithms framework. We have done maximum abstraction and encapsulation, deriving a similar model structure and update paradigm, thus enabling code reuse, ensuring a clean code style, and making it extremely extensible.
2 RELATED WORK
Safe RL Environments Simulator plays a critical role to the training for RL since it is very expensive to collect data in the real world. Safety-gym (Ray et al., 2019b) introduces a robot that has to navigate through a cluttered environment to achieve a task, which is a suite of complex continuous control environments for Safe RL. Safe-control-gym (Yuan et al., 2021) introduces cart-pole, 1D, and 2D quadrotor dynamic systems to achieve control tasks like stabilization or trajectory tracking, which allows us for constraint specification and disturbance injection onto a robot’s inputs, states, and inertial properties through a portable configuration system. AI Safety Gridworlds (Leike et al., 2017) proposes an environment for evaluating various safe properties of intelligent agents, including safe interruptibility, avoiding side effects, safe exploration, distributional shift, etc. MuJoCoVelocity, originally proposed in (Zhang et al., 2020a), consists of a series of safety tasks like constrainted velocity based on MuJoCo environment (Todorov et al., 2012). However, there still lacks a safe environment for safe robot manipulation, which the difficulty lies in requiring safe highdimensional continuous space control and dealing with the dynamic environment. So we introduce TrustDeHands, which aims to apply Safe RL to dexterous manipulation, providing a more challenging environment for evaluating Safe RL algorithms.
Safe RL Algorithms Since we formulate safe RL under CMDPs (Altman, 1999), in this section we mainly review algorithms w.r.t. CMDPs. For more discussions about safe RL algorithms, please re-
fer to the recent survey (Xu et al., 2022; Gu et al., 2022). With the rise of deep RL, CMDPs are also moving to more high-dimensional continuous control problems. CPO (Achiam et al., 2017b) proposes the general-purpose policy search algorithm for Safe RL with guarantees for near-constraint satisfaction at each iteration. PCPO (Yang et al., 2020a) utilizes a different two-step approach(i.e. first finds the policy with the maximum return, then projects this policy back into the safety region in terms of the minimum KL divergence.). FOCOPS (Zhang et al., 2020a) has adopted a similar idea by directly solving the constrained policy optimization problem via the primal-dual approach (Boyd et al., 2004) then projecting the solution back into the parametric policy space. Traditional robot control also considers the safety problem. Chow et al. (2018; 2019) presents a method via constructing Lyapunov function to guarantees the constraint satisfaction during training. Stooke et al. (2020a) combines PID control with Lagrangian methods which dampens cost oscillations resulting in reduced constraint violations. It is still lacking a unified and efficient framework to cover these algorithms. Therefore, we provide PyTorch-version re-implementations of widely used safe policy optimization algorithms, hoping to facilitate experimental validation in Safe RL research.
Dexterous Manipulation Manipulation is one of the essential research topics in robotics, researchers have long tried to establish a stable theory of manipulation (Billard & Kragic, 2019). However, traditional methods mostly rely on various assumptions, such as knowing the environmental dynamics model or having no uncertainty in the process. In recent years, learning-based approaches have been successful in this regard, coping with uncertainty in perception and even generalizing to unseen objects (Bohg et al., 2013). There are many learning-based benchmarks for robotic manipulation in recent years (Yu et al., 2020; James et al., 2020; Zhu et al., 2020), but none of them use dexterous hands or consider safe constraints. Dexterous multi-finger hands provide intrinsic dexterity for better manipulation in unstructured scenes and contact-rich situations, but additionally bring the challenges of high-dimensional control and complex contact models (Bircher et al., 2017; Rahman et al., 2016). Previous research methods have mostly focused on trajectory optimization or model prediction, which highly relied on accurate dynamics models (Kim et al., 2021; Okamura et al., 2000; Kumar et al., 2016). For example, Williams et al. (2015) performs in-hand manipulation of a cube using a trajectory optimization technique known as Model Predictive Path Integral (MPPI). Charlesworth & Montana (2021) extended the MPPI method to allow objects to be thrown and catch between two hands. OpenAI et al. (2019) solved a Rubik’s cube using model-free RL and domain randomization techniques. Chen et al. (2022a) proposed an in-hand manipulation system to learn how to manipulate a large number of objects of different shapes, and even generalize to unseen objects. Qin et al. (2022; 2021) studied dexterous manipulation learning from human demonstration. Chen et al. (2022c) studied the bimanual dexterous manipulation to solve cooperative manipulation and skill generalization problem. While most of them focus on unconstrained dexterous manipulation, how to do dexterous manipulation safely is an unstudied topic. In this paper, we provide a massively parallel benchmark for safe dexterous manipulation, hoping to facilitate research on how to manipulate safely.
3 THE SAFETY LEARNING ENVIRONMENT
TrustDeHands consists of two parts: the safety learning environment and the safe policy optimization algorithms. In this section, we present the high-level design of the safety learning environment.
3.1 SYSTEM DESIGN AND DATASETS
TrustDeHands is a collection of challenging dexterous manipulation tasks, underpinned by Isaac Gym (Makoviychuk et al., 2021) and capable of high parallelism on the GPU. In TrustDeHands, all tasks require two dexterous hands to manipulate one or more objects. We design a series of tasks that require policies to perform safe dexterous manipulation, including throwing, grasping, jerking, pulling, etc. At the same time, each task provides the customizability of dexterous hands and objects to support a diverse task.
The construction of the dataset includes the configuration of dexterous hands and objects. The core goal of our dataset is to generate a wide variety of scenarios for learning constrained dexterous manipulation. We collected a variety of dexterous multi-finger hands as manipulators, including most of the dexterous hands currently used in robotics. In addition to manipulators, objects also play a crucial role in building datasets. Our manipulation objects are mainly from the YCB (Calli et al., 2017) and SAPIEN (Xiang et al., 2020) datasets. Both datasets contain many objects used in everyday life.
3.2 TASKS REPRESENTATION
TrustDeHands contains 10+ tasks focused on dexterous manipulation. Each task contains two dexterous hands and one or more manipulated objects, such as balls, blocks, etc., with the ultimate goal is to manipulate objects placed at the task-specified locations while to make the agent satisfy. The default dexterous hand used by our framework is the Shadow Hand (ShadowRobot, 2005), more details are provided in Appendix A. The agent performs each task according to its observation, action representation, and its reward and cost function definition. We provide more underlying technical details about the tasks in Appendix B.
Constrained Markov Decision Processes (CMDPs) A Constrained Markov Decision Process (CMDP) is defined as $(S, A, \mathbb{P}, r, \rho_0, \gamma, C)$, where $S$ is the state space, $A$ is the action space, $\mathbb{P}: S \times A \times S \to [0,1]$ is the transition probability function, $r: S \times A \times S \to \mathbb{R}$ is the reward function, $\rho_0(\cdot) \in \mathcal{P}(S)$ is the initial state distribution ($\mathcal{P}(X)$ denotes the set of probability distributions over a set $X$), $\gamma \in [0,1)$ is the discount factor, and $C = \{(c, b)\}$ is the constraint set, where $c: S \times A \times S \to \mathbb{R}$ is a cost function and $b$ is the cost threshold. We use $\pi: S \to \mathcal{P}(A)$ to denote a stationary policy and $\Pi$ to denote the set of all stationary policies. Let $\tau = \{s_t, a_t, r_{t+1}, c_{t+1}\}_{t \geq 0} \sim \pi$ be a trajectory generated by $\pi$, where $s_0 \sim \rho_0(\cdot)$, $a_t \sim \pi(\cdot|s_t)$, $s_{t+1} \sim \mathbb{P}(\cdot|s_t, a_t)$, $r_{t+1} = r(s_{t+1}|s_t, a_t)$, and $c_{t+1} = c(s_{t+1}|s_t, a_t)$. The state value function of $\pi$ is defined as $V_\pi(s) = \mathbb{E}_\pi\left[\sum_{t=0}^{\infty} \gamma^t r_{t+1} \mid s_0 = s\right]$. The goal of reinforcement learning is to maximize the expected total reward, defined as $J(\pi) = \mathbb{E}_{s \sim \rho_0(\cdot)}[V_\pi(s)]$.
We define the cost return function as $J_c(\pi) = \mathbb{E}_{s \sim \rho_0(\cdot)}\left[\sum_{t=0}^{\infty} \gamma^t c_{t+1} \mid s_0 = s\right]$, and the feasible policy set $\Pi_C$ as $\Pi_C = \{\pi \mid \pi \in \Pi,\ J_c(\pi) \leq b,\ \forall (c,b) \in C\}$. The goal of Safe RL is to learn the optimal policy $\pi^\star$ such that
$$\pi^\star = \arg\max_{\pi \in \Pi_C} J(\pi). \quad (1)$$
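In practice, $J(\pi)$ and $J_c(\pi)$ are estimated by Monte Carlo from sampled trajectories. A minimal sketch of the per-trajectory estimates (illustrative, not the benchmark's evaluation code) is shown below.

```python
def discounted_returns(rewards, costs, gamma):
    """Monte Carlo estimates of J(pi) and J_c(pi) from one trajectory {r_t, c_t}."""
    g_r, g_c = 0.0, 0.0
    for r, c in zip(reversed(rewards), reversed(costs)):
        g_r = r + gamma * g_r   # discounted reward return
        g_c = c + gamma * g_c   # discounted cost return
    return g_r, g_c
```

Averaging these quantities over trajectories started from $s_0 \sim \rho_0$ yields the estimates of $J(\pi)$ and $J_c(\pi)$ used to check the constraint $J_c(\pi) \leq b$.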
Observation Here we briefly describe the observation space of the tasks, more details can be seen in Appendix B.1. The observation of all tasks consisted of three parts: state information of the left and right Shadow Hand, and information about the task specification. In each task, the state information of the left and right Shadow Hand is the same, each Shadow Hand contains 24 minimum drive units (which contains four underdriven fingertip units) and its state consists of the following information:
• Dp,Dv,Df ∈ R24, corresponds to all joint DoF (Degree of Freedom) of angle, velocity, and force with drive units, respectively.
• Pw,Rw ∈ R3 represents the position and rotation of the base of the hand. • FT i = [FTpose, FTvl , FTva , FTf , FTt] ∈ R19, corresponds to the pose, linear velocity, angular
velocity, force magnitude, and torque of each fingertip, respectively.
• A ∈ R20/26, indicates the action executed by the hand in the previous step, which is consistent with the action space.
With the above definitions, the state information of one Shadow Hand can be represented as $\mathrm{Hand} = \{D_p, D_v, D_f, P_w, R_w, \{FT_i\}_{i=1}^{5}, A\}$. We characterize the observation of each task by the following information:
{Handleft, Handright, Gtask}, (2) where Gtask represents some observation information specific to different tasks.
Action The dual Shadow Hands have more than 40 dimensions of action space, where each Shadow Hand has five fingers with a total of 24 degrees of freedom, the thumb has 5 joints and 5 degrees of freedom, and all other fingers have 3 degrees of freedom and 4 joints (where the joint at the end of each finger is uncontrollable). Therefore, the action space of each hand is 20 dimensions. If the base of the hand is not fixed, there are six dimensions to represent the translation and rotation of the hand base. For the lower and upper limit of the joint angle, see Table 2. In each step, we use the absolute value of each joint angle as the target, and use the PD controller to make it move.
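As a rough illustration of the low-level control described above, the per-joint torque can be computed with a simple PD law; the gains below are placeholders rather than the values used in the simulator.

```python
import numpy as np

def pd_torque(target_q, q, dq, kp=3.0, kd=0.1):
    """PD control toward the absolute joint-angle targets given by the action.
    target_q, q, dq are arrays over the actuated DoFs; gains are illustrative."""
    return kp * (np.asarray(target_q) - np.asarray(q)) - kd * np.asarray(dq)
```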
Reward We designed some auxiliary rewards to help RL agents learn more consistently, and each task contains a task-specific bonus. In general, our reward design is goal-based and follows the same set of logic. For object-catching tasks, our reward is simply related to the difference between
the pose of the object and the target. For other tasks that require the hand to hold the object, our reward generally consists of three parts: the distance from the left hand to a left-hand grip point on the object, the distance from the right hand to a right-hand grip point on the object, and the distance from the object to the object’s target.
Cost Each task contains different constraints (e.g., the ball needs to be thrown to a specified height or a specified angle to prevent damage to other items; the robots need to clean the floor without hitting other furniture). The specific constraint design depends on the safety requirements of each task.
3.3 EVALUATION SUITE
TrustDeHands has a collection of 10+ different tasks. These tasks form an evaluation suite for benchmarking the performance of Safe RL algorithms. In this section, we describe six of these representative tasks (see Figure 1), and the remaining tasks are described in detail in Appendix B.
Safe Finger In this task, one hand needs to throw the ball to the other hand, and both hands except the same horizontal plane. We design two different constraints, including constraints on the minimum drive units, and constraints on the finger joints.
Hand Up Wall In this task, one hand needs to throw the ball at a certain height to the other hand. The constraint of this task is concerned with the magnitude of the force of the minimum drive unit.
Hand Over Wall In this task, one hand needs to throw the ball at a certain angle to the other hand. It is more concerned with the constraints in a certain behavioral paradigm and needs to take into account the collaboration of the whole hand-driven unit.
Pick Bottles There are five bottles in a tight row, and the dual hands need to pick up two of the bottles smoothly without touching the others.
Jenga The dual hands need to collaborate to extract the specified blocks from an unstable stacking structure and avoid breaking the rest of the blocks apart.
Clean House There is a broom, trash, and dustpan in this environment. We need to use both hands to manipulate the broom to sweep the trash into the dustpan. There will also be a chair as an obstacle on the way.
3.4 VISUAL INFORMATION
It is very difficult to obtain the state information of the robot in the real world. One way to solve this problem is to use the vision sensor as the input to train the policy. Therefore, we provide multiple modalities of visual information as input, including RGB, RGB-D, and point cloud, see Figure 2. It is generated using the camera in the Isaac Gym, and the pose and toward of the camera can be customized by the user to obtain the desired visual information. We also propose a point cloud parallel acceleration function to adapt Isaac Gym and provide an example of using it to train Hand Over task, see Appendix.
3.5 CUSTOMIZABLE DIVERSIFORM MANIPULATORS AND ADAPTATION CHALLENGE
There are more types of dexterous hands than the Shadow Hand, such as the Allegro Hand and TriFinger, and supporting other dexterous hands helps to advance research and community development. Therefore, in addition to the Shadow Hand, we also provide five other kinds of dexterous multi-finger hands in TrustDeHands. In addition, driving the base of the dexterous hand with a robotic arm not only matches the real-world setting but is also an important step toward sim-to-real transfer. Because it is very difficult to match the real dynamics of a free-floating hand, TrustDeHands provides a way to reduce the reality gap by adjusting the dynamics and physics parameters of the arm, which simplifies the deployment process from simulation to real-world applications.
Moreover, we offer a variety of arms and a variety of dexterous hand combinations, which has many benefits. For example, researchers can choose the hand they want according to their own conditions, which brings wider applicability to our benchmark. At the same time, we can use different arms and different hands to study the adaptability and generalization ability of policies, which challenges multi-task learning and meta learning research in the future. An schematic of this feature is shown in Figure 3.
4 EXPERIMENTS
4.1 SAFE RL ALGORITHMS IMPLEMENTATION
Based on their original papers or public code base, we reimplement eight algorithms (CPO (Achiam et al., 2017a), PCPO (Yang et al., 2020b), FOCOPS (Zhang et al., 2020b), P3O (Zhang et al., 2022), PPO-Lag (Ray et al., 2019a), TRPO-Lag (Ray et al., 2019a), CPPO-PID (Stooke et al., 2020b), and IPO (Liu et al., 2020)), covering major safe policy optimization algorithms. A brief introduction to each algorithm is given in Appendix E.
We abstract similar structure of the safe policy optimization algorithms, and modularize the code into interaction with environments, parallel sample collection, buffer storage and computation, algorithm core update, and auxiliary functionalities such as visualization and logger. Maximum abstraction and encapsulation take place at the implementation of algorithms core, where each algorithm inherits directly from its
base algorithm, thus only unique features have to be implemented and all other code can be reused. An overview of the core of algorithms and logger is shown in Figure 4.
For algorithms implementation, it is critical to ensure its correctness and reliability. To achieve this goal, we examine the implementation of our algorithms carefully. To test the performance of our implementation, we run the eight algorithms on 30 tasks (for a complete list of the tasks, please refer to Appendix F.1) contained in the four environment suites and present our experimental results for the reference of the community in Appendix F.2.
4.2 EVALUATION PROTOCOL
Metrics We define the following metrics to depict the safety performance of an agent in different tasks. (1) the average return of trajectories, Jr(θ); (2) the average cumulative cost of trajectories, Jc(θ). In Safe RL domain, for any two agents, the superiority of the agents is determined by the following priority comparisons. On the one hand, the agent that satisfies the constraint will definitely outperform the unconstrained one. On the other hand, two agents that satisfy the constraint are determined by comparing the magnitude of their cumulative returns.
Algorithms For the UnSafe RL algorithm we uniquely use PPO (Schulman et al., 2017), where the reward function contains no information about the auxiliary costs. For Safe RL algorithms, we evaluate the performance of PPO-Lag (Ray et al., 2019b), FOCOPS, P3O, and PCPO algorithms on TrustDeHands, and the remaining Safe RL algorithms we implemented are in our anonymous Github repository.
4.3 RESULTS
We mainly conduct three experiments and analyze the results in this section: 1) The performance of PPO, PPO-Lag, FOCOPS, P3O, CPPO-PID, and algorithms on six representative tasks 2) The performance of eight Safe RL algorithms on the Safe Finger task 3) The performance of point cloud RL on the Hand Over Wall task.
For 1), we evaluate the performance of PPO, PPO-Lag, FOCOPS, CPPO-PID, and P3O on six tasks; the remaining Safe RL algorithms we implemented are available in our anonymous GitHub repository. The performance of each algorithm is shown in Figure 5. It can be observed that PPO-Lag achieves high performance within the range allowed by the cost and is the best performing algorithm here. Comparing PPO and PPO-Lag, PPO-Lag performs similarly to PPO on the Jenga and Safe Finger tasks while keeping the cost constrained to a lower range, which indicates that the model has learned how to manipulate safely. A remarkable result is that on the Jenga and Pick Bottles tasks, the performance of PPO-Lag is far superior to PPO.
For 2), we tested all eight algorithms we implemented on the Safe Finger task, which is shown in Figure 1. The specific details can be found in Appendix C.
It can be seen that CPO makes almost no progress, and the PPO algorithm incurs a high cost. This may be due to the various approximations made in CPO, and it highlights the urgency of developing Safe RL methods suited to complex manipulation domains.
5 POTENTIAL RESEARCH PROBLEMS TO STUDY USING TRUSTDEHANDS
TrustDeHands provides ample opportunities to study trustworthy manipulation of dexterous hands based on Safe RL. We found that the primal-dual based approach (Boyd et al., 2004) result in great volatility in the update of lagrange multipliers. A potential research direction is to consider combining feedback control methods in control systems, such as PID (Ziegler et al., 1942; Ang et al., 2005), ADRC (Han, 2009), etc., to mitigate the instability and volatility of Lagrange multipliers in the learning process. Therefore, it would be interesting to combine control methods of complex systems with Safe RL methods to solve complex manipulation problems of dexterous hands.
Sim to real is an important research direction about transferring the simulation result to the real robot. Around this theme, our benchmark includes many of the robot arms and dexterous hands, which were accepted by many research labs. It is convenient for different researchers to choose their own arms and hands for training in the simulation. Meanwhile, the tasks in our benchmark, such as picking bottles1, Jenga2, etc., are meaningful in the real world but also needed to ensure safety if transferring the trained policy from simulation to the real world. So our benchmark can also be used to study how to perform sim to real more safely from the perspective of Safe RL.
Training policy with a state-based observation space is difficult for sim to real transfer because such inputs are not available in the real world. So it also makes sense to study the more readily available policy inputs in the real world, such as point clouds. Our environment supports a multimodal input such as visual and forces information, which can support research in this direction. We hope that our benchmark can serve as a tool to study the sim to real transfer of dexterous hands.
Finally, generalization is an important direction to explore, which is a potential strength of RL. TrustDeHands supports self-customization, enabling switching and linking different hands and arms to evaluate the generality of different algorithms. Users can use TrustDeHands as a platform for modification or secondary development to design richer and more challenging target tasks, and we hope that this work will contribute to the flourishing of the RL community.
6 CONCLUSION AND FUTURE WORK
In this work, we presented TrustDeHands, which is the first benchmark focused on safe dexterous manipulation. We standardize the safe policy optimization methods for solving CMDPs and introduce a unified, highly-optimized, extensible, and comprehensive algorithms re-implementation. We checked the correctness of our algorithms on the existing Safe RL benchmark and tested it on TrustDeHands. The results show that the Safe RL algorithm can better solve the safety problem in dexterous manipulation. For example, using Safe RL can grab the target bottle without touching other bottles, and avoid collision with obstacles when sweeping the floor. However, it is difficult for unconstrained RL algorithms to have this guarantee. These situations are very important for robots in real-world environments because RL-based methods tend to lead to unpredictable behaviors that are prone to danger and damage to robots.
Additionally, we support two features regarding visual policy input and various arms and dexterous hands. Some sort of visual input is becoming increasingly common in real-world RL-trained robots, and so benchmarks for this setting are important. Diverse arms and hands increase the applicability of our benchmarks and allow us to study policy generalization between different robots.
We believe that TrustDeHands can significantly accelerate the progress of future research on safe manipulation, facilitate the integration of reinforcement learning with robotic control, and will make a singular contribution to the reinforcement learning community.
B TASK SPECIFICATIONS
B.1 BASIC STATE SPACE AND ACTION SPACE
The state space dimension of each environment is up to 400 dimensions in total, and the action space dimension is up to 40 dimensions. All environments are goal-based, and each epoch will randomly reset the object’s starting pose and target pose to improve generalization. We only use the shadow hand and object state information as observation at present. The observation of all tasks is composed of three parts: the state information of the left and right hands, and the information of objects and
target. The state information of the left and right hands were the same for each task, including hand joint and finger positions, velocity, and force information. The state information of the object and goal are different for each task, which we will describe in the following. Table 4 shows the specific information of the left-hand and right-hand state.
B.2 SAFE FINGER
This environment contains two dexterous hands. At the beginning of each episode, a ball falls randomly around the right hand, and the two hands have to collaborate to place the ball to a given position. Since the target is out of the reach of the right hand, and the right hand cannot pass the ball to the left hand directly, a possible solution is that the right hand grabs the ball, throws it to the left hand; the left hand catches the ball, and puts it to the target. Note that the base of the hand is fixed.
Observations The 398-dimensional observational space for Hand Over task is shown in Table 5. It should be noted that since the base of the dual hands in this task is fixed, the observation of the dual hands is compared to the Table 4 of reduced 24 dimensions.
Actions The 40-dimensional action space for one hand in Safe Finger task is shown in Table 6.
Reward For timestep t, let xb,t be the position of the ball and xg,t be the position of the goal. We use dp,t to denote the positional distance between the ball and the goal dp,t = ∥xb,t − xg,t∥2. Let da,t denote the angular distance between the object and the goal, and the rotational difference is dr,t = 2arcsinmin{|da,t|, 1.0}. The reward is defined as follows, rt = exp{−0.2(αdp,t + dr,t)}, (3) where α is a constant balances positional and rotational rewards.
Cost In these tasks, we constrain the freedom of joints ②, ③ and ④ of forefinger (please refer to figure 6 (b)). Without the constraint, joints ② and ③ have freedom of [0◦, 90◦] and joint ④ of [−20◦, 20◦]. The safety tasks restrict joints ②, ③, and ④ within [22.5◦, 67.5◦], [22.5◦, 67.5◦], and [−10◦, 10◦] respectively. Let ang 2, ang 3, ang 4 be the angles of joints ②, ③, ④, and the cost is defined as:
ct = I(ang 2 ̸∈ [22.5◦, 67.5◦], or ang 3 ̸∈ [22.5◦, 67.5◦], or ang 4 ̸∈ [−10◦, 10◦]). (4)
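The per-step cost is thus a simple indicator on the three joint angles; a direct translation (angles in degrees, names illustrative) is:

```python
def safe_finger_cost(ang_2, ang_3, ang_4):
    """Indicator cost of Eq. (4)."""
    violated = not (22.5 <= ang_2 <= 67.5) or \
               not (22.5 <= ang_3 <= 67.5) or \
               not (-10.0 <= ang_4 <= 10.0)
    return 1.0 if violated else 0.0
```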
B.3 HAND UP WALL
Similarly, this environment is similar to the Hand Over Wall, the difference is that the wall in this environment only retains the lower half, so the ball needs to be thrown high to prevent it from hitting the wall, requiring different motion skill.
Observations The 398-dimensional observational space for Hand Up Wall task is shown in Table 7. It should be noted that since the base of the dual hands in this task is fixed, the observation of the dual hands is compared to the Table 4 of reduced 24 dimensions.
Actions The 40-dimensional action space for one hand in Hand Up Wall task is shown in Table 8.
Reward For timestep t, let xb,t be the position of the ball and xg,t be the position of the goal. We use dp,t to denote the positional distance between the ball and the goal dp,t = ∥xb,t − xg,t∥2. Let da,t denote the angular distance between the object and the goal, and the rotational difference is dr,t = 2arcsinmin{|da,t|, 1.0}. The reward is defined as follows, rt = exp{−0.2(αdp,t + dr,t)}, (5) where α is a constant balances positional and rotational rewards.
Cost The ball is thrown from the right hand to the left hand, and the curve of the ball throwing out is not consistent, and we set a wall with height in the middle of the two hands. This requires more delicate hand manipulation, and when the right hand does not throw the ball with the proper force or angle, it will be difficult to throw the ball over the wall. If the ball hits the wall, the cost is 1, otherwise it is 0. The size of the wall and the hole can be customized by the user.
B.4 HAND OVER WALL
This environment is similar to Safe Finger, except that it has a wall between each hand and a hole in the middle of the wall. We need to learn policy to keep the ball from hitting the wall during the toss.
Observations The 398-dimensional observational space for Hand Over Wall task is shown in Table 9. It should be noted that since the base of the dual hands in this task is fixed, the observation of the dual hands is compared to the Table 4 of reduced 24 dimensions.
Actions The 40-dimensional action space for one hand in Hand Over Wall task is shown in Table 10.
Reward For timestep t, let xb,t be the position of the ball and xg,t be the position of the goal. We use dp,t to denote the positional distance between the ball and the goal dp,t = ∥xb,t − xg,t∥2. Let da,t denote the angular distance between the object and the goal, and the rotational difference is dr,t = 2arcsinmin{|da,t|, 1.0}. The reward is defined as follows, rt = exp{−0.2(αdp,t + dr,t)}, (6) where α is a constant balances positional and rotational rewards.
Cost This constraint is more demanding than Wall Down, where we require the ball thrown to fit through a specified narrow hole. If the ball hits the wall, the cost is 1, otherwise it is 0. The size of the wall and the hole can be customized by the user.
B.5 PICK BOTTLES
This environment contains two hands, a table and five bottles. The five bottles were placed in a row on the table horizontally with very little space between them. We need to pick up two bottles with two dexterous hands, and not touch the bottle around it to cause possible damage.
Observations The 400-dimensional observational space for Pick Bottles task is shown in Table 11. It should be noted that since the base of the dual hands in this task is fixed, the observation of the dual hands is compared to the Table 4 of reduced 24 dimensions.
Actions The 52-dimensional action space for one hand in Pick Bottles task is shown in Table 12.
Reward The reward consists of three parts: the distance from the left hand to the left bottle cap, the distance from the right hand to the right bottle cap, and the height of the two bottles that need to be picked. The height of the two bottles that need to be picked is given by dheight. The position difference between the left hand to the left bottle cap dleft is given by dleft = ∥xlhand − xlbcap∥2.The position difference between the right hand to the right bottle cap dright is given by dright = ∥xrhand − xrbcap∥2. The reward is given by this specific formula:
r = dheight ∗ 20− dleft − dright (7)
Cost The constraint of this environment is that we can’t touch other bottles when we pick the bottle. When our hand, or the bottle we picked, touches other bottles, the cost is set to 1, otherwise it is 0.
B.6 JENGA
Jenga is a fitness game which is very suitable for Safe RL algorithm evaluation. Players take turns removing one block at a time from a tower made up of many blocks. In this environment, we need to remove the one we want from the 16 blocks without knocking over the others.
Observations The 411-dimensional observational space for the Jenga task is shown in Table 13. It should be noted that since the base of the dual hands in this task is fixed, the dual-hand observation is reduced by 24 dimensions compared to Table 4.
Actions The 52-dimensional action space for one hand in Jenga task is shown in Table 14.
Reward For timestep t, let xb,t as the position of the left middle finger, xg,t as the position of the left end of the object, and dp,t = ∥xb,t − xg,t∥2. Define dy,t as the y-axis direction of the position of the object center, the reward is defined as follows:
rt = 30 ∗ (dy,t + 0.6)− dp,t (8)
Cost The constraint of this environment is that we can not touch other blocks in the Jenda. The cost is 1 if all blocks move more than 0.01 cm, and 0 otherwise.
B.7 CLEAN HOUSE
This environment is in a scene we usually clean at home. We need to control the broom with both hands to sweep the trash from the ground into the dustpan without touching other furniture (e.g. chairs).
Observations The 431-dimensional observational space for Clean House task is shown in Table 15. It should be noted that since the base of the dual hands in this task is fixed, the observation of the dual hands is compared to the Table 4 of reduced 24 dimensions.
Actions The 52-dimensional action space for one hand in Clean House task is shown in Table 16.
Reward The reward consists of four parts: the distance from the left hand to the left handle position of the broom, the distance from the right hand to the right handle position of the broom, the object (trash) position to the bottom of broom position, and the distance from the object to the target (dustpan) point. The distance from the object to the target point is given by dtarget. The position difference from the left hand to the left handle position of the broom is given by dleft. The position difference from the right hand to the right handle position of the broom is given by dright. The object position to the bottom of broom position is given by dbottom. The reward is given by this specific formula:
r = 50− dtarget ∗ 10− 5 ∗ dleft − 5 ∗ dright (9)
Cost The constraint of this environment is that we can not damage other furniture when we sweep the floor. So there is a chair in the path of the trash and the dustpan. The cost is 1 when the broom touches the chair and make it move, and 0 otherwise.
C FOUR ENVIRONMENTS IN SAFE FINGER.
All environments come from Safe Finger. The difference between Safe Joint and Safe Finger is whether the constraint is imposed on a single joint or on the whole finger, as described below:
Safety Joint. In these tasks, we constrain the freedom of joint ④ of forefinger (please refer to Figure 6 (a) and (f)). Without the constraint, joint ④ has freedom of [−20◦, 20◦]. The safety tasks restrict joint ④ within [−10◦, 10◦]. Let ang 4 be the angle of joint ④, and the cost is defined as:
ct = I(ang 4 ̸∈ [−10◦, 10◦]). (10)
Safety Finger. In these tasks, we constrain the freedom of joints ②, ③ and ④ of forefinger (please refer to Figure 6 (b) and (f)). Without the constraint, joints ② and ③ have freedom of [0◦, 90◦] and joint ④ of [−20◦, 20◦]. The safety tasks restrict joints ②, ③, and ④ within [22.5◦, 67.5◦], [22.5◦, 67.5◦], and [−10◦, 10◦] respectively. Let ang 2, ang 3, ang 4 be the angles of joints ②, ③,
④, and the cost is defined as: ct = I(ang 2 ̸∈ [22.5◦, 67.5◦], or ang 3 ̸∈ [22.5◦, 67.5◦], or ang 4 ̸∈ [−10◦, 10◦]). (11)
Hand over refers to the setting in which two Shadow Hands, with palms facing up and opposite each other, pass an object between them in Safe Finger; it also covers the case in which the object must be thrown from a vertically oriented hand to the palm-up hand. For specific information, refer to Appendix B.2.
D POINT CLOUD
We replace the object state information with point clouds in the case of 128 parallel environments. The point cloud is captured by the depth camera and downsampled to 2048 points. Features are extracted with PointNet (Qi et al., 2017) into a 128-dimensional vector and concatenated with the other observations. It can be seen that, with the same number of episodes and environments, the performance of point-cloud input is not as good as full-state input, although it still reaches a reasonable level. Moreover, on the same RTX 3090 GPU, point-cloud RL runs at only 200+ fps, whereas the full-state setting reaches 30000+. In fact, we can only open up to 128 environments when using point clouds, which is due to Isaac Gym's limited parallel support for cameras. We further refined the method to enhance the parallelization of the point cloud extraction in order to close this gap. Compared to Isaac Gym's original code, the speedup is 1.46 times, going from 232 fps to 339 fps.
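A minimal PointNet-style encoder illustrating this pipeline is sketched below (per-point MLP followed by a symmetric max pool, then concatenation with the proprioceptive state); the benchmark's actual encoder follows Qi et al. (2017), so the layer sizes here are illustrative.

```python
import torch
import torch.nn as nn

class PointCloudEncoder(nn.Module):
    def __init__(self, out_dim=128):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(3, 64), nn.ReLU(),
                                 nn.Linear(64, 128), nn.ReLU(),
                                 nn.Linear(128, out_dim))

    def forward(self, points, state):               # points: (B, 2048, 3), state: (B, D)
        feat = self.mlp(points).max(dim=1).values   # order-invariant max pooling over points
        return torch.cat([feat, state], dim=-1)     # fused observation for the policy
```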
E DETAILS OF BENCHMARK ALGORITHMS
In this section, we review the key steps of the typical Safe RL algorithms implemented in this benchmark, which include CPO (Achiam et al., 2017a), PCPO (Yang et al., 2020b), FOCOPS (Zhang et al., 2020b), P3O (Zhang et al., 2022), PPO-Lag (Ray et al., 2019a), TRPO-Lag (Ray et al., 2019a), CPPO-PID (Stooke et al., 2020b), and IPO (Liu et al., 2020). We implemented all of these algorithms and checked their correctness, but only evaluated some of them on TrustDeHands. We first give a brief introduction to each algorithm below, together with the hyperparameters used in our evaluation, and then verify our re-implementations on other Safe RL benchmarks.
E.1 CPO (ACHIAM ET AL., 2017A)
For a given policy $\pi_{\theta_k}$, CPO computes the new policy $\pi_{\theta_{k+1}}$ as follows:
$$\pi_{\theta_{k+1}} = \arg\max_{\pi_\theta \in \Pi_\theta} \mathbb{E}_{s \sim d^{\rho_0}_{\pi_{\theta_k}}(\cdot),\, a \sim \pi_\theta(\cdot|s)} \left[ A_{\pi_{\theta_k}}(s,a) \right] \quad (12)$$
$$\text{s.t.}\quad J_c(\pi_{\theta_k}) + \frac{1}{1-\gamma} \mathbb{E}_{s \sim d^{\rho_0}_{\pi_{\theta_k}}(\cdot),\, a \sim \pi_\theta(\cdot|s)} \left[ A^c_{\pi_{\theta_k}}(s,a) \right] \leq b, \quad (13)$$
$$\bar{D}_{\mathrm{KL}}(\pi_\theta, \pi_{\theta_k}) = \mathbb{E}_{s \sim d^{\rho_0}_{\pi_{\theta_k}}(\cdot)} \left[ \mathrm{KL}(\pi_\theta, \pi_{\theta_k})[s] \right] \leq \delta. \quad (14)$$
It is impractical to solve the problem (12) directly due to the computational cost. Achiam et al. (2017a) suggest finding convex approximations to replace the terms $A_{\pi_{\theta_k}}(s,a)$ and $\bar{D}_{\mathrm{KL}}(\pi_\theta, \pi_{\theta_k})$ in Eqs. (12)-(14). Concretely, Achiam et al. (2017a) use the first-order Taylor expansion of $J(\pi_\theta)$ to replace the objective (12):
$$\frac{1}{1-\gamma} \mathbb{E}_{s \sim d^{\rho_0}_{\pi_{\theta_k}}(\cdot),\, a \sim \pi_{\theta_k}(\cdot|s)} \left[ \frac{\pi_\theta(a|s)}{\pi_{\theta_k}(a|s)} A_{\pi_{\theta_k}}(s,a) \right] = J(\pi_\theta) - J(\pi_{\theta_k}) \approx (\theta - \theta_k)^\top \nabla_\theta J(\pi_\theta).$$
Similarly, Achiam et al. (2017a) use the following approximations to turn the constrained policy optimization (12)-(14) into a convex problem:
$$\frac{1}{1-\gamma} \mathbb{E}_{s \sim d^{\rho_0}_{\pi_{\theta_k}}(\cdot),\, a \sim \pi_{\theta_k}(\cdot|s)} \left[ \frac{\pi_\theta(a|s)}{\pi_{\theta_k}(a|s)} A^c_{\pi_{\theta_k}}(s,a) \right] \approx (\theta - \theta_k)^\top \nabla_\theta J_c(\pi_\theta), \quad (15)$$
$$\bar{D}_{\mathrm{KL}}(\pi_\theta, \pi_{\theta_k}) \approx (\theta - \theta_k)^\top H (\theta - \theta_k), \quad (16)$$
where $H$ is the Hessian of $\bar{D}_{\mathrm{KL}}(\pi_\theta, \pi_{\theta_k})$, i.e.,
$$H[i,j] := \frac{\partial^2}{\partial \theta_i \partial \theta_j}\, \mathbb{E}_{s \sim d^{\rho_0}_{\pi_{\theta_k}}(\cdot)} \left[ \mathrm{KL}(\pi_\theta, \pi_{\theta_k})[s] \right],$$
and Eq. (16) is the second-order approximation of (14).
Let $\lambda^\star, \nu^\star$ be the solution of the dual problem
$$\lambda^\star, \nu^\star = \arg\max_{\lambda \geq 0,\, \nu \geq 0} \left\{ -\frac{1}{2\lambda} \left( g^\top H^{-1} g - 2\nu r + s\nu^2 \right) + \nu c - \frac{\lambda \delta}{2} \right\},$$
where $g = \nabla_\theta \mathbb{E}_{s \sim d^{\rho_0}_{\pi_{\theta_k}}(\cdot),\, a \sim \pi_\theta(\cdot|s)}\left[ A_{\pi_{\theta_k}}(s,a) \right]$, $a = \nabla_\theta \mathbb{E}_{s \sim d^{\rho_0}_{\pi_{\theta_k}}(\cdot),\, a \sim \pi_\theta(\cdot|s)}\left[ A^c_{\pi_{\theta_k}}(s,a) \right]$, $r = g^\top H^{-1} a$, $s = a^\top H^{-1} a$, and $c = J_c(\pi_{\theta_k}) - b$. Finally, CPO updates the parameters using the conjugate-gradient method: if the approximation to CPO is feasible, then
$$\theta_{k+1} = \theta_k + \frac{1}{\lambda^\star} H^{-1} (g - \nu^\star a),$$
otherwise
$$\theta_{k+1} = \theta_k - \sqrt{\frac{2\delta}{a^\top H^{-1} a}}\, H^{-1} a.$$
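Putting the two cases together, the update direction can be written compactly as below, assuming the dual solution $(\lambda^\star, \nu^\star)$ and the conjugate-gradient products $H^{-1}g$, $H^{-1}a$ are already available; a backtracking line search on the non-approximated objective and constraints is typically applied afterwards. The function is an illustrative sketch rather than the full case analysis of Achiam et al. (2017a).

```python
import numpy as np

def cpo_update_direction(Hinv_g, Hinv_a, lam_star, nu_star, s, delta, feasible):
    if feasible:
        # trust-region step trading off reward improvement and constraint satisfaction
        return (Hinv_g - nu_star * Hinv_a) / lam_star
    # infeasible case: recovery step that purely decreases the constraint value
    return -np.sqrt(2.0 * delta / s) * Hinv_a
```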
E.2 PCPO (YANG ET AL., 2020B)
Projection-Based Constrained Policy Optimization (PCPO) is an iterative method for optimizing policies in a two-step process: the first step performs a local reward improvement update, while the second step reconciles any constraint violation by projecting the policy back onto the constraint set.
Reward improvement:
$$\pi_{\theta_{k+\frac{1}{2}}} = \arg\max_{\pi_\theta \in \Pi_\theta} \mathbb{E}_{s \sim d^{\rho_0}_{\pi_{\theta_k}}(\cdot),\, a \sim \pi_\theta(\cdot|s)} \left[ A_{\pi_{\theta_k}}(s,a) \right],$$
$$\text{s.t.}\quad \bar{D}_{\mathrm{KL}}(\pi_\theta, \pi_{\theta_k}) = \mathbb{E}_{s \sim d^{\rho_0}_{\pi_{\theta_k}}(\cdot)} \left[ \mathrm{KL}(\pi_\theta, \pi_{\theta_k})[s] \right] \leq \delta;$$
Projection:
$$\pi_{\theta_{k+1}} = \arg\min_{\pi_\theta \in \Pi_\theta} D\left( \pi_\theta,\ \pi_{\theta_{k+\frac{1}{2}}} \right),$$
$$\text{s.t.}\quad J_c(\pi_{\theta_k}) + \frac{1}{1-\gamma} \mathbb{E}_{s \sim d^{\rho_0}_{\pi_{\theta_k}}(\cdot),\, a \sim \pi_\theta(\cdot|s)} \left[ A^c_{\pi_{\theta_k}}(s,a) \right] \leq b.$$
Then, following CPO (Achiam et al., 2017a), Yang et al. (2020b) use a convex approximation to the original problem and derive the update rule
$$\theta_{k+1} = \theta_k + \sqrt{\frac{2\delta}{g^\top H^{-1} g}}\, H^{-1} g - \max\left\{ 0,\ \frac{\sqrt{\frac{2\delta}{g^\top H^{-1} g}}\, a^\top H^{-1} g + c}{a^\top L^{-1} a} \right\} L^{-1} a,$$
where $L = I$ if $D$ is the $\ell_2$-norm, and $L = H$ if $D$ is the KL-divergence.
E.3 FOCOPS (ZHANG ET AL., 2020B)
Zhang et al. (2020b) propose the First Order Constrained Optimization in Policy Space (FOCOPS) that is a two-step approach. We present it as follows.
Step1: Finding the optimal update policy.
Firstly, for a given policy πθk, FOCOPS finds an optimal update policy π⋆ by solving the optimization problem (12)-(14) in the non-parameterized policy space.
$$\pi^\star = \arg\max_{\pi \in \Pi} \mathbb{E}_{s \sim d^{\rho_0}_{\pi_{\theta_k}}(\cdot),\, a \sim \pi(\cdot|s)} \left[ A_{\pi_{\theta_k}}(s,a) \right] \quad (17)$$
$$\text{s.t.}\quad J_c(\pi_{\theta_k}) + \frac{1}{1-\gamma} \mathbb{E}_{s \sim d^{\rho_0}_{\pi_{\theta_k}}(\cdot),\, a \sim \pi(\cdot|s)} \left[ A^c_{\pi_{\theta_k}}(s,a) \right] \leq b, \quad (18)$$
$$\bar{D}_{\mathrm{KL}}(\pi, \pi_{\theta_k}) = \mathbb{E}_{s \sim d^{\rho_0}_{\pi_{\theta_k}}(\cdot)} \left[ \mathrm{KL}(\pi, \pi_{\theta_k})[s] \right] \leq \delta. \quad (19)$$
If πθk is feasible, then the optimal policy for (17)-(19) takes the following form:
$$\pi^\star(a|s) = \frac{\pi_{\theta_k}(a|s)}{Z_{\lambda,\nu}(s)} \exp\left( \frac{1}{\lambda} \left( A_{\pi_{\theta_k}}(s,a) - \nu A^c_{\pi_{\theta_k}}(s,a) \right) \right), \quad (20)$$
where $Z_{\lambda,\nu}(s)$ is the partition function which ensures that (20) is a valid probability distribution, and $\lambda$ and $\nu$ are solutions to the optimization problem
$$\min_{\lambda,\, \nu \geq 0}\ \lambda\nu + \nu \tilde{b} + \lambda\, \mathbb{E}_{s \sim d^{\rho_0}_{\pi_{\theta_k}}(\cdot),\, a \sim \pi^\star(\cdot|s)} \left[ Z_{\lambda,\nu}(s) \right],$$
with $\tilde{b} = (1-\gamma)\left( b - J_c(\pi_{\theta_k}) \right)$.
Step 2: Projection.
Then, FOCOPS projects the policy found in the previous step back into the parameterized policy space Πθ by solving for the closest policy πθ ∈ Πθ to π⋆ in order to obtain πθk+1 :
$$\pi_{\theta_{k+1}} = \arg\min_{\pi_\theta \in \Pi_\theta} \mathbb{E}_{s \sim d^{\rho_0}_{\pi_{\theta_k}}(\cdot)} \left[ \mathrm{KL}(\pi_\theta, \pi^\star)[s] \right].$$
The resulting parameter θk+1 is usually obtained by stochastic gradient descent on this objective.
E.4 PPO-LAG
The Lagrangian approach is a standard way to solve CMDP (1), which is also known as primal-dual policy optimization:
$$(\pi^\star, \lambda^\star) = \arg\min_{\lambda \geq 0} \max_{\pi \in \Pi_\theta} \left\{ J(\pi) - \lambda \left( J_c(\pi) - b \right) \right\}. \quad (21)$$
TRPO-Lag and PPO-Lag combine the Lagrangian approach with TRPO and PPO, respectively. Concretely, PPO-Lag uses the following clipped term in place of $J(\pi)$ in (21):
$$\mathcal{L}^r_{\mathrm{clip}}(\pi_\theta) = \mathbb{E}_{s \sim d^{\rho_0}_{\pi_k}(\cdot),\, a \sim \pi_k(\cdot|s)} \left[ -\min\left\{ \frac{\pi_\theta(a|s)}{\pi_k(a|s)} A_{\pi_k}(s,a),\ \mathrm{clip}\left( \frac{\pi_\theta(a|s)}{\pi_k(a|s)}, 1-\epsilon, 1+\epsilon \right) A_{\pi_k}(s,a) \right\} \right],$$
where $\pi_k$ is short for $\pi_{\theta_k}$. Replacing $A_{\pi_k}(s,a)$ with the cost advantage $A^c_{\pi_k}(s,a)$ yields the analogous cost term $\mathcal{L}^c_{\mathrm{clip}}$:
$$\mathcal{L}^c_{\mathrm{clip}}(\pi_\theta) = \mathbb{E}_{s \sim d^{\rho_0}_{\pi_k}(\cdot),\, a \sim \pi_k(\cdot|s)} \left[ -\min\left\{ \frac{\pi_\theta(a|s)}{\pi_k(a|s)} A^c_{\pi_k}(s,a),\ \mathrm{clip}\left( \frac{\pi_\theta(a|s)}{\pi_k(a|s)}, 1-\epsilon, 1+\epsilon \right) A^c_{\pi_k}(s,a) \right\} \right].$$
Then, PPO-Lag updates the policy as follows:
$$(\pi_{k+1}, \lambda_{k+1}) = \arg\min_{\lambda \geq 0} \max_{\pi_\theta \in \Pi_\theta} \left\{ \mathcal{L}^r_{\mathrm{clip}}(\pi_\theta) - \lambda \left( \mathcal{L}^c_{\mathrm{clip}}(\pi_\theta) - b \right) \right\}. \quad (22)$$
All of the above terms can be estimated from samples collected by $\pi_k$. PPO-Lag then updates the policy and the multiplier with a first-order optimizer as follows:
$$\theta_{k+1} = \theta_k + \eta \left. \frac{\partial}{\partial \theta} \left( \mathcal{L}^r_{\mathrm{clip}}(\pi_\theta) - \lambda \left( \mathcal{L}^c_{\mathrm{clip}}(\pi_\theta) - b \right) \right) \right|_{\theta=\theta_k,\, \lambda=\lambda_k}, \quad (23)$$
$$\lambda_{k+1} = \left[ \lambda_k + \eta \left( \mathcal{L}^c_{\mathrm{clip}}(\pi_\theta) - b \right) \right]_+ \Big|_{\theta=\theta_k}, \quad (24)$$
where $\eta > 0$ is the step size and $[\cdot]_+$ denotes projection onto the non-negative reals.
E.5 TRPO-LAG
TRPO-Lag follows a similar idea but adapts it to TRPO, replacing $J(\pi_\theta)$ by its first-order expansion
$$J(\pi_\theta) \approx J(\pi_{\theta_k}) + (\theta - \theta_k)^\top \nabla_\theta J(\pi_\theta) =: \mathcal{L}^r(\pi_\theta). \quad (25)$$
Similarly,
$$J_c(\pi_\theta) \approx J_c(\pi_{\theta_k}) + (\theta - \theta_k)^\top \nabla_\theta J_c(\pi_\theta) =: \mathcal{L}^c(\pi_\theta), \quad (26)$$
and
$$\bar{D}_{\mathrm{KL}}(\pi_\theta, \pi_{\theta_k}) \approx (\theta - \theta_k)^\top H (\theta - \theta_k), \quad (27)$$
where $H$ is the Hessian of $\bar{D}_{\mathrm{KL}}(\pi_\theta, \pi_{\theta_k})$, i.e.,
$$H[i,j] := \frac{\partial^2}{\partial \theta_i \partial \theta_j}\, \mathbb{E}_{s \sim d^{\rho_0}_{\pi_{\theta_k}}(\cdot)} \left[ \mathrm{KL}(\pi_\theta, \pi_{\theta_k})[s] \right].$$
Then, TRPO-Lag updates the policy as follows:
$$(\pi_{k+1}, \lambda_{k+1}) = \arg\min_{\lambda \geq 0} \max_{\pi_\theta \in \Pi_\theta} \left\{ \mathcal{L}^r(\pi_\theta) - \lambda \left( \mathcal{L}^c(\pi_\theta) - b \right) \right\} \Big|_{\theta=\theta_k,\, \lambda=\lambda_k}, \quad (28)$$
where the policy parameter $\theta$ satisfies the trust-region condition $(\theta - \theta_k)^\top H (\theta - \theta_k) \leq \delta$.
E.6 P3O (ZHANG ET AL., 2022)
P3O solves the cumbersome constrained policy iteration via a single minimization of an equivalent unconstrained problem as follows,
$$\pi_{k+1} = \arg\min_{\pi \in \Pi_\theta} \left\{ \mathbb{E}_{s \sim d^{\rho_0}_{\pi_k}(\cdot),\, a \sim \pi_k(\cdot|s)} \left[ \frac{\pi(a|s)}{\pi_k(a|s)} A_{\pi_k}(s,a) \right] + \kappa B(\pi, b) \right\}, \quad (29)$$
where κ is a positive scalar, and the penalty term B(π, b) is defined as follows,
$$B(\pi, b) = \max\left\{ 0,\ \mathbb{E}_{s \sim d^{\rho_0}_{\pi_k}(\cdot),\, a \sim \pi_k(\cdot|s)} \left[ \frac{\pi(a|s)}{\pi_k(a|s)} A^c_{\pi_k}(s,a) \right] + (1-\gamma)\left( J_c(\pi_k) - b \right) \right\}. \quad (30)$$
P3O utilizes a simple yet effective penalty approach to eliminate the cost constraint and removes the trust-region constraint via the clipped surrogate objective.
For the practical implementation, P3O considers the following optimization objective:
$$\mathcal{L}^{\mathrm{P3O}}(\theta) = \mathcal{L}^{r}_{\mathrm{P3O}}(\theta) + \kappa \max\left\{ 0,\ \mathcal{L}^{c}_{\mathrm{P3O}}(\theta) \right\}, \quad (31)$$
where
$$\mathcal{L}^{r}_{\mathrm{P3O}}(\theta) = \mathbb{E}\left[ -\min\left\{ \frac{\pi_\theta(a|s)}{\pi_k(a|s)} A_{\pi_k}(s,a),\ \mathrm{clip}\left( \frac{\pi_\theta(a|s)}{\pi_k(a|s)}, 1-\epsilon, 1+\epsilon \right) A_{\pi_k}(s,a) \right\} \right],$$
$$\mathcal{L}^{c}_{\mathrm{P3O}}(\theta) = \mathbb{E}\left[ \max\left\{ \frac{\pi_\theta(a|s)}{\pi_k(a|s)} A^c_{\pi_k}(s,a),\ \mathrm{clip}\left( \frac{\pi_\theta(a|s)}{\pi_k(a|s)}, 1-\epsilon, 1+\epsilon \right) A^c_{\pi_k}(s,a) \right\} + (1-\gamma)\left( J_c(\pi_k) - b \right) \right],$$
and $\mathbb{E}[\cdot]$ is short for $\mathbb{E}_{s \sim d^{\rho_0}_{\pi_k}(\cdot),\, a \sim \pi_k(\cdot|s)}[\cdot]$. All of the terms in (31) can be estimated from samples collected by $\pi_k$.
In each round, P3O chooses the penalty coefficient adaptively according to the rule
$$\kappa \leftarrow \min\{\rho \kappa,\ \kappa_{\max}\}, \quad (32)$$
where $\rho > 1$ and $\kappa_{\max}$ is a positive scalar.
Finally, P3O updates the policy parameter as follows,
$$\theta \leftarrow \theta - \eta \frac{\partial}{\partial \theta} \mathcal{L}^{\mathrm{P3O}}(\theta). \quad (33)$$
E.7 IPO (LIU ET AL., 2020)
IPO augments the objective with a logarithmic barrier function (Nemirovski, 2004) to learn a safe policy. Concretely, IPO updates the policy as
$$\pi_{k+1} = \arg\max_{\pi \in \Pi_\theta} \left\{ \mathcal{L}^r_{\mathrm{clip}}(\pi) + \phi(\pi) \right\}, \quad (34)$$
where $\mathcal{L}^r_{\mathrm{clip}}(\pi)$ is the clipped surrogate objective and $\phi(\pi)$ is the logarithmic barrier function associated with the CMDP constraint,
$$\phi(\pi) = \frac{1}{m} \log\left( b - J_c(\pi) \right), \quad (35)$$
where m > 0 is a hyper-parameter that needs to be tuned.
E.8 CPPO-PID (STOOKE ET AL., 2020B)
CPPO-PID (Stooke et al., 2020b) also uses the primal-dual policy optimization method to solve the CMDP problem,
$$(\pi^\star, \lambda^\star) = \arg\min_{\lambda \geq 0} \max_{\pi \in \Pi_\theta} \left\{ J(\pi) - \lambda \left( J_c(\pi) - b \right) \right\}. \quad (36)$$
The main difference between CPPO-PID and the previous Lagrangian methods lies in the multiplier update: instead of the gradient-ascent rule used above, CPPO-PID applies a PID control technique (Johnson & Moradi, 2005) to update the Lagrange multiplier.
First, CPPO-PID updates the parameter θ as follows,
θk+1 = θk + η ∂/∂θ ( L_r^clip(πθ) − λ( L_c^clip(πθ) − b ) ) |_{θ=θk, λ=λk},   (37)

where η > 0 is the step size.
Then, CPPO-PID updates the parameter λ with the PID method (Algorithm 2 in Stooke et al. (2020b)),

λk+1 = PID(KP, KI, KD, πθk, n),   (38)

where KP, KI, KD are the PID gains that need to be tuned, and n is the iteration number.
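A minimal sketch of a PID-controlled Lagrange multiplier in the spirit of Eq. (38) is given below. The state kept between calls (integral and previous cost) and the default gains are illustrative; the exact controller in Stooke et al. (2020b) includes additional details such as smoothing of the derivative term.

```python
class PIDLagrangian:
    """Keeps lambda >= 0 and drives the episodic cost toward the limit b."""

    def __init__(self, kp=0.1, ki=0.01, kd=0.01, cost_limit=25.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.cost_limit = cost_limit
        self.integral = 0.0
        self.prev_cost = 0.0

    def update(self, ep_cost):
        error = ep_cost - self.cost_limit            # constraint violation J_c - b
        self.integral = max(0.0, self.integral + error)
        derivative = max(0.0, ep_cost - self.prev_cost)
        self.prev_cost = ep_cost
        lam = self.kp * error + self.ki * self.integral + self.kd * derivative
        return max(0.0, lam)
```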
F CORRECTNESS VERIFICATION
F.1 TASK LIST
Below we list the safety environments that provide tasks for Safe RL. Three of them are popular existing suites: MuJoCo-Velocity (Todorov et al., 2012), Safety-Gym (Ray et al., 2019a), and Bullet-Safety-Gym (Gronauer, 2022). We test the 8 algorithms on 26 tasks in these safety environments to verify the correctness of the re-implementations provided in TrustDeHands. The complete list of these tasks is shown in Table 17.
F.2 TASK PERFORMANCE
We show the performance of our re-implementations on these tasks in Tables 18, 19, 20, 21, 22, and 23. | 1. What is the main contribution of the paper regarding safe RL tasks with dexterous robotic hands?
2. What are the strengths and weaknesses of the proposed benchmark and algorithm implementations?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. Do you have any questions or concerns regarding the paper, such as task design, motivation, comparisons, and environment support? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
The main contribution is that it proposes a set of safe RL tasks with an emphasis on using dexterous robotic hands in Isaac Gym. While it's heading in an important direction, the quality of the paper is rather unsatisfying.
Strengths And Weaknesses
Strength:
Safe RL is important in practical RL applications. How to control dexterous hands is also an important research challenge in the robotics community. The paper chooses a good direction to dive into.
The paper reimplements many RL algorithms for the benchmark. I appreciate the amount of effort put into this.
Weaknesses:
The paper writing can be improved substantially. The logic in many paragraphs is not clear. Many descriptions are vague, unclear, or even inaccurate. For example, in Section 3.3, it says TrustDeHands has 10+ tasks, but I can only find the description of six tasks in the paper. In section 3.5, Becasue it is very difficult to match the real dynamics of the flying hand, the TrustDeHands provide a way to reduce the reality gap by adjusting the dynamics and physics parameters of the arm, which simplifies the deployment process from simulation to the real world applications.. This is not the paper's contribution. Any physics simulator allows users to set or change the dynamics parameters. And simply doing this does not simplify any sim-to-real deployment process.
The motivation for designing a new benchmark on safe RL is unclear to me. The introduction did not do a good job in motivating this properly. It somewhat says safe RL is important, but why do we need a new benchmark? Why cannot we just use existing safe RL environments? Why do we have to do safe RL with dexterous hands? If it's about the algorithm implementation, then it's not part of the environments, so is the paper trying to say that they provide a set of benchmark implementations of different safe RL algorithms? I would encourage the authors to separate the discussions on the algorithms and the environments/tasks.
The benchmark tasks are not well-designed. "Hand over wall" and "Hand up wall" are very similar tasks. Are there any design principles that the authors use when choosing the tasks? Otherwise, one can easily spawn many "different tasks" by changing what the wall looks like. The task of "Clean House" looks too different from a real-world scenario.
The details of the task descriptions are missing, especially what things are randomized and how much randomization is used during a reset in each task. This is critical to understand how difficult the tasks are.
The algorithm implementations are tested on some previous public benchmarks, but the paper does not seem to compare their implementations with other SOTA implementations. What I mean is that the paper seems only to provide the performance of the algorithms on their implementation but does not compare it to the performance achieved by other well-known implementations of the same algorithms.
The paper says that there are 10+ tasks, but I only found six tasks in the paper. The paper also says it supports fix dexterous hands and eight different robot arms as shown in Figure 3. However, the pictures of the hands and arms used in Figure 3 seem to be pictures of the real products. I find it a bit hard to be convinced that all these are well-supported in the benchmark if I only see Figure 3. Can authors provide pictures of the actual environments with these hands and arms?
The introduction claims that the benchmark provides contact force information that can be used as part of policy input. I don't seem to find any detailed description of how the contact force information is provided/implemented in Isaac Gym. As far as I know, the public version of Isaac Gym does not seem to support querying the individual contact information between two objects.
I am not sure why the authors attach the full paper again in the supplementary materials. I would suggest the authors put some videos of the benchmark tasks in the supplementary materials so that reviewers can get a better sense of the tasks.
Clarity, Quality, Novelty And Reproducibility
Paper writing needs a substantial amount of work. |
ICLR | Title
HR-TD: A Regularized TD Method to Avoid Over-Generalization
Abstract
Temporal Difference learning with function approximation has been widely used recently and has led to several successful results. However, compared with the original tabular-based methods, one major drawback of temporal difference learning with neural networks and other function approximators is that they tend to over-generalize across temporally successive states, resulting in slow convergence and even instability. In this work, we propose a novel TD learning method, Hadamard product Regularized TD (HR-TD), that reduces over-generalization and thus leads to faster convergence. This approach can be easily applied to both linear and nonlinear function approximators. HR-TD is evaluated on several linear and nonlinear benchmark domains, where we show improvement in learning behavior and performance.
1 INTRODUCTION
Temporal Difference (TD) learning is one of the most important paradigms in Reinforcement Learning (Sutton and Barto, 1998). Techniques based on combining TD learning with nonlinear function approximators and stochastic gradient descent, such as deep networks, have led to significant breakthroughs in large-scale problems to which these methods can be applied (Mnih et al., 2015; Silver et al., 2016; Schulman et al., 2015).
At its heart, the TD learning update is straightforward. v(s) estimates the value of being in a state s. After an action a that transitions the agent from s to next state s′, v(s) is altered to be closer to the (discounted) estimated value of s′, v(s′) (plus any received reward, r). The difference between these estimated values is called the temporal difference error (TD error) and is typically denoted as δ. Formally, δ = r + γv(s′)− v(s), where γ is the discount factor, and r + γv(s′) is known as the TD target.
When states are represented individually (the tabular case), v(s) can be altered independently from v(s′) using the update rule v(s) ← v(s) + αδ, where α is the learning rate. In fully deterministic environments, α can be set to 1, thus causing v(s) to change all the way to the TD target. Otherwise, in a stochastic environment, α is set less than 1 so that v(s) only moves part of the way towards the TD target, thus avoiding over-generalization from a single example. When, on the other hand, states are represented with a function approximator, as is necessary in large or continuous environments, v(s) can no longer be updated independently from v(s′). That is because s and s′ are likely to be similar (assuming actions have local effects), any change to v(s) is likely to also alter v(s′). While such generalization is desirable in principle, it also has the unintended consequence of changing the TD target, which in turn can cause the TD update to lead to an increase in the TD error between s and s′. This unintended consequence can be seen as a second form of over-generalization: one that can be much more difficult to avoid.
Past work has identified this form of over-generalization in RL, has observed that it is particularly relevant in methods that use neural network function approximators such as DQN (Mnih et al., 2015), and has proposed initial solutions (Durugkar and Stone, 2017; Pohlen et al., 2018). In this paper, we present a deeper analysis of the reasons for this form of over-generalization and introduce a novel learning algorithm termed HR-TD, based on the recursive proximal mapping formulation of TD learning (Bertsekas, 2011), which offers a mathematical framework for parameter regularization that allows one to control for this form of over-generalization. Empirical results across multiple
domains demonstrate that our novel algorithm learns more efficiently (from fewer samples) than prior approaches.
The rest of the paper is organized as follows. Section 2 offers a brief background on TD learning, the over-generalization problem, and optimization techniques used in the derivation of our algorithm. In Section 3, we discuss the state-of-the-art research in this direction. The motivation and the design of our algorithm are presented in Section 4. Finally, the experimental results of Section 5 validate the effectiveness of the proposed algorithm.
2 BACKGROUND
This section builds on the notation introduced in Section 1 to specify the problem of interest in full detail. We introduce the background for TD learning, over-generalization, and proximal mapping, which are instrumental in the problem formulation and algorithm design.
2.1 REINFORCEMENT LEARNING AND OVER-GENERALIZATION
Reinforcement Learning problems are generally defined as Markov Decision Processes (MDPs). We use the definition and notation as used in Sutton and Barto (2017), unless otherwise specified. In this paper, we focus on domains with large or continuous state spaces such that function approximation is needed. We define the value estimate of state s with parameter θ when following policy π as, vπ(s|θ) = Eπ [ Rt + γRt+1 + γ 2Rt+2 + . . . |St = s ] . Here Rt is the random variable associated with a reward at time t, and rt is used as an instantiation of this random variable. The optimal (true) value function v∗π satisfies the Bellman equation given as v ∗ π(s|θ) = Eπ [Rt + γv∗π(s′|θ)]. During TD learning, the estimated value function is altered to try to make this property hold. In effect, state values are updated by bootstrapping off of the estimated value of the predicted next states.
We focus on 1-step TD methods, i.e., TD(0), that bootstrap from the value of the immediate next state or states in the MDP to learn the value of the current state. The TD error δt(st, st+1|θ) to be minimized is as follows:
δt(st, st+1|θ) = (rt + γvπ(st+1|θ)) − vπ(st|θ).

In the following, δt(st, st+1|θ) is written as δt for short. When using function approximation and gradient descent to optimize the parameters, the loss to be minimized is the squared TD error. At the t-th time-step, the objective function used in TD learning is LTD = ‖rt + γvπ(st+1|θ) − vπ(st|θ)‖². Similarly, the optimal action value function Q satisfies the Bellman optimality equation Q∗(st, at|θ) = Rt + γ max_a Q∗(st+1, a|θ). The objective used in Q-Learning is thus LQ = ‖rt + γ max_a Q(st+1, a|θ) − Q(st, at|θ)‖². The partial derivative of v(st|θ) or Q(st, at|θ) with respect to θ is the direction in which TD learning methods update the parameters. We use gt(st|θ) and gt(st, at|θ) to refer to these vectors. In the linear case, v(st|θ) = θt^⊤φ(st), where φ(st) are the features of state st. In this case, gt(st, at|θ) is the feature vector φ(st, at), and in general, gt(st, at|θ) = ∂θQ(st, at|θ). It is computed as

gt(st, at|θ) = ∂Q(st, at|θ)/∂θ,

and the parameters are updated as θ ← θ + αδtgt(st, at|θ).
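For concreteness, here is a minimal NumPy sketch of the linear TD(0) update just described; the feature vectors and the transition tuple are assumed to be provided by the environment.

```python
import numpy as np

def td0_update(theta, phi_s, phi_s_next, r, alpha=0.01, gamma=0.99):
    """One linear TD(0) step: theta <- theta + alpha * delta * g(s), with g(s) = phi(s)."""
    delta = r + gamma * (theta @ phi_s_next) - (theta @ phi_s)  # TD error
    return theta + alpha * delta * phi_s
```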
We have already briefly alluded to the issue of over-generalization in Section 1. One of the reasons we use function approximation is that we want the values we learn to generalize to similar states. But one of these similar states is likely to be the target of our Bellman equation v(st+1|θ). If the weights that correspond to large or important features in φ(st+1) are strengthened, then the TD error might not decrease as much as it should, or it might even increase. We refer to parameter updates that work against the objective of reducing the TD error as over-generalization.
2.2 PROXIMAL MAPPING FORMULATION OF TD LEARNING
In this section, we introduce the basics of proximal mapping, which provide the mathematical formulation of our algorithm design. A proximal mapping (Parikh and Boyd, 2013) proxf (w) associated
with a convex function f is defined as
proxf (w) = arg minx ( f(x) + (1/2)‖w − x‖₂² )   (1)
Such a proximal mapping is typically used after a parameter update step to incorporate constraints on the parameters. Intuitively, the first term f(x) provides incentive to move x in the direction that minimizes f , whereas the second term 12‖w − x‖ 2 2 provides pressure to keep x close to w. If f(x) = 0, then proxf (w) = w, the identity function. f can often be a regularization term to help incorporate prior knowledge. For example, for learning sparse representations, the case of f(x) = β‖x‖1 is particularly important. In this case, the entry-wise proximal operator is:
proxf (w)i = sign(wi)max(|wi|−β, 0)
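As a small illustration, the entry-wise soft-thresholding operator above can be written directly in NumPy; this is a generic sketch of the L1 proximal operator, not code from the paper.

```python
import numpy as np

def prox_l1(w, beta):
    """Proximal operator of f(x) = beta * ||x||_1 (entry-wise soft thresholding)."""
    return np.sign(w) * np.maximum(np.abs(w) - beta, 0.0)
```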
Proximal methods have been shown to be useful for various reinforcement learning problems, e.g., proximal gradient TD learning (Liu et al., 2015) integrates the proximal method with gradient TD learning (Sutton et al., 2009) using the Legendre-Fenchel convex conjugate function (Boyd and Vandenberghe, 2004), and projected natural actor-critic (Thomas et al., 2013) interprets natural gradient as a special case of proximal mapping. We now introduce the recursive proximal mapping formulation of TD learning algorithm (Bertsekas, 2011). At the t-th iteration, the TD update law solves a recursive proximal mapping, i.e., θt+1 = θt + αtδtgt(st), which is equivalent to
θt+1 = arg minx { 〈x, −δtgt(st)〉 + (1/(2αt))‖x − θt‖₂² }   (2)
It should be noted that Eq. (2) is different from Eq. (1) in that Eq. (1) has an explicit objective function f to optimize. Eq. (2) does not have an explicit objective function, but rather corresponds to a fixed-point equation. In fact, it has been proven that the TD update term δtgt(st) does not optimize any objective function (Maei, 2011). Discussing this in details goes beyond the scope of the paper, and we refer interested readers to (Maei, 2011; Bertsekas, 2011) for a comprehensive discussion of this topic.
3 RELATED WORK
To the best of our knowledge, the closest work to ours to address the over-generalization problem is the Temporal Consistency loss (TC-loss) method (Pohlen et al., 2018) and the constrained TD approach (Durugkar and Stone, 2017).
The TC-loss (Pohlen et al., 2018) aims to minimize the change to the target state by explicitly minimizing a separate loss that measures the change in the value of s′, i.e., L(Vθ(s′, a′) − Vθt−1(s′, a′)). When used in conjunction with a TD loss, it guarantees that the updated estimates adhere to the Bellman operator and thus are temporally consistent. However, there are some drawbacks to this method. Firstly, the asymptotic solution of the TC-loss method is different from the TD solution due to the two separate losses, and the solution property remains unclear. Secondly, each parameter component plays a different role in changing v(s′). For instance, if a component of θ is zero or close to zero, then this component does not have much impact on changing v(s′). Different parameter components, therefore, should be treated differently according to their impact on the value function change (or action-value change in the case of DQN).
Another recent work in this direction is the constrained TD (CTD) algorithm (Durugkar and Stone, 2017). To avoid over-generalization among similar states, CTD uses a vector rejection technique to diminish the update along the direction of the gradient of the action-value function of the successive state. In other words, the actual update is made orthogonal to the gradient at the next state. However, the CTD method suffers from the double-sampling problem, which is explained in detail in Appendix A. Moreover, since it mainly uses vector rejection, this method is not straightforward to extend to nonlinear function approximation, such as the DQN network, where over-generalization can be severe. Lastly, if the state representations of st and st+1 are highly similar, as in the case of visual environments like Atari games, then the vector rejection causes the update to be almost orthogonal to the computed gradient.
4 HADAMARD PRODUCT REGULARIZED TD
In this section, we analyze the reason for over-generalization and propose a novel algorithm to mitigate it.
4.1 ANALYSIS OF OVER-GENERALIZATION
Consider the update to the parameter θt as follows, with TD error δt, learning rate α and a linear function approximation v(st|θt) with features φ(st) and gradient g(st|θt) = φ(st).
θt+1 = θt + αδ(st, st+1|θt)φ(st).

If we substitute the above value for θt+1, the TD error for the same transition after the update is

δ(st, st+1|θt+1) = rt − (θt+1^⊤φ(st) − γθt+1^⊤φ(st+1)) = δ(st, st+1|θt) − αδ(st, st+1|θt)(φ(st)^⊤φ(st) − γφ(st)^⊤φ(st+1)),

and thus

δ(st, st+1|θt) − δ(st, st+1|θt+1) = αδ(st, st+1|θt)(φ(st)^⊤φ(st) − γφ(st)^⊤φ(st+1)).

We see above that the decrease in the TD error at t depends on two factors, the inner product of the gradient with features of st, and its inner product with the features of st+1. This decrease will be reduced if φ(st) and φ(st+1) have a large inner product. If this inner product exceeds (1/γ)φ(st)^⊤φ(st), then in fact the error increases. Thus over-generalization is an effect of a large positive correlation between the update and the features of st+1, especially when contrasted with the correlation of this same update with the features of st.
We are then left with the following question: what kind of weight update can maximize the reduction in δt? Merely minimizing the correlation of the update with φ(st+1) is insufficient, as it might lead to minimizing the correlation with φ(st). This is the issue that Constrained TD (Durugkar and Stone, 2017) faces with its gradient projection approach. Hence, we must also maximize its correlation with φ(st).
To examine this effect, we consider the properties of parameters that we should avoid changing, to the extent possible. Consider the linear value function approximation case: vθ(s) = φ(s)>θ. For example, consider st and st+1 with the features φ(st) = [0, 2, 1], and φ(st+1) = [2, 0, 1]. Then for two different weights, θ1 = [0, 0, 2] and θ2 = [1, 1, 0], we have the same value for both these parameter vectors at both st and st+1, i.e. φ(st)>θ1 = φ(st+1)>θ1 = φ(st)>θ2 = φ(st+1)>θ2 = 2. However, the results of the Hadamard product (◦) of these parameters with the feature vectors are different, i.e.
φ(st) ◦ θ1 = φ(st+1) ◦ θ1 = [0, 0, 2], φ(st) ◦ θ2 = [0, 2, 0], φ(st+1) ◦ θ2 = [2, 0, 0],
where the Hadamard products of θ1 with φ(st) and φ(st+1) are more correlated than those of θ2. An update to the last weight of θ1 will cause the values of both st and st+1 to change, but an update to the second weight of θ2 will affect only st. In fact, unless both the first and the second weights change, st and st+1 do not change simultaneously. In this sense, θ1 tends to cause aggressive generalization across the values of st and st+1, and thus the TD update to θ1 should be regularized more heavily. The Hadamard product of the weights and the features allows us to distinguish between θ1 and θ2 in this way.
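The following short NumPy snippet reproduces this toy example numerically; it is only an illustration of the Hadamard-product correlation used by HR-TD, with all values taken from the example above.

```python
import numpy as np

phi_s, phi_s_next = np.array([0., 2., 1.]), np.array([2., 0., 1.])
theta1, theta2 = np.array([0., 0., 2.]), np.array([1., 1., 0.])

# Both parameter vectors give identical values at s and s' ...
print(phi_s @ theta1, phi_s_next @ theta1, phi_s @ theta2, phi_s_next @ theta2)  # 2.0 each

# ... but their Hadamard products with the features differ, which is what
# HR-TD uses to decide how heavily to regularize the update.
print((phi_s * theta1) @ (phi_s_next * theta1))  # 4.0 -> v(s) and v(s') strongly coupled
print((phi_s * theta2) @ (phi_s_next * theta2))  # 0.0 -> updates affect s and s' separately
```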
Motivated by this observation, we aim to reduce the over-generalization by controlling the weighted feature correlation between the current state g(s)◦θ and the successive state g(s′)◦θ, i.e., Corr(g(s)◦ θ, g(s′) ◦ θ).
4.2 ALGORITHM DESIGN
Given the constraint as shown above, the constrained Mean-Squares Error (MSE) is formulated as
θ∗ = arg minθ (1/2)‖V − vθ‖₂², s.t. Corr(g(s) ◦ θ, g(s′) ◦ θ) ≤ ρ,   (3)
Algorithm 1 Hadamard product Regularized TD (HR-TD) Learning
Require: T, αt (learning rate), γ (discount factor), η (initial regularization parameter).
Ensure: Initialize θ0.
  for t = 1, 2, 3, · · · , T do
    ηt = η/t
    Update θt+1 according to Eq. (5).
  end for
where V is the true value function. Using the recursive proximal mapping with respect to the constrained objective function, the per-step parameter update of Eq. (3) can be written as

θt+1 = arg minθ { −θ^⊤(E[δt]g(st)) + (1/(2αt))‖θ − θt‖₂² }, s.t. Corr(g(st) ◦ θ, g(st+1) ◦ θt) ≤ ρ.
Using Lagrangian duality, it can be reformulated as

θt+1 = arg minθ { −θ^⊤(E[δt]g(st)) + (1/(2αt))‖θ − θt‖₂² + ηCorr(g(st) ◦ θ, g(st+1) ◦ θt) },
where η is the factor that weights the constraint against the objective. The closed-form solution to the weight update is
θt+1 = θt + αt ( E[δt]g(st)− η(g(st) ◦ g(st+1) ◦ θt) ) (4)
Using sample-based estimation, i.e., using gt(s) (resp. gt(s′)) to estimate g(s) (resp. g(s′)) , and using δt to estimate E[δt], the Eq. (4) becomes
θt+1 = θt + αt ( δtgt(st)− η(gt(st) ◦ gt(st+1) ◦ θt) ) (5)
In the proposed algorithm, if the component of the weights helps decrease the Hadamard product correlation, then it is not penalized. Now the algorithm for value function approximation is formulated as in Algorithm 1, and the algorithm for control is formulated in Algorithm 2.
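As a concrete illustration of the update in Eq. (5) and of Algorithm 1, the following is a minimal NumPy sketch of linear HR-TD for policy evaluation. The feature vectors and transition tuples are assumed to be provided by the environment, and the hyper-parameter values are placeholders.

```python
import numpy as np

def hr_td_update(theta, phi_s, phi_s_next, r, alpha, gamma, eta):
    """One HR-TD step (Eq. (5)) with linear values v(s) = theta^T phi(s)."""
    delta = r + gamma * (theta @ phi_s_next) - (theta @ phi_s)  # TD error
    # Hadamard regularizer: penalize weights whose weighted features are
    # shared by s and s', which is what drives over-generalization.
    reg = phi_s * phi_s_next * theta
    return theta + alpha * (delta * phi_s - eta * reg)

def evaluate(transitions, dim, alpha=0.01, gamma=0.99, eta0=0.3):
    theta = np.zeros(dim)
    for t, (phi_s, r, phi_s_next) in enumerate(transitions, start=1):
        eta_t = eta0 / t                      # decayed as in Algorithm 1
        theta = hr_td_update(theta, phi_s, phi_s_next, r, alpha, gamma, eta_t)
    return theta
```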
4.3 HADAMARD PRODUCT REGULARIZED DEEP Q NETWORK
In DQN, the value function is learned by minimizing the following squared Bellman error using SGD and backpropagating the gradients through the parameter θ
LDQN = (1/2)‖rt + γQ(st+1, at+1|θ′) − Q(st, at|θ)‖².   (6)
Here, θ′ are the parameters of the target network, which are periodically updated to match the parameters being trained. The action at+1 is chosen as arg max_a Q(st+1, a|θ′) if we use DQN, and arg max_a Q(st+1, a|θ) if we use Double DQN (DDQN) (Van Hasselt et al., 2016). We use DDQN in our experiments, as DQN has been shown to over-estimate the target value.
Let φ(st|θ) be the activations of the last hidden layer before the Q-value calculation and θ−1 be the corresponding weights of this layer. The correlation term can then be written as Lcorr = Corr(φ(st|θ) ◦ θ−1, φ(st+1|θ) ◦ θ−1). We do not use the target network when calculating this loss. The loss used in Hadamard regularized DDQN is then an η-weighted mixture of Eq. (6) and this term,

LHR−TD = LDQN + ηLcorr.   (7)
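Below is a minimal PyTorch-style sketch of the combined loss in Eq. (7). The network interface (features, returning last-hidden-layer activations, and head_weight, the final linear layer's weights) is an assumed convention for the sketch, and the correlation is implemented as a mean inner product of the Hadamard products, which is one reasonable reading of Corr(·, ·).

```python
import torch
import torch.nn.functional as F

def hr_dqn_loss(q_net, target_net, batch, gamma=0.99, eta=0.1):
    s, a, r, s_next, done = batch

    # Standard Double DQN TD loss, Eq. (6).
    q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        a_next = q_net(s_next).argmax(dim=1, keepdim=True)
        q_next = target_net(s_next).gather(1, a_next).squeeze(1)
        target = r + gamma * (1.0 - done) * q_next
    td_loss = F.mse_loss(q_sa, target)

    # Hadamard correlation regularizer, Eq. (7); no target network here.
    phi_s = q_net.features(s)            # last hidden layer activations
    phi_next = q_net.features(s_next)
    w = q_net.head_weight()              # weights of the final linear layer
    h_s = phi_s.unsqueeze(1) * w         # Hadamard products, per action row
    h_next = phi_next.unsqueeze(1) * w
    corr = (h_s * h_next).sum(dim=-1).mean()

    return td_loss + eta * corr
```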
4.4 THEORETICAL ANALYSIS
In this section, we conduct some initial analysis of Algorithm 1 with linear function approximation. For simplicity, we only discuss the linear case, i.e., ∂vθ(st) = φ(st), ∂vθ(st+1) = φ(st+1). If Algorithm 1 converges, the update of the weights according to Eq. (5) should satisfy the following condition
E[δtφ(st)− η(φ(st+1) ◦ θt ◦ φ(st))] = 0.
Rewriting δt and denoting ∆φt = φ(st)− γφ(st+1), we have
E[φ(st)rt] = E[φ(st)∆φ>t + ηM ]θt,
where M = E[Diag(φ(st) ◦ φ(st+1))] = E[Diag(φ(st)φ>(st+1))]. Thus we have
E[φ(st)(∆φ(st))> + ηM ] = E[φ(st)φ>(st)− γφ(st)φ>(st+1) + ηDiag(φ(st)φ>(st+1))]
If we set η → γ, we observe that the second and third terms on the RHS above cancel out on the diagonal. Consider the scheme where we initialize η = γ and then reduce it over the course of training. This is equivalent to slowly introducing the discount factor into the error computation. It has been shown (Prokhorov and Wunsch, 1997) that instead of the discount factor γ provided by the MDP, a user-defined time-varying γt can help accelerate the learning process of the original MDP w.r.t. γ. This previous work suggests using a small discount factor γt < γ in the beginning, and then increasing γt gradually to γ. HR-TD results in a similar effect without defining a separate γt and its schedule.
5 EXPERIMENTS
We evaluate HR-TD on two classical control problems: Mountain Car and Acrobot using both linear function approximation with Fourier basis features and nonlinear function approximation using Deep Neural Networks. We verify that this algorithm scales to complex domains such as the Atari Learning Environment (Bellemare et al., 2013), by evaluating our approach on the game of Pong. We utilize OpenAI gym (Brockman et al., 2016) to interface our agent with the environments. We compare HR-TD to the baselines by using the following metrics: 1) Accumulated reward per episode. 2) Average change in the target Q value at s′ after every parameter update. For comparison, we consider Q learning and Q learning with TC loss (and DDQN for neural networks).
Based on our analysis, we expect HR-Q learning to begin improving the policy earlier in the learning process, and we expect HR-TD to be able to evaluate the value of a policy just as well as TD. We evaluate the change of the value of the next state as well, and consider whether HR-TD is able to reduce this change as a consequence of the regularization. We note, however, that this quantity is diagnostic in nature, rather than being the true objective. It would definitely be possible to minimize this quantity by making no learning updates whatsoever, but then we would also observe no learning.
5.1 EVALUATION
Before we consider the effect of HR-Q on control tasks, we compare the purely evaluative property of HR-TD. Here, we evaluate a trained policy on the Mountain Car domain. We run this experiment
Algorithm 2 Hadamard product Regularized Q (HR-Q) Network
Require: T, αt (learning rate), γ (discount factor), η (initial regularization parameter).
Ensure: Initialize θ0.
  repeat
    ηt = η/t
    Choose at using the policy derived from Q (e.g., ε-greedy)
    Take at, observe rt, st+1
    Add (st, at, rt, st+1) to the replay buffer
    Sample a batch from the buffer and update θt+1 using backpropagation to minimize Eq. (7)
    t ← t + 1
  until training done
10 times for each method. For each experiment, the policy is executed in the environment for 10000 steps, resetting the agent to one of the start states if it terminates. We learn the value function using TD by sampling a batch of transitions from this dataset and take 10,000 learning steps per run.
The metric we compare is the MSE with the Monte Carlo estimate of the same run, taken over 300,000 transitions. The MSE value for each experiment is calculated by averaging the MSE of the last 10 training steps, to reduce sampling error. Finally, we take the mean of these errors across the 10 runs and present them in Table 1. TD and HR-TD reach roughly the same value for all the runs. TC, however, converges to a different minimum that leads to a very high MSE. This may be because the competing TD and TC objectives in this method cause the learning to destabilize. If we lower the learning rate for TC, then we avoid this behavior but also do not converge in the given max number of training steps.
5.2 NEURAL NETWORKS
We now consider the performance of HR-Q learning when using Neural Networks for function approximation. We consider two domains, Mountain Car and Acrobot, but we do not perform any basis expansion and feed the state values directly into a neural network with a single hidden layer of 64 units.
We compare the performance of HR-Q in Figure 1 and 2, with Q-Learning and Q-learning with TC loss. We use DDQN Van Hasselt et al. (2016) as the underlying algorithm for Q-learning. Details of
the network and hyperparameters are in Appendix B. We take 20 independent runs, with a different seed in each run used to initialize Tensorflow, NumPy, and the OpenAI Gym environment. Each run is taken over 1000 episodes. In both these experiments, we see HR-TD starts to learn a useful policy behavior before either of the other techniques. Interesting to note is that in Fig. 1b, HR-TD learns a state representation that causes the target value to change less than DQN but does not restrict it as much as TC. But in Fig. 2b we see that HR-TD is able to find a representation that is better at keeping the target value separate than TC is. However, in both these cases, the value function that is learned seems to be quickly useful for learning a better policy.
5.3 ATARI
We also validate the applicability of this technique to a more complex domain and a more complex network. We apply the HR-Q to DDQN on the Atari domain to verify that the technique is scalable and that the findings and trends we see in the first two experiments carry forward to this challenging task. We use the network architecture specified in Mnih et al. (2015), and the hyper-parameters for TC as specified in Pohlen et al. (2018). Experimental details are specified in Appendix B. From the results, we see that HR-TD does not interfere with learning on the complex network, and does about as well as DDQN.
5.4 LINEAR FUNCTION APPROXIMATION
Finally, to study HR-TD with linear function approximation, we look at the Mountain Car domain. We expand the feature space using Fourier basis functions (Konidaris et al., 2011). All methods are trained with an order-6 Fourier basis expansion, which leads to 36 features for Mountain Car. We use a constant learning rate α = 0.01 for all three methods. For HR-TD we initialize the regularization factor η = 0.3. Each episode is run
until we receive an episode termination signal from the Gym wrapper, after a maximum of 200 steps if the goal is not reached. We show the learning curves for 1000 episodes, averaged over 20 independent runs. In Figure 4, we see that HR-Q and TC perform better than Q-learning. HR-Q also shows more stable updates (it changes the value of the next state less) than Q-learning, and is comparable to Q-learning with the added TC loss over the course of training.
6 CONCLUSION
In this paper, we analyze the problem of over-generalization in TD learning with function approximation. This analysis points to the potential pitfalls of over-generalization in TD-learning. Based on the analysis, we propose a novel regularization scheme based on the Hadamard product. We also show that with the right weight on the regularization, the solution of this method is the same as that of TD. Finally, we experimentally validate the effectiveness of our algorithm on benchmarks of varying complexity.
A PROBLEM WITH CTD: DOUBLE SAMPLING PROBLEM
Double sampling comes into effect whenever we need the product of two expectations. If an expression contains 3 expectations we will need three independent samples. Below we will first write out why residual gradients have a double sampling problem and why TD doesn’t. Then we shall show why CTD has this problem, and might actually suffer from a triple sampling problem. Note that the double-sampling problem only exists in stochastic MDP problems. In a Deterministic MDP, double sampling will not be an issue.
δ(s) = r(s, a, s′) + γV(s′|θ) − V(s|θ)

L = (1/2) E[‖δ(s)‖²]

∂L/∂θ = E[δ(s)(g(s) − g(s′))]   . . . Residual Gradient
      = E[δ(s)] E[g(s) − g(s′)]

∂L/∂θ = E[δ(s)g(s)]   . . . TD update

∂L/∂θ = E[ δ(s)g(s) − (〈g(s), g(s′)〉/‖g(s′)‖₂²) g(s′) ]   . . . Constrained TD update
In the constrained TD update, the first term is the regular TD update, which has no double-sampling issues. However, the second term, −(〈g(s), g(s′)〉/‖g(s′)‖₂²) g(s′), involves s′ in multiple places, and would need multiple independent samples of s′ to be estimated without bias; it therefore suffers from the double-sampling problem.
B EXPERIMENT DETAILS
B.1 LINEAR FUNCTION APPROXIMATION
Mountain Car:
Basis function: Fourier basis, order 6
Max steps per episode: 200
Number of episodes: 500
B.2 MLP-DQN
Layers: [64], Activation: ReLU, Optimizer: Adam
Replay memory size: 50000
Batch size: 32
Minimum ε (for exploration): 0.01; ε is decayed over 5% of the episodes
η is decayed as ηt = η/(T + 1), where T is the episode number
B.2.1 MOUNTAIN CAR
Max steps per episode: 200, Number of episodes: 1000
B.2.2 ACROBOT
Max steps per episode: 500, Number of episodes: 200
B.3 DQN FOR ATARI
Network: DQN architecture from Mnih et al. (2015), Optimizer: Adam
Replay memory size: 100000
Minimum ε: 0.01; ε is decayed over 5% of the frames
Training frames: 10M game frames, fed 4 at a time to the network (2.5M agent steps)
η is decayed as ηt = η/(T + 1), where T is the integer value of t/5000 and t is the iteration number.
C POLICY EVALUATION LEARNING CURVES | 1. What is the focus of the paper regarding overgeneralization in deep reinforcement learning?
2. What are the strengths and weaknesses of the proposed regularization scheme?
3. How does the reviewer assess the significance of the paper's contribution to the field?
4. What are some minor comments regarding the paper's content? | Review | Review
The paper considers the problem of overgeneralization between adjacent states of the one-step temporal difference error, when using function approximation. The authors suggest an explicit regularization scheme based on the correlation between the respective features, which reduces to penalizing the Hadamard product.
The paper has some interesting ideas, and the problem is very relevant to deep RL. Having a more principled approach to target networks would be nice. I have some concerns though:
* The key motivation is not convincing. Our goal with representation learning for deep RL is to have meaningful generalization between similar states. The current work essentially tries to reduce this correlation for the sake of interim optimization benefits of the one-step update.
* The back and forth between fixed linear features and non-linear learned features needs to be polished. The analysis is usually given for the linear case, but in the deep setting the features are replaced with gradients. Also, the relationship with target networks, as well as multi-step updates (e.g. A3C) needs to be mentioned early, as these are the main ways of dealing with or bypassing the issue the authors are describing.
* The empirical validation is very weak -- two toy domains, and Pong, the easiest Atari game, so unfortunately there isn’t enough evidence to suggest that the approach would be impactful in practice.
Minor comments:
* there must be a max in the definition of v* somewhere
* V_pi is usually used for the true value function, rather than the estimate
* Sections 2.2 and 4.2 should be better bridged
* The relationship with the discount factor just before Section 5 is interesting, but quite hand-wavy -- the second term only concerns the diagonal elements, and the schedule on gamma would be replaced by a schedule on eta. |
ICLR | Title
HR-TD: A Regularized TD Method to Avoid Over-Generalization
Abstract
Temporal Difference learning with function approximation has been widely used recently and has led to several successful results. However, compared with the original tabular-based methods, one major drawback of temporal difference learning with neural networks and other function approximators is that they tend to over-generalize across temporally successive states, resulting in slow convergence and even instability. In this work, we propose a novel TD learning method, Hadamard product Regularized TD (HR-TD), that reduces over-generalization and thus leads to faster convergence. This approach can be easily applied to both linear and nonlinear function approximators. HR-TD is evaluated on several linear and nonlinear benchmark domains, where we show improvement in learning behavior and performance.
1 INTRODUCTION
Temporal Difference (TD) learning is one of the most important paradigms in Reinforcement Learning (Sutton and Barto, 1998). Techniques based on combining TD learning with nonlinear function approximators and stochastic gradient descent, such as deep networks, have led to significant breakthroughs in large-scale problems to which these methods can be applied (Mnih et al., 2015; Silver et al., 2016; Schulman et al., 2015).
At its heart, the TD learning update is straightforward. v(s) estimates the value of being in a state s. After an action a that transitions the agent from s to next state s′, v(s) is altered to be closer to the (discounted) estimated value of s′, v(s′) (plus any received reward, r). The difference between these estimated values is called the temporal difference error (TD error) and is typically denoted as δ. Formally, δ = r + γv(s′)− v(s), where γ is the discount factor, and r + γv(s′) is known as the TD target.
When states are represented individually (the tabular case), v(s) can be altered independently from v(s′) using the update rule v(s) ← v(s) + αδ, where α is the learning rate. In fully deterministic environments, α can be set to 1, thus causing v(s) to change all the way to the TD target. Otherwise, in a stochastic environment, α is set less than 1 so that v(s) only moves part of the way towards the TD target, thus avoiding over-generalization from a single example. When, on the other hand, states are represented with a function approximator, as is necessary in large or continuous environments, v(s) can no longer be updated independently from v(s′). That is because s and s′ are likely to be similar (assuming actions have local effects), any change to v(s) is likely to also alter v(s′). While such generalization is desirable in principle, it also has the unintended consequence of changing the TD target, which in turn can cause the TD update to lead to an increase in the TD error between s and s′. This unintended consequence can be seen as a second form of over-generalization: one that can be much more difficult to avoid.
Past work has identified this form of over-generalization in RL, has observed that it is particularly relevant in methods that use neural network function approximators such as DQN (Mnih et al., 2015), and has proposed initial solutions (Durugkar and Stone, 2017; Pohlen et al., 2018). In this paper, we present a deeper analysis of the reasons for this form of over-generalization and introduce a novel learning algorithm termed HR-TD, based on the recursive proximal mapping formulation of TD learning (Bertsekas, 2011), which offers a mathematical framework for parameter regularization that allows one to control for this form of over-generalization. Empirical results across multiple
domains demonstrate that our novel algorithm learns more efficiently (from fewer samples) than prior approaches.
The rest of the paper is organized as follows. Section 2 offers a brief background on TD learning, the over-generalization problem, and optimization techniques used in the derivation of our algorithm. In Section 3, we discuss the state-of-the-art research in this direction. The motivation and the design of our algorithm are presented in Section 4. Finally, the experimental results of Section 5 validate the effectiveness of the proposed algorithm.
2 BACKGROUND
This section builds on the notation introduced in Section 1 to specify the problem of interest in full detail. We introduce the background for TD learning, over-generalization, and proximal mapping, which are instrumental in the problem formulation and algorithm design.
2.1 REINFORCEMENT LEARNING AND OVER-GENERALIZATION
Reinforcement Learning problems are generally defined as Markov Decision Processes (MDPs). We use the definition and notation as used in Sutton and Barto (2017), unless otherwise specified. In this paper, we focus on domains with large or continuous state spaces such that function approximation is needed. We define the value estimate of state s with parameter θ when following policy π as, vπ(s|θ) = Eπ [ Rt + γRt+1 + γ 2Rt+2 + . . . |St = s ] . Here Rt is the random variable associated with a reward at time t, and rt is used as an instantiation of this random variable. The optimal (true) value function v∗π satisfies the Bellman equation given as v ∗ π(s|θ) = Eπ [Rt + γv∗π(s′|θ)]. During TD learning, the estimated value function is altered to try to make this property hold. In effect, state values are updated by bootstrapping off of the estimated value of the predicted next states.
We focus on 1-step TD methods, i.e., TD(0), that bootstrap from the value of the immediate next state or states in the MDP to learn the value of the current state. The TD error δt(st, st+1|θ) to be minimized is as follows:
δt(st, st+1|θ) = (rt + γvπ(st+1|θ)) − vπ(st|θ).

In the following, δt(st, st+1|θ) is written as δt for short. When using function approximation and gradient descent to optimize the parameters, the loss to be minimized is the squared TD error. At the t-th time-step, the objective function used in TD learning is LTD = ‖rt + γvπ(st+1|θ) − vπ(st|θ)‖². Similarly, the optimal action value function Q satisfies the Bellman optimality equation Q∗(st, at|θ) = Rt + γ max_a Q∗(st+1, a|θ). The objective used in Q-Learning is thus LQ = ‖rt + γ max_a Q(st+1, a|θ) − Q(st, at|θ)‖². The partial derivative of v(st|θ) or Q(st, at|θ) with respect to θ is the direction in which TD learning methods update the parameters. We use gt(st|θ) and gt(st, at|θ) to refer to these vectors. In the linear case, v(st|θ) = θt^⊤φ(st), where φ(st) are the features of state st. In this case, gt(st, at|θ) is the feature vector φ(st, at), and in general, gt(st, at|θ) = ∂θQ(st, at|θ). It is computed as

gt(st, at|θ) = ∂Q(st, at|θ)/∂θ,

and the parameters are updated as θ ← θ + αδtgt(st, at|θ).
We have already briefly alluded to the issue of over-generalization in Section 1. One of the reasons we use function approximation is that we want the values we learn to generalize to similar states. But one of these similar states is likely to be the target of our Bellman equation v(st+1|θ). If the weights that correspond to large or important features in φ(st+1) are strengthened, then the TD error might not decrease as much as it should, or it might even increase. We refer to parameter updates that work against the objective of reducing the TD error as over-generalization.
2.2 PROXIMAL MAPPING FORMULATION OF TD LEARNING
In this section, we introduce the basics of proximal mapping, which provide the mathematical formulation of our algorithm design. A proximal mapping (Parikh and Boyd, 2013) proxf (w) associated
with a convex function f is defined as
proxf (w) = arg minx ( f(x) + (1/2)‖w − x‖₂² )   (1)
Such a proximal mapping is typically used after a parameter update step to incorporate constraints on the parameters. Intuitively, the first term f(x) provides incentive to move x in the direction that minimizes f , whereas the second term 12‖w − x‖ 2 2 provides pressure to keep x close to w. If f(x) = 0, then proxf (w) = w, the identity function. f can often be a regularization term to help incorporate prior knowledge. For example, for learning sparse representations, the case of f(x) = β‖x‖1 is particularly important. In this case, the entry-wise proximal operator is:
proxf (w)i = sign(wi)max(|wi|−β, 0)
Proximal methods have been shown to be useful for various reinforcement learning problems, e.g., proximal gradient TD learning (Liu et al., 2015) integrates the proximal method with gradient TD learning (Sutton et al., 2009) using the Legendre-Fenchel convex conjugate function (Boyd and Vandenberghe, 2004), and projected natural actor-critic (Thomas et al., 2013) interprets natural gradient as a special case of proximal mapping. We now introduce the recursive proximal mapping formulation of TD learning algorithm (Bertsekas, 2011). At the t-th iteration, the TD update law solves a recursive proximal mapping, i.e., θt+1 = θt + αtδtgt(st), which is equivalent to
θt+1 = arg minx { 〈x, −δtgt(st)〉 + (1/(2αt))‖x − θt‖₂² }   (2)
It should be noted that Eq. (2) is different from Eq. (1) in that Eq. (1) has an explicit objective function f to optimize. Eq. (2) does not have an explicit objective function, but rather corresponds to a fixed-point equation. In fact, it has been proven that the TD update term δtgt(st) does not optimize any objective function (Maei, 2011). Discussing this in details goes beyond the scope of the paper, and we refer interested readers to (Maei, 2011; Bertsekas, 2011) for a comprehensive discussion of this topic.
3 RELATED WORK
To the best of our knowledge, the closest work to ours to address the over-generalization problem is the Temporal Consistency loss (TC-loss) method (Pohlen et al., 2018) and the constrained TD approach (Durugkar and Stone, 2017).
The TC-loss (Pohlen et al., 2018) aims to minimize the change to the target state by explicitly minimizing a separate loss that measures the change in the value of s′, i.e., L(Vθ(s′, a′) − Vθt−1(s′, a′)). When used in conjunction with a TD loss, it guarantees that the updated estimates adhere to the Bellman operator and thus are temporally consistent. However, there are some drawbacks to this method. Firstly, the asymptotic solution of the TC-loss method is different from the TD solution due to the two separate losses, and the solution property remains unclear. Secondly, each parameter component plays a different role in changing v(s′). For instance, if a component of θ is zero or close to zero, then this component does not have much impact on changing v(s′). Different parameter components, therefore, should be treated differently according to their impact on the value function change (or action-value change in the case of DQN).
Another recent work in this direction is the constrained TD (CTD) algorithm (Durugkar and Stone, 2017). To avoid over-generalization among similar states, CTD uses a vector rejection technique to diminish the update along the direction of the gradient of the action-value function of the successive state. In other words, the actual update is made orthogonal to the gradient at the next state. However, the CTD method suffers from the double-sampling problem, which is explained in detail in Appendix A. Moreover, since it mainly uses vector rejection, this method is not straightforward to extend to nonlinear function approximation, such as the DQN network, where over-generalization can be severe. Lastly, if the state representations of st and st+1 are highly similar, as in the case of visual environments like Atari games, then the vector rejection causes the update to be almost orthogonal to the computed gradient.
4 HADAMARD PRODUCT REGULARIZED TD
In this section, we analyze the reason for over-generalization and propose a novel algorithm to mitigate it.
4.1 ANALYSIS OF OVER-GENERALIZATION
Consider the update to the parameter θt as follows, with TD error δt, learning rate α and a linear function approximation v(st|θt) with features φ(st) and gradient g(st|θt) = φ(st).
θt+1 = θt + αδ(st, st+1|θt)φ(st).

If we substitute the above value for θt+1, the TD error for the same transition after the update is

δ(st, st+1|θt+1) = rt − (θt+1^⊤φ(st) − γθt+1^⊤φ(st+1)) = δ(st, st+1|θt) − αδ(st, st+1|θt)(φ(st)^⊤φ(st) − γφ(st)^⊤φ(st+1)),

and thus

δ(st, st+1|θt) − δ(st, st+1|θt+1) = αδ(st, st+1|θt)(φ(st)^⊤φ(st) − γφ(st)^⊤φ(st+1)).

We see above that the decrease in the TD error at t depends on two factors, the inner product of the gradient with features of st, and its inner product with the features of st+1. This decrease will be reduced if φ(st) and φ(st+1) have a large inner product. If this inner product exceeds (1/γ)φ(st)^⊤φ(st), then in fact the error increases. Thus over-generalization is an effect of a large positive correlation between the update and the features of st+1, especially when contrasted with the correlation of this same update with the features of st.
We are then left with the following question: what kind of weight update can maximize the reduction in δt? Merely minimizing the correlation of the update with φ(st+1) is insufficient, as it might lead to minimizing the correlation with φ(st). This is the issue that Constrained TD (Durugkar and Stone, 2017) faces with its gradient projection approach. Hence, we must also maximize its correlation with φ(st).
To examine this effect, we consider the properties of parameters that we should avoid changing, to the extent possible. Consider the linear value function approximation case: vθ(s) = φ(s)>θ. For example, consider st and st+1 with the features φ(st) = [0, 2, 1], and φ(st+1) = [2, 0, 1]. Then for two different weights, θ1 = [0, 0, 2] and θ2 = [1, 1, 0], we have the same value for both these parameter vectors at both st and st+1, i.e. φ(st)>θ1 = φ(st+1)>θ1 = φ(st)>θ2 = φ(st+1)>θ2 = 2. However, the results of the Hadamard product (◦) of these parameters with the feature vectors are different, i.e.
φ(st) ◦ θ1 = φ(st+1) ◦ θ1 = [0, 0, 2], φ(st) ◦ θ2 = [0, 2, 0], φ(st+1) ◦ θ2 = [2, 0, 0],
where the Hadamard products of θ1 with φ(st) and φ(st+1) are more correlated than those of θ2. An update to the last weight of θ1 will cause the values of both st and st+1 to change, but an update to the second weight of θ2 will affect only st. In fact, unless both the first and the second weights change, st and st+1 do not change simultaneously. In this sense, θ1 tends to cause aggressive generalization across the values of st and st+1, and thus the TD update to θ1 should be regularized more heavily. The Hadamard product of the weights and the features allows us to distinguish between θ1 and θ2 in this way.
Motivated by this observation, we aim to reduce the over-generalization by controlling the weighted feature correlation between the current state g(s)◦θ and the successive state g(s′)◦θ, i.e., Corr(g(s)◦ θ, g(s′) ◦ θ).
4.2 ALGORITHM DESIGN
Given the constraint as shown above, the constrained Mean-Squares Error (MSE) is formulated as
θ∗ = arg minθ (1/2)‖V − vθ‖₂², s.t. Corr(g(s) ◦ θ, g(s′) ◦ θ) ≤ ρ,   (3)
Algorithm 1 Hadamard product Regularized TD (HR-TD) Learning
Require: T, αt (learning rate), γ (discount factor), η (initial regularization parameter).
Ensure: Initialize θ0.
  for t = 1, 2, 3, · · · , T do
    ηt = η/t
    Update θt+1 according to Eq. (5).
  end for
where V is the true value function. Using the recursive proximal mapping with respect to the constrained objective function, the per-step parameter update of Eq. (3) can be written as

θt+1 = arg minθ { −θ^⊤(E[δt]g(st)) + (1/(2αt))‖θ − θt‖₂² }, s.t. Corr(g(st) ◦ θ, g(st+1) ◦ θt) ≤ ρ.
Using Lagrangian duality, it can be reformulated as

θt+1 = arg minθ { −θ^⊤(E[δt]g(st)) + (1/(2αt))‖θ − θt‖₂² + ηCorr(g(st) ◦ θ, g(st+1) ◦ θt) },
where η is the factor that weights the constraint against the objective. The closed-form solution to the weight update is
θt+1 = θt + αt ( E[δt]g(st)− η(g(st) ◦ g(st+1) ◦ θt) ) (4)
Using sample-based estimation, i.e., using gt(s) (resp. gt(s′)) to estimate g(s) (resp. g(s′)) , and using δt to estimate E[δt], the Eq. (4) becomes
θt+1 = θt + αt ( δtgt(st)− η(gt(st) ◦ gt(st+1) ◦ θt) ) (5)
In the proposed algorithm, if the component of the weights helps decrease the Hadamard product correlation, then it is not penalized. Now the algorithm for value function approximation is formulated as in Algorithm 1, and the algorithm for control is formulated in Algorithm 2.
4.3 HADAMARD PRODUCT REGULARIZED DEEP Q NETWORK
In DQN, the value function is learned by minimizing the following squared Bellman error using SGD and backpropagating the gradients through the parameter θ
LDQN = (1/2)‖rt + γQ(st+1, at+1|θ′) − Q(st, at|θ)‖².   (6)
Here, θ′ are the parameters of the target network, which are periodically updated to match the parameters being trained. The action at+1 is chosen as arg max_a Q(st+1, a|θ′) if we use DQN, and arg max_a Q(st+1, a|θ) if we use Double DQN (DDQN) (Van Hasselt et al., 2016). We use DDQN in our experiments, as DQN has been shown to over-estimate the target value.
Let φ(st|θ) be the activations of the last hidden layer before the Q-value calculation and θ−1 be the corresponding weights of this layer. The correlation term can then be written as Lcorr = Corr(φ(st|θ) ◦ θ−1, φ(st+1|θ) ◦ θ−1). We do not use the target network when calculating this loss. The loss used in Hadamard regularized DDQN is then an η-weighted mixture of Eq. (6) and this term,

LHR−TD = LDQN + ηLcorr.   (7)
4.4 THEORETICAL ANALYSIS
In this section, we conduct some initial analysis of Algorithm 1 with linear function approximation. For simplicity, we only discuss the linear case, i.e., ∂vθ(st) = φ(st), ∂vθ(st+1) = φ(st+1). If Algorithm 1 converges, the update of the weights according to Eq. (5) should satisfy the following condition
E[δtφ(st)− η(φ(st+1) ◦ θt ◦ φ(st))] = 0.
Rewriting δt and denoting ∆φt = φ(st)− γφ(st+1), we have
E[φ(st)rt] = E[φ(st)∆φ>t + ηM ]θt,
where M = E[Diag(φ(st) ◦ φ(st+1))] = E[Diag(φ(st)φ>(st+1))]. Thus we have
E[φ(st)(∆φ(st))> + ηM ] = E[φ(st)φ>(st)− γφ(st)φ>(st+1) + ηDiag(φ(st)φ>(st+1))]
If we set η → γ, we observe that the second and third terms on the RHS above cancel out on the diagonal. Consider the scheme where we initialize η = γ and then reduce it over the course of training. This is equivalent to slowly introducing the discount factor into the error computation. It has been shown (Prokhorov and Wunsch, 1997) that instead of the discount factor γ provided by the MDP, a user-defined time-varying γt can help accelerate the learning process of the original MDP w.r.t. γ. This previous work suggests using a small discount factor γt < γ in the beginning, and then increasing γt gradually to γ. HR-TD results in a similar effect without defining a separate γt and its schedule.
5 EXPERIMENTS
We evaluate HR-TD on two classical control problems: Mountain Car and Acrobot using both linear function approximation with Fourier basis features and nonlinear function approximation using Deep Neural Networks. We verify that this algorithm scales to complex domains such as the Atari Learning Environment (Bellemare et al., 2013), by evaluating our approach on the game of Pong. We utilize OpenAI gym (Brockman et al., 2016) to interface our agent with the environments. We compare HR-TD to the baselines by using the following metrics: 1) Accumulated reward per episode. 2) Average change in the target Q value at s′ after every parameter update. For comparison, we consider Q learning and Q learning with TC loss (and DDQN for neural networks).
Based on our analysis, we expect HR-Q learning to begin improving the policy earlier in the learning process, and we expect HR-TD to be able to evaluate the value of a policy just as well as TD. We evaluate the change of the value of the next state as well, and consider whether HR-TD is able to reduce this change as a consequence of the regularization. We note, however, that this quantity is diagnostic in nature, rather than being the true objective. It would definitely be possible to minimize this quantity by making no learning updates whatsoever, but then we would also observe no learning.
5.1 EVALUATION
Before we consider the effect of HR-Q on control tasks, we compare the purely evaluative property of HR-TD. Here, we evaluate a trained policy on the Mountain Car domain. We run this experiment
Algorithm 2 Hadamard product Regularized Q (HR-Q) Network
Require: T, αt (learning rate), γ (discount factor), η (initial regularization parameter).
Ensure: Initialize θ0.
  repeat
    ηt = η/t
    Choose at using the policy derived from Q (e.g., ε-greedy)
    Take at, observe rt, st+1
    Add (st, at, rt, st+1) to the replay buffer
    Sample a batch from the buffer and update θt+1 using backpropagation to minimize Eq. (7)
    t ← t + 1
  until training done
10 times for each method. For each experiment, the policy is executed in the environment for 10000 steps, resetting the agent to one of the start states if it terminates. We learn the value function using TD by sampling a batch of transitions from this dataset and take 10,000 learning steps per run.
The metric we compare is the MSE with the Monte Carlo estimate of the same run, taken over 300,000 transitions. The MSE value for each experiment is calculated by averaging the MSE of the last 10 training steps, to reduce sampling error. Finally, we take the mean of these errors across the 10 runs and present them in Table 1. TD and HR-TD reach roughly the same value for all the runs. TC, however, converges to a different minimum that leads to a very high MSE. This may be because the competing TD and TC objectives in this method cause the learning to destabilize. If we lower the learning rate for TC, then we avoid this behavior but also do not converge in the given max number of training steps.
5.2 NEURAL NETWORKS
We now consider the performance of HR-Q learning when using Neural Networks for function approximation. We consider two domains, Mountain Car and Acrobot, but we do not perform any basis expansion and feed the state values directly into a neural network with a single hidden layer of 64 units.
We compare the performance of HR-Q in Figures 1 and 2 with Q-learning and Q-learning with TC loss. We use DDQN (Van Hasselt et al., 2016) as the underlying algorithm for Q-learning. Details of the network and hyperparameters are in Appendix B. We take 20 independent runs, with a different seed in each run used to initialize Tensorflow, NumPy, and the OpenAI Gym environment. Each run is taken over 1000 episodes. In both these experiments, we see that HR-TD starts to learn a useful policy before either of the other techniques. It is interesting to note that in Fig. 1b, HR-TD learns a state representation that causes the target value to change less than DQN but does not restrict it as much as TC. In Fig. 2b, however, we see that HR-TD is able to find a representation that is better at keeping the target values separate than TC is. In both cases, the value function that is learned seems to quickly become useful for learning a better policy.
5.3 ATARI
We also validate the applicability of this technique to a more complex domain and a more complex network. We apply the HR-Q to DDQN on the Atari domain to verify that the technique is scalable and that the findings and trends we see in the first two experiments carry forward to this challenging task. We use the network architecture specified in Mnih et al. (2015), and the hyper-parameters for TC as specified in Pohlen et al. (2018). Experimental details are specified in Appendix B. From the results, we see that HR-TD does not interfere with learning on the complex network, and does about as well as DDQN.
5.4 LINEAR FUNCTION APPROXIMATION
Finally, to study HR-TD with linear function approximation, we look at the Mountain Car domain. We expand the feature space using Fourier basis functions (Konidaris et al., 2011). All methods are trained with an order 6 Fourier basis expansion for Mountain Car, which leads to 36 features. We use a constant learning rate α = 0.01 for all three methods. For HR-TD we initialize the regularization factor η = 0.3. Each episode is run until we receive an episode termination signal from the Gym wrapper, which happens after a maximum of 200 steps if the goal is not reached. We show the learning curves for 1000 episodes, averaged over 20 independent runs. In Figure 4, we see that HR-Q and TC perform better than Q-learning. HR-Q also shows more stable updates (it changes the value of the next state less) than Q-learning, comparable to Q-learning with the added TC loss, over the course of training.
6 CONCLUSION
In this paper, we analyze the problem of over-generalization in TD learning with function approximation. This analysis points to the potential pitfalls of over-generalization in TD-learning. Based on the analysis, we propose a novel regularization scheme based on the Hadamard product. We also show that with the right weight on the regularization, the solution of this method is the same as that of TD. Finally, we experimentally validate the effectiveness of our algorithm on benchmarks of varying complexity.
A PROBLEM WITH CTD: DOUBLE SAMPLING PROBLEM
Double sampling comes into effect whenever we need the product of two expectations. If an expression contains 3 expectations we will need three independent samples. Below we will first write out why residual gradients have a double sampling problem and why TD doesn’t. Then we shall show why CTD has this problem, and might actually suffer from a triple sampling problem. Note that the double-sampling problem only exists in stochastic MDP problems. In a Deterministic MDP, double sampling will not be an issue.
δ(s) = r(s, a, s′) + γ V(s′|θ) − V(s|θ)

L = (1/2) E[‖δ(s)‖]²

∂L/∂θ = E[δ(s)(g(s) − g(s′))]  . . . Residual Gradient
       = E[δ(s)] E[g(s) − g(s′)]

∂L/∂θ = E[δ(s) g(s)]  . . . TD update

∂L/∂θ = E[ δ(s) g(s) − (⟨g(s), g(s′)⟩ / ‖g(s′)‖₂²) g(s′) ]  . . . Constrained TD update

In the constrained TD update, the first term is the regular TD update, which has no double-sampling issues. However, the second term, −(⟨g(s), g(s′)⟩ / ‖g(s′)‖₂²) g(s′), involves s′ in multiple places and needs multiple independent samples of s′ to be estimated without bias, and thus has the double-sampling problem.
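To make this concrete, the following is a small numeric sketch (assumed toy values; h(s′) stands in for any factor, such as the rejection term above, that also depends on the sampled next state). It shows that reusing a single draw of s′ gives a biased estimate of a product of expectations, while two independent draws do not:

import numpy as np

rng = np.random.default_rng(0)
gamma, r, V_s = 0.9, 0.0, 1.0
# Toy stochastic transition: from s, the next state is s1 (V=0) or s2 (V=2) with probability 1/2 each.
next_states = [(0.0, 1.0), (2.0, -1.0)]  # (V(s'), h(s')) pairs

def draw():
    v, h = next_states[rng.integers(2)]
    return v, h

true_E_delta = r + gamma * 0.5 * (0.0 + 2.0) - V_s        # = -0.1
true_E_h = 0.5 * (1.0 + (-1.0))                           # = 0.0
true_product = true_E_delta * true_E_h                    # = 0.0

one_sample, two_sample = [], []
for _ in range(200000):
    v1, h1 = draw()
    v2, h2 = draw()                      # second, independent draw of s'
    delta = r + gamma * v1 - V_s
    one_sample.append(delta * h1)        # same draw reused: biased estimate of E[delta]*E[h]
    two_sample.append(delta * h2)        # independent draws: unbiased
print(true_product, np.mean(one_sample), np.mean(two_sample))

With these values the single-sample estimator converges to about −0.9, far from the true product 0, while the two-sample estimator is unbiased.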
B EXPERIMENT DETAILS
B.1 LINEAR FUNCTION APPROXIMATION
Mountain Car: Basis function: Fourier basis, order 6. Max steps per episode: 200. Number of episodes: 500.
B.2 MLP-DQN
Layers: [64], Activation: ReLU, Optimizer: Adam. Replay memory size: 50000, batch size: 32. Minimum ε (for exploration): 0.01; ε is decayed over 5% of the episodes. η is decayed as ηt = η/(T+1), where T is the episode number.
B.2.1 MOUNTAIN CAR
Max steps per episode: 200, Number of episodes: 1000
B.2.2 ACROBOT
Max steps per episode: 500, Number of episodes: 200
B.3 DQN FOR ATARI
Network: DQN architecture from Mnih et al. (2015), Optimizer: Adam. Replay memory size: 100000. Minimum ε: 0.01; ε is decayed over 5% of the frames. Training frames: 10M game frames, fed 4 at a time to the network (2.5M agent steps). η is decayed as ηt = η/(T+1), where T is the integer value of t/5000 and t is the iteration number.
C POLICY EVALUATION LEARNING CURVES | 1. What is the main contribution of the paper, and how does it address the issue of over-generalization in temporal difference learning?
2. How well does the paper justify the proposed algorithm, and how does it relate to previous work in the field?
3. Are there any concerns regarding the experiments conducted in the paper, such as missing details or lack of significance?
4. How does the paper handle the issue of function approximation, and how does this impact the results?
5. Are there any imprecise parts of the paper that need clarification, such as the definition of the value function or the objective function of TD learning?
6. How does the paper differentiate itself from other approaches in the field, such as Q-learning, and how does it demonstrate its advantage over these methods?
7. Are there any potential biases in the experimental setup or results that could impact the validity of the conclusions drawn from them?
8. How well does the paper discuss the limitations of the proposed approach and its potential applications in different domains? | Review | Review
This paper introduces a variation on temporal difference learning for the function approximation case that attempts to resolve the issue of over-generalization across temporally-successive states. The new approach is applied to both linear and non-linear function approximation, and for prediction and control problems. The algorithmic contribution is demonstrated with a suite of experiments in classic benchmark control domains (Mountain Car and Acrobot), and in Pong.
This paper should be rejected because (1) the algorithm is not well justified either by theory or practice, (2) the paper never clearly demonstrates the existence of problem they are trying to solve (nor differentiates it from the usual problem of generalizing well), (3) the experiments are difficult to understand, missing many details, and generally do not support a significant contribution, and (4) the paper is imprecise and unpolished.
Main argument
The paper does not do a great job of demonstrating that the problem it is trying to solve is a real thing. There is no experiment in this paper that clearly shows how this temporal generalization problem is different from the need to generalize well with function approximation. The paper points to references to establish the existence of the problem, but for example the Durugkar and Stone paper is a workshop paper and the conference version of that paper was rejected from ICLR 2018 and the reviewers highlighted serious issues with the paper—that is not work to build upon. Further the paper under review here claims this problem is most pressing in the non-linear case, but the analysis in section 4.1 is for the linear case.
The resultant algorithm does not seem well justified, and has a different fixed point than TD, but there is no discussion of this other than section 4.4, which does not make clear statements about the correctness of the algorithm or what it converges to. Can you provide a proof or any kind of evidence that the proposed approach is sound, or how it’s fixed point relates to TD?
The experiments do not provide convincing evidence of the correctness of the proposed approach or its utility compared to existing approaches. There are so many missing details it is difficult to draw many conclusions:
1) What was the policy used in exp1 for policy evaluation in MC?
2) Why Fourier basis features?
3) In MC with DQN how did you adjust the parameters and architecture for the MC task?
4) Was the reward in MC and Acrobot -1 per step or something else
5) How did you tune the parameters in the MC and Acrobot experiments?
6) Why so few runs in MC, none of the results presented are significant?
7) Why is the performance so bad in MC?
8) Did you evaluate online learning or do tests with the greedy policy?
9) How did you initialize the value functions and weights?
10) Why did you use experience replay for the linear experiments?
11) IN MC and Acrobot why only a one layer MLP?
Ignoring all that, the results are not convincing. Most of the results in the paper are not statistically significant. The policy evaluation results in MC show little difference from regular TD. The Pong results show DQN is actually better. This makes the reader wonder if the results with DQN on MC and Acrobot are only worse because you did not properly tune DQN for those domains, whereas the default DQN architecture is well tuned for Atari, and that is why your method is competitive in the smaller domains.
The differences in the “average change in value plots” are very small if the rewards are -1 per step. Can you provide some context to understand the significance of this difference? In the last experiment linear FA and MC, the step-size is set equal for all methods—this is not a valid comparison. Your method may just work better with alpha = 0.1.
The paper has many imprecise parts, here are a few:
1) The definition of the value function would be approximate not equals unless you specify some properties of the function approximation architecture. Same for the Bellman equation
2) equation 1 of section 2.1 is neither an algorithm or a loss function
3) TD does not minimize the squared TD. Saying that is the objective function of TD learning in not true
4) end of section 2.1 says “It is computed as” but the following equation just gives a form for the partial derivative
5) equation 2, x is not bounded
6) You state TC-loss has an unclear solution property, I don’t know what that means and I don’t think your approach is well justified either
7) Section 4.1 assumes linear FA, but its implied up until paragraph 2 that it has not assumed linear
8) treatment of n_t in alg differs from appendix (t is now the episode number)
9) Your method has a n_t parameter that is adapted according to a schedule seemingly giving it an unfair advantage over DQN.
10) Over-claim not supported by the results: “we see that HR-TD is able to find a representation that is better at keeping the target value separate than TC is “. The results do not show this.
11) Section 4.4 does not seem to go anywhere or produce and tangible conclusions
Things to improve the paper that did not impact the score:
0) It’s hard to follow how the prox operator is used in the development of the alg; this could use some higher-level explanation
1) Intro p2 is about bootstrapping, use that term and remove the equations
2) Its not clear why you are talking about stochastic vs deterministic in P3
3) Perhaps you should compare against a MC method in the experiments to demonstrate the problem with TD methods and generalization
4) Section 2: “can often be a regularization term” >> can or must be?
5) update law is a odd term
6)” tends to alleviate” >> odd phrase
7) section 4 should come before section 3
8) Alg 1 in not helpful because it just references an equation
9) section 4.4 is very confusing, I cannot follow the logic of the statements
10) Q learning >> Q-learning
11) Not sure what you mean with the last sentence of p2 section 5
12) where are the results for Acrobot linear function approximation
13) appendix Q-learning with linear FA is not DQN (table 2) |
ICLR | Title
HR-TD: A Regularized TD Method to Avoid Over-Generalization
Abstract
Temporal Difference learning with function approximation has been widely used recently and has led to several successful results. However, compared with the original tabular-based methods, one major drawback of temporal difference learning with neural networks and other function approximators is that they tend to over-generalize across temporally successive states, resulting in slow convergence and even instability. In this work, we propose a novel TD learning method, Hadamard product Regularized TD (HR-TD), to reduce over-generalization and thus lead to faster convergence. This approach can be easily applied to both linear and nonlinear function approximators. HR-TD is evaluated on several linear and nonlinear benchmark domains, where we show improvement in learning behavior and performance.
1 INTRODUCTION
Temporal Difference (TD) learning is one of the most important paradigms in Reinforcement Learning (Sutton and Barto, 1998). Techniques based on combining TD learning with nonlinear function approximators and stochastic gradient descent, such as deep networks, have led to significant breakthroughs in large-scale problems to which these methods can be applied (Mnih et al., 2015; Silver et al., 2016; Schulman et al., 2015).
At its heart, the TD learning update is straightforward. v(s) estimates the value of being in a state s. After an action a that transitions the agent from s to next state s′, v(s) is altered to be closer to the (discounted) estimated value of s′, v(s′) (plus any received reward, r). The difference between these estimated values is called the temporal difference error (TD error) and is typically denoted as δ. Formally, δ = r + γv(s′)− v(s), where γ is the discount factor, and r + γv(s′) is known as the TD target.
When states are represented individually (the tabular case), v(s) can be altered independently from v(s′) using the update rule v(s) ← v(s) + αδ, where α is the learning rate. In fully deterministic environments, α can be set to 1, thus causing v(s) to change all the way to the TD target. Otherwise, in a stochastic environment, α is set less than 1 so that v(s) only moves part of the way towards the TD target, thus avoiding over-generalization from a single example. When, on the other hand, states are represented with a function approximator, as is necessary in large or continuous environments, v(s) can no longer be updated independently from v(s′). Because s and s′ are likely to be similar (assuming actions have local effects), any change to v(s) is likely to also alter v(s′). While such generalization is desirable in principle, it also has the unintended consequence of changing the TD target, which in turn can cause the TD update to lead to an increase in the TD error between s and s′. This unintended consequence can be seen as a second form of over-generalization: one that can be much more difficult to avoid.
Past work has identified this form of over-generalization in RL, has observed that it is particularly relevant in methods that use neural network function approximators such as DQN (Mnih et al., 2015), and has proposed initial solutions (Durugkar and Stone, 2017; Pohlen et al., 2018). In this paper, we present a deeper analysis of the reasons for this form of over-generalization and introduce a novel learning algorithm termed HR-TD, based on the recursive proximal mapping formulation of TD learning (Bertsekas, 2011), which offers a mathematical framework for parameter regularization that allows one to control for this form of over-generalization. Empirical results across multiple
domains demonstrate that our novel algorithm learns more efficiently (from fewer samples) than prior approaches.
The rest of the paper is organized as follows. Section 2 offers a brief background on TD learning, the over-generalization problem, and optimization techniques used in the derivation of our algorithm. In Section 3, we discuss the state-of-the-art research in this direction. The motivation and the design of our algorithm are presented in Section 4. Finally, the experimental results of Section 5 validate the effectiveness of the proposed algorithm.
2 BACKGROUND
This section builds on the notation introduced in Section 1 to specify the problem of interest in full detail. We introduce the background for TD learning, over-generalization, and proximal mapping, which are instrumental in the problem formulation and algorithm design.
2.1 REINFORCEMENT LEARNING AND OVER-GENERALIZATION
Reinforcement Learning problems are generally defined as Markov Decision Processes (MDPs). We use the definition and notation as used in Sutton and Barto (2017), unless otherwise specified. In this paper, we focus on domains with large or continuous state spaces such that function approximation is needed. We define the value estimate of state s with parameter θ when following policy π as vπ(s|θ) = Eπ[Rt + γRt+1 + γ²Rt+2 + . . . | St = s]. Here Rt is the random variable associated with a reward at time t, and rt is used as an instantiation of this random variable. The optimal (true) value function v*π satisfies the Bellman equation given as v*π(s|θ) = Eπ[Rt + γ v*π(s′|θ)]. During TD learning, the estimated value function is altered to try to make this property hold. In effect, state values are updated by bootstrapping off of the estimated value of the predicted next states.
We focus on 1-step TD methods, i.e., TD(0), that bootstrap from the value of the immediate next state or states in the MDP to learn the value of the current state. The TD error δt(st, st+1|θ) to be minimized is as follows:
δt(st, st+1|θ) = (rt + γ vπ(st+1|θ)) − vπ(st|θ)

In the following, δt(st, st+1|θ) is written as δt for short. When using function approximation and gradient descent to optimize the parameters, the loss to be minimized is the squared TD error. At the t-th time-step, the objective function used in TD learning is LTD = ‖rt + γ vπ(st+1|θ) − vπ(st|θ)‖². Similarly, the optimal action value function Q satisfies the Bellman optimality equation Q*(st, at|θ) = Rt + γ maxa Q*(st+1, a|θ). The objective used in Q-learning is thus LQ = ‖rt + γ maxa Q(st+1, a|θ) − Q(st, at|θ)‖². The partial derivative of v(st|θ) or Q(st, at|θ) with respect to θ is the direction in which TD learning methods update the parameters. We use gt(st|θ) and gt(st, at|θ) to refer to these vectors. In the linear case, v(st|θ) = θt⊤φ(st), where φ(st) are the features of state st. In this case, gt(st, at|θ) is the feature vector φ(st, at), and in general gt(st, at|θ) = ∂θQ(st, at|θ). It is computed as

gt(st, at|θ) = ∂Q(st, at|θ) / ∂θ,    θ ← θ + α δt gt(st, at|θ).
We have already briefly alluded to the issue of over-generalization in Section 1. One of the reasons we use function approximation is that we want the values we learn to generalize to similar states. But one of these similar states is likely to be the target of our Bellman equation v(st+1|θ). If the weights that correspond to large or important features in φ(st+1) are strengthened, then the TD error might not decrease as much as it should, or it might even increase. We refer to parameter updates that work against the objective of reducing the TD error as over-generalization.
2.2 PROXIMAL MAPPING FORMULATION OF TD LEARNING
In this section, we introduce the basics of proximal mapping, which provide the mathematical formulation of our algorithm design. A proximal mapping (Parikh and Boyd, 2013) proxf (w) associated
with a convex function f is defined as
proxf(w) = argminx ( f(x) + (1/2)‖w − x‖₂² )    (1)
Such a proximal mapping is typically used after a parameter update step to incorporate constraints on the parameters. Intuitively, the first term f(x) provides incentive to move x in the direction that minimizes f, whereas the second term (1/2)‖w − x‖₂² provides pressure to keep x close to w. If f(x) = 0, then proxf(w) = w, the identity function. f can often be a regularization term to help incorporate prior knowledge. For example, for learning sparse representations, the case of f(x) = β‖x‖₁ is particularly important. In this case, the entry-wise proximal operator is:

proxf(w)i = sign(wi) max(|wi| − β, 0)
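As a reference, this entry-wise soft-thresholding operator can be written in a couple of NumPy lines (a minimal sketch; the test values are arbitrary):

import numpy as np

def prox_l1(w, beta):
    # Entry-wise proximal operator of f(x) = beta * ||x||_1 (soft-thresholding).
    return np.sign(w) * np.maximum(np.abs(w) - beta, 0.0)

print(prox_l1(np.array([0.5, -1.2, 0.05]), beta=0.1))   # [ 0.4 -1.1  0. ]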
Proximal methods have been shown to be useful for various reinforcement learning problems, e.g., proximal gradient TD learning (Liu et al., 2015) integrates the proximal method with gradient TD learning (Sutton et al., 2009) using the Legendre-Fenchel convex conjugate function (Boyd and Vandenberghe, 2004), and projected natural actor-critic (Thomas et al., 2013) interprets natural gradient as a special case of proximal mapping. We now introduce the recursive proximal mapping formulation of TD learning algorithm (Bertsekas, 2011). At the t-th iteration, the TD update law solves a recursive proximal mapping, i.e., θt+1 = θt + αtδtgt(st), which is equivalent to
θt+1 = argminx { ⟨x, −δt gt(st)⟩ + (1/(2αt)) ‖x − θt‖₂² }    (2)
It should be noted that Eq. (2) is different from Eq. (1) in that Eq. (1) has an explicit objective function f to optimize. Eq. (2) does not have an explicit objective function, but rather corresponds to a fixed-point equation. In fact, it has been proven that the TD update term δtgt(st) does not optimize any objective function (Maei, 2011). Discussing this in details goes beyond the scope of the paper, and we refer interested readers to (Maei, 2011; Bertsekas, 2011) for a comprehensive discussion of this topic.
3 RELATED WORK
To the best of our knowledge, the closest work to ours to address the over-generalization problem is the Temporal Consistency loss (TC-loss) method (Pohlen et al., 2018) and the constrained TD approach (Durugkar and Stone, 2017).
The TC-loss (Pohlen et al., 2018) aims to minimize the change to the target state by explicitly minimizing a separate loss that measures the change in the value of s′, i.e., L(Vθ(s′, a′) − Vθt−1(s′, a′)). When used in conjunction with a TD loss, it guarantees that the updated estimates adhere to the Bellman operator and thus are temporally consistent. However, there are some drawbacks to this method. Firstly, the asymptotic solution of the TC-loss method is different from the TD solution due to the two separate losses, and the solution property remains unclear. Secondly, each parameter component plays a different role in changing v(s′). For instance, if a component of θ is 0 or close to 0, then this component does not have any impact on changing v(s′). Different parameter components, therefore, should be treated differently according to their impact on the value function changes (or action-value changes in the case of DQN).
Another recent work in this direction is the constrained TD (CTD) algorithm (Durugkar and Stone, 2017). To avoid over-generalization among similar states, CTD uses the vector rejection technique to diminish the update along the direction of the gradient of the action-value function of the successive state. In other words, the real update is made to be orthogonal to the gradient of the next state. However, the CTD method suffers from the double-sampling problem, which is explained in detail in Appendix A. Moreover, since it mainly uses vector rejection, this method is not straightforward to extend to nonlinear function approximation, such as the DQN network, where over-generalization can be severe. Lastly, if the state representations of st and st+1 are highly similar, as in the case of visual environments like Atari games, then the vector rejection causes the update to be almost orthogonal to the computed gradient.
4 HADAMARD PRODUCT REGULARIZED TD
In this section, we analyze the reason for over-generalization and propose a novel algorithm to mitigate it.
4.1 ANALYSIS OF OVER-GENERALIZATION
Consider the update to the parameter θt as follows, with TD error δt, learning rate α and a linear function approximation v(st|θt) with features φ(st) and gradient g(st|θt) = φ(st).
θt+1 = θt + α δ(st, st+1|θt) φ(st)

If we substitute the above value for θt+1, the TD error for the same transition after the update is

δ(st, st+1|θt+1) = rt − (θt+1⊤φ(st) − γ θt+1⊤φ(st+1)) = δ(st, st+1|θt) − α δ(st, st+1|θt) (φ(st)⊤φ(st) − γ φ(st)⊤φ(st+1)),

and thus

δ(st, st+1|θt) − δ(st, st+1|θt+1) = α δ(st, st+1|θt) (φ(st)⊤φ(st) − γ φ(st)⊤φ(st+1)).

We see above that the decrease in the TD error at t depends on two factors, the inner product of the gradient with the features of st, and its inner product with the features of st+1. This decrease will be reduced if φ(st) and φ(st+1) have a large inner product. If this inner product exceeds (1/γ) φ(st)⊤φ(st), then in fact the error increases. Thus over-generalization is an effect of a large positive correlation between the update and the features of st+1, especially when contrasted with the correlation of this same update with the features of st.
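The identity above is easy to verify numerically; the following NumPy sketch uses random features and arbitrary constants purely for illustration:

import numpy as np

rng = np.random.default_rng(1)
d, alpha, gamma, r = 8, 0.1, 0.9, 0.5
phi_s, phi_s1, theta = rng.normal(size=d), rng.normal(size=d), rng.normal(size=d)

def td_error(th):
    return r + gamma * th @ phi_s1 - th @ phi_s

delta_t = td_error(theta)
theta_next = theta + alpha * delta_t * phi_s      # plain TD(0) update
lhs = delta_t - td_error(theta_next)              # actual reduction of the TD error
rhs = alpha * delta_t * (phi_s @ phi_s - gamma * phi_s @ phi_s1)
print(np.isclose(lhs, rhs))                       # True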
We are then left with the following question: what kind of weight update can maximize the reduction in δt? Merely minimizing the correlation of the update with φ(st+1) is insufficient, as it might lead to minimizing the correlation with φ(st). This is the issue that Constrained TD (Durugkar and Stone, 2017) faces with its gradient projection approach. Hence, we must also maximize its correlation with φ(st).
To examine this effect, we consider the properties of parameters that we should avoid changing, to the extent possible. Consider the linear value function approximation case: vθ(s) = φ(s)⊤θ. For example, consider st and st+1 with the features φ(st) = [0, 2, 1] and φ(st+1) = [2, 0, 1]. Then for two different weights, θ1 = [0, 0, 2] and θ2 = [1, 1, 0], we have the same value for both these parameter vectors at both st and st+1, i.e. φ(st)⊤θ1 = φ(st+1)⊤θ1 = φ(st)⊤θ2 = φ(st+1)⊤θ2 = 2. However, the results of the Hadamard product (◦) of these parameters with the feature vectors are different, i.e.
φ(st) ◦ θ1 = φ(st+1) ◦ θ1 = [0, 0, 2], φ(st) ◦ θ2 = [0, 2, 0], φ(st+1) ◦ θ2 = [2, 0, 0],
where the Hadamard products of θ1 with φ(st) and φ(st+1) are more correlated than those of θ2. An update to the last weight of θ1 will cause the values of both st and st+1 to change, but an update to the second weight of θ2 will affect only st. In fact, unless both the first and the second weights change, st and st+1 do not change simultaneously. In this sense, θ1 tends to cause aggressive generalization across the values of st and st+1, and thus the TD update to θ1 should be regularized more heavily. The Hadamard product of the weights and the features allows us to distinguish between θ1 and θ2 in this way.
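The worked example can be checked directly with the vectors above (a short NumPy sketch):

import numpy as np

phi_s  = np.array([0., 2., 1.])    # phi(s_t)
phi_s1 = np.array([2., 0., 1.])    # phi(s_{t+1})
theta1 = np.array([0., 0., 2.])
theta2 = np.array([1., 1., 0.])

# both parameter vectors give the same values at both states
print(phi_s @ theta1, phi_s1 @ theta1, phi_s @ theta2, phi_s1 @ theta2)   # 2.0 2.0 2.0 2.0
# but their Hadamard products with the features overlap very differently
print(phi_s * theta1, phi_s1 * theta1)    # [0. 0. 2.] [0. 0. 2.]  -> fully shared
print(phi_s * theta2, phi_s1 * theta2)    # [0. 2. 0.] [2. 0. 0.]  -> disjoint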
Motivated by this observation, we aim to reduce the over-generalization by controlling the weighted feature correlation between the current state g(s)◦θ and the successive state g(s′)◦θ, i.e., Corr(g(s)◦ θ, g(s′) ◦ θ).
4.2 ALGORITHM DESIGN
Given the constraint as shown above, the constrained Mean-Squares Error (MSE) is formulated as
θ* = argminθ (1/2) ‖V − vθ‖₂²,  s.t.  Corr(g(s) ◦ θ, g(s′) ◦ θ) ≤ ρ,    (3)
Algorithm 1 Hadamard product Regularized TD (HR-TD) Learning
Require: T, αt (learning rate), γ (discount factor), η (initial regularization parameter).
Ensure: Initialize θ0.
for t = 1, 2, 3, · · · , T do
  ηt = η/t
  Update θt+1 according to Eq. (5).
end for
where V is the true value function. Using the recursive proximal mapping with respect to the constrained objective function, the parameter update per-step of Eq. (3) can be written as
θt+1 = argminθ { −θ⊤(E[δt] g(st)) + (1/(2αt)) ‖θ − θt‖₂² },  s.t.  Corr(g(st) ◦ θ, g(st+1) ◦ θt) ≤ ρ.
Using Lagrangian duality, it can be reformulated as
θt+1 = argminθ { −θ⊤(E[δt] g(st)) + (1/(2αt)) ‖θ − θt‖₂² + η Corr(g(st) ◦ θ, g(st+1) ◦ θt) },
where η is the factor that weights the constraint against the objective. The closed-form solution to the weight update is
θt+1 = θt + αt ( E[δt]g(st)− η(g(st) ◦ g(st+1) ◦ θt) ) (4)
Using sample-based estimation, i.e., using gt(s) (resp. gt(s′)) to estimate g(s) (resp. g(s′)) , and using δt to estimate E[δt], the Eq. (4) becomes
θt+1 = θt + αt ( δtgt(st)− η(gt(st) ◦ gt(st+1) ◦ θt) ) (5)
In the proposed algorithm, if the component of the weights helps decrease the Hadamard product correlation, then it is not penalized. Now the algorithm for value function approximation is formulated as in Algorithm 1, and the algorithm for control is formulated in Algorithm 2.
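For the linear case, one HR-TD step of Eq. (5) amounts to a few lines of NumPy; the sketch below uses a random transition only to show the shapes involved:

import numpy as np

def hr_td_step(theta, phi_s, phi_s1, r, alpha, gamma, eta):
    # One HR-TD update with linear value functions (Eq. (5)), where g_t(s) = phi(s):
    # theta <- theta + alpha * (delta * phi(s) - eta * phi(s) o phi(s') o theta)
    delta = r + gamma * theta @ phi_s1 - theta @ phi_s
    return theta + alpha * (delta * phi_s - eta * (phi_s * phi_s1 * theta))

rng = np.random.default_rng(0)
theta = rng.normal(size=4)
theta = hr_td_step(theta, rng.normal(size=4), rng.normal(size=4),
                   r=1.0, alpha=0.05, gamma=0.99, eta=0.3)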
4.3 HADAMARD PRODUCT REGULARIZED DEEP Q NETWORK
In DQN, the value function is learned by minimizing the following squared Bellman error using SGD and backpropagating the gradients through the parameter θ
LDQN = (1/2) ‖rt + γ Q(st+1, at+1|θ′) − Q(st, at|θ)‖².    (6)
Here, θ′ are the parameter of the target network that is periodically updated to match the parameters being trained. The action at+1 is chosen as arg maxaQ(st+1, a|θ′) if we use DQN, and arg maxaQ(st+1, a|θ) if we use Double DQN (DDQN) (Van Hasselt et al., 2016). We use DDQN in experiments as DQN has been shown to over-estimate the target value.
Let φ(st|θ) be the activations of the last hidden layer before the Q-value calculation and θ−1 be the corresponding weights for this layer. The Correlation can be written as Lcorr = Corr(φ(st|θ)◦θ, φ(st+1|θ)◦θt). We do not use the target network when calculating this loss. The loss used in Hadamard regularized DDQN is then an η-weighted mixture of Eq. (6) and this loss
LHR−TD = LDQN + ηLcorr (7)
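A forward-only NumPy sketch of this combined objective is given below; in practice the value is minimized by backpropagation in whatever deep-learning framework is used, and since the paper does not spell out the exact form of Corr, the Pearson correlation used here is an assumption:

import numpy as np

def pearson(u, v, eps=1e-8):
    u, v = u - u.mean(), v - v.mean()
    return (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v) + eps)

def hr_dqn_loss(q_sa, q_next, r, phi_s, phi_s1, w_last, w_last_old, gamma, eta):
    # Eq. (6) plus eta times the correlation term of Eq. (7). phi_s / phi_s1 are
    # last-hidden-layer activations, w_last the current last-layer weights and
    # w_last_old a frozen copy of them.
    l_dqn = 0.5 * (r + gamma * q_next - q_sa) ** 2
    l_corr = pearson(phi_s * w_last, phi_s1 * w_last_old)
    return l_dqn + eta * l_corr

rng = np.random.default_rng(0)
phi_s, phi_s1, w = rng.normal(size=64), rng.normal(size=64), rng.normal(size=64)
print(hr_dqn_loss(1.2, 0.8, r=1.0, phi_s=phi_s, phi_s1=phi_s1,
                  w_last=w, w_last_old=w.copy(), gamma=0.99, eta=0.1))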
4.4 THEORETICAL ANALYSIS
In this section, we conduct some initial analysis of Algorithm 1 with linear function approximation. For simplicity, we only discuss the linear case, i.e., ∂vθ(st) = φ(st), ∂vθ(st+1) = φ(st+1). If Algorithm 1 converges, the update of the weights according to Eq. (5) should satisfy the following condition
E[δt φ(st) − η(φ(st+1) ◦ θt ◦ φ(st))] = 0.

Rewriting δt and denoting ∆φt = φ(st) − γφ(st+1), we have

E[φ(st) rt] = E[φ(st) ∆φt⊤ + ηM] θt,

where M = E[Diag(φ(st) ◦ φ(st+1))] = E[Diag(φ(st) φ⊤(st+1))]. Thus we have

E[φ(st)(∆φt)⊤ + ηM] = E[φ(st) φ⊤(st) − γ φ(st) φ⊤(st+1) + η Diag(φ(st) φ⊤(st+1))]
If we set η → γ, we observe that the second and third terms in the RHS above cancel out on the diagonal. Consider the scheme where we initialize η = γ and then reduce it over the training process. This is equivalent to slowly introducing the discount factor into the error computation. It has been shown (Prokhorov and Wunsch, 1997) that instead of the discount factor γ provided by the MDP, a user-defined time-varying γt can help accelerate the learning process of the original MDP w.r.t. γ. That work suggests using a small discount factor γt < γ in the beginning, and then increasing γt gradually to γ. HR-TD achieves a similar effect without defining a separate γt and its schedule.
5 EXPERIMENTS
We evaluate HR-TD on two classical control problems: Mountain Car and Acrobot using both linear function approximation with Fourier basis features and nonlinear function approximation using Deep Neural Networks. We verify that this algorithm scales to complex domains such as the Atari Learning Environment (Bellemare et al., 2013), by evaluating our approach on the game of Pong. We utilize OpenAI gym (Brockman et al., 2016) to interface our agent with the environments. We compare HR-TD to the baselines by using the following metrics: 1) Accumulated reward per episode. 2) Average change in the target Q value at s′ after every parameter update. For comparison, we consider Q learning and Q learning with TC loss (and DDQN for neural networks).
Based on our analysis, we expect HR-Q learning to begin improving the policy earlier in the learning process, and we expect HR-TD to be able to evaluate the value of a policy just as well as TD. We evaluate the change of the value of the next state as well, and consider whether HR-TD is able to reduce this change as a consequence of the regularization. We note, however, that this quantity is diagnostic in nature, rather than being the true objective. It would definitely be possible to minimize this quantity by making no learning updates whatsoever, but then we would also observe no learning.
5.1 EVALUATION
Before we consider the effect of HR-Q on control tasks, we compare the purely evaluative property of HR-TD. Here, we evaluate a trained policy on the Mountain Car domain. We run this experiment
Algorithm 2 Hadamard product Regularized Q (HR-Q) Network
Require: T, αt (learning rate), γ (discount factor), η (initial regularization parameter).
Ensure: Initialize θ0.
repeat
  ηt = η/t
  Choose at using a policy derived from Q (e.g., ε-greedy)
  Take at, observe rt, st+1
  Add (st, at, rt, st+1) to the Replay Buffer
  Sample a batch from the Buffer and update θt+1 using backpropagation to minimize Eq. (7)
  t ← t + 1
until training done
10 times for each method. For each experiment, the policy is executed in the environment for 10000 steps, resetting the agent to one of the start states if it terminates. We learn the value function using TD by sampling a batch of transitions from this dataset and take 10,000 learning steps per run.
The metric we compare is the MSE with the Monte Carlo estimate of the same run, taken over 300,000 transitions. The MSE value for each experiment is calculated by averaging the MSE of the last 10 training steps, to reduce sampling error. Finally, we take the mean of these errors across the 10 runs and present them in Table 1. TD and HR-TD reach roughly the same value for all the runs. TC, however, converges to a different minimum that leads to a very high MSE. This may be because the competing TD and TC objectives in this method cause the learning to destabilize. If we lower the learning rate for TC, then we avoid this behavior but also do not converge in the given max number of training steps.
5.2 NEURAL NETWORKS
We now consider the performance of HR-Q learning when using Neural Networks for function approximation. We consider two domains, Mountain Car and Acrobot, but we do not perform any basis expansion and feed the state values directly into a neural network with a single hidden layer of 64 units.
We compare the performance of HR-Q in Figures 1 and 2 with Q-learning and Q-learning with TC loss. We use DDQN (Van Hasselt et al., 2016) as the underlying algorithm for Q-learning. Details of the network and hyperparameters are in Appendix B. We take 20 independent runs, with a different seed in each run used to initialize Tensorflow, NumPy, and the OpenAI Gym environment. Each run is taken over 1000 episodes. In both these experiments, we see that HR-TD starts to learn a useful policy before either of the other techniques. It is interesting to note that in Fig. 1b, HR-TD learns a state representation that causes the target value to change less than DQN but does not restrict it as much as TC. In Fig. 2b, however, we see that HR-TD is able to find a representation that is better at keeping the target values separate than TC is. In both cases, the value function that is learned seems to quickly become useful for learning a better policy.
5.3 ATARI
We also validate the applicability of this technique to a more complex domain and a more complex network. We apply the HR-Q to DDQN on the Atari domain to verify that the technique is scalable and that the findings and trends we see in the first two experiments carry forward to this challenging task. We use the network architecture specified in Mnih et al. (2015), and the hyper-parameters for TC as specified in Pohlen et al. (2018). Experimental details are specified in Appendix B. From the results, we see that HR-TD does not interfere with learning on the complex network, and does about as well as DDQN.
5.4 LINEAR FUNCTION APPROXIMATION
Finally, to study HR-TD with linear function approximation, we look at the Mountain Car domain. We expand the feature space using Fourier basis functions (Konidaris et al., 2011). All methods are trained with an order 6 Fourier basis expansion for Mountain Car, which leads to 36 features. We use a constant learning rate α = 0.01 for all three methods. For HR-TD we initialize the regularization factor η = 0.3. Each episode is run until we receive an episode termination signal from the Gym wrapper, which happens after a maximum of 200 steps if the goal is not reached. We show the learning curves for 1000 episodes, averaged over 20 independent runs. In Figure 4, we see that HR-Q and TC perform better than Q-learning. HR-Q also shows more stable updates (it changes the value of the next state less) than Q-learning, comparable to Q-learning with the added TC loss, over the course of training.
6 CONCLUSION
In this paper, we analyze the problem of over-generalization in TD learning with function approximation. This analysis points to the potential pitfalls of over-generalization in TD-learning. Based on the analysis, we propose a novel regularization scheme based on the Hadamard product. We also show that with the right weight on the regularization, the solution of this method is the same as that of TD. Finally, we experimentally validate the effectiveness of our algorithm on benchmarks of varying complexity.
A PROBLEM WITH CTD: DOUBLE SAMPLING PROBLEM
Double sampling comes into effect whenever we need the product of two expectations. If an expression contains 3 expectations we will need three independent samples. Below we will first write out why residual gradients have a double sampling problem and why TD doesn’t. Then we shall show why CTD has this problem, and might actually suffer from a triple sampling problem. Note that the double-sampling problem only exists in stochastic MDP problems. In a Deterministic MDP, double sampling will not be an issue.
δ(s) = r(s, a, s′) + γ V(s′|θ) − V(s|θ)

L = (1/2) E[‖δ(s)‖]²

∂L/∂θ = E[δ(s)(g(s) − g(s′))]  . . . Residual Gradient
       = E[δ(s)] E[g(s) − g(s′)]

∂L/∂θ = E[δ(s) g(s)]  . . . TD update

∂L/∂θ = E[ δ(s) g(s) − (⟨g(s), g(s′)⟩ / ‖g(s′)‖₂²) g(s′) ]  . . . Constrained TD update

In the constrained TD update, the first term is the regular TD update, which has no double-sampling issues. However, the second term, −(⟨g(s), g(s′)⟩ / ‖g(s′)‖₂²) g(s′), involves s′ in multiple places and needs multiple independent samples of s′ to be estimated without bias, and thus has the double-sampling problem.
B EXPERIMENT DETAILS
B.1 LINEAR FUNCTION APPROXIMATION
Mountain Car: Basis function: Fourier basis, order 6. Max steps per episode: 200. Number of episodes: 500.
B.2 MLP-DQN
Layers: [64], Activation: ReLU, Optimizer: Adam. Replay memory size: 50000, batch size: 32. Minimum ε (for exploration): 0.01; ε is decayed over 5% of the episodes. η is decayed as ηt = η/(T+1), where T is the episode number.
B.2.1 MOUNTAIN CAR
Max steps per episode: 200, Number of episodes: 1000
B.2.2 ACROBOT
Max steps per episode: 500, Number of episodes: 200
B.3 DQN FOR ATARI
Network: DQN architecture from Mnih et al. (2015), Optimizer: Adam. Replay memory size: 100000. Minimum ε: 0.01; ε is decayed over 5% of the frames. Training frames: 10M game frames, fed 4 at a time to the network (2.5M agent steps). η is decayed as ηt = η/(T+1), where T is the integer value of t/5000 and t is the iteration number.
C POLICY EVALUATION LEARNING CURVES | 1. What is the main contribution of the paper, and how does it address the problem of 'over-generalization' in conventional TD algorithms?
2. What are the weaknesses of the paper regarding its characterization of the problem and the claim of theoretical analysis?
3. How convincing are the empirical results presented in the paper, and what are the issues with their presentation?
4. How might simple experiments tease apart the effects of HR-TD's bias toward small weights and its performance compared to regular TD and a standard bias toward small weights?
5. Can you suggest any additional experiments that could help establish the effectiveness and limitations of HR-TD? | Review | Review
The paper introduces HR-TD, a variation of the TD(0) algorithm. The variant is meant to ameliorate a problem of ‘over-generalization’ with conventional TD. This problem is briefly characterized, but primarily it is presumed to be established by prior work. The algorithm is simple and a series of experiments are presented with it applied to Mountain Car, Acrobot, and Atari Pong, with both linear function approximation and neural networks (DDQN). It is claimed that the results establish HR-TD as an improvement over TD. However, I found the results unconvincing because they were statistically insufficient, methodologically flawed, and too poorly presented for me to be confident of the meaning of numbers reported. In addition, it is not hard to imagine very simple problems where the HR-TD technique would be counterproductive, and these cases were not included in the experimental testbeds.
The first weakness of the paper is with its characterization of the problem that it seeks to solve: over-generalization. This problem is never really characterized in this paper. It instead refers instead to two other papers, one published only in a symposium and the other with no publication venue identified.
The second weakness of the paper is the claim that it has done a theoretical analysis in Section 4.4. I don’t see how this section establishes anything of importance about the new method.
The problem with the main results, the empirical results, is that they do not come close to being persuasive. There are many problems, beginning with there simply not being clear. I read and reread the paragraphs in Section 5.1, but I cannot see a clear statement of what these numbers are. Whatever they are, to assess differences between them would require a statistical statement, and there is none given. Moreover to give such a statistical statement would require saying something about the spread of the results, such as the empirical variance, but none is given. And to say something about the variance one would need substantially more than 10 runs per algorithm. Finally, there is the essential issue of parameter settings. With just one number given for each algorithm, there are no results or no statement about what happens as the parameters are varied. Any one of these problems could render the results meaningless; together they surely are.
These problems become even greater in the larger problems.
A nice property of HR-TD is that it is simple. Based on that simplicity we can understand it as being similar to a bias toward small weights. Such a bias could be helpful on some problems, possibly on all of those considered here. In general it is not clear that such a bias is a good idea, and regular TD does not have it. Further, HR-TD does not do exactly a bias to small weights, but something more complicated. All of these things need to be teased apart in careful experiments. I recommend small simple ones.
How about a simple chain of states that are passed through reliably in sequence leading to a terminal state with a reward of 1000 (and all the other rewards 0). Suppose all the states have the same feature representation. If gamma=1, then all states have value 1000, and TD will easily learn and stick at this value even for large alpha, but HR-TD will have a large bias toward 0, and the values will converge to something significantly less than the true value of 1000.
That would be an interesting experiment to do. Also good would be to compare HR-TD to a standard bias toward small weights to see if that is sufficient to explain the performance differences. |
ICLR | Title
Improve Novel Class Generalization By Adaptive Feature Distribution for Few-Shot Learning
Abstract
In this work, we focus on improving the novel class generalization of few-shot learning. By addressing the difference between the feature distributions of base and novel classes, we propose the adaptive feature distribution method, which fine-tunes one scale vector using the support set of the novel classes. The scale vector is applied to the normalized feature distribution, and by using one scale vector to reshape the feature space manifold, we obtain consistent performance improvements for both in-domain and cross-domain evaluations. By simply fine-tuning one scale vector using 5 images, we observe a 2.23% performance boost on 5-way 1-shot cross-domain evaluation with CUB, averaged over 2000 episodes. This approach is simple yet effective: by fine-tuning just a single scale vector, we reduce the number of fine-tuned parameters while still obtaining generalization ability for few-shot learning. We achieve state-of-the-art performance on mini-Imagenet, tiered-Imagenet as well as cross-domain evaluation on CUB.
1 INTRODUCTION
With the plethora of available large-scale data, deep learning has achieved significant advancements. However, multiple factors, such as high labelling costs, scarce availability of classes of interest, or the expense of requiring experts for label generation, limit the applicability of large-scale data. To address this challenge, the problem of few-shot learning was formulated, which has received considerable attention in recent years Vinyals et al. (2016); Snell et al. (2017); Finn et al. (2017); Ravi & Larochelle (2016); Hariharan & Girshick (2017).
For a supervised learning problem with data set (x1, y1), . . . , (xn, yn) (xi ∈ X the feature space, yi ∈ Y the label space) and a hypothesis class h(·;w), we want to minimize l(h(x;w), y) on new samples. Under the assumption that training samples and test samples are drawn i.i.d. from the same unknown distribution D over X × Y, the problem is optimized via Empirical Risk Minimization (ERM). For multi-class classification with a deep neural network, the hypothesis class can be divided into two parts: the feature extractor Fθ(xi) parameterized by θ, and the classifier C(· | w) for a given class weight vector w. Basically, to achieve good classification performance over a large-scale dataset, h(·;w) is expected to be highly invariant, and this property equips the feature extractor Fθ(xi) with good feature invariance with respect to variations that are generally present in the objects, such as shape and lighting.
Few-shot learning poses a great challenge, as the estimation of the distribution is hard to achieve with a few samples. Meta-learning methods for few-shot learning pursue the direction of adapting a hypothesis class with few samples, which directly back-propagates the loss between the testing set and the h(·;w) proposed with the training set. The recent work Meta-Baseline (Chen et al., 2020) proposed to conduct meta-training with a feature extractor pre-trained on base classes, which leads to a large-margin performance improvement for meta-training. Moreover, they observe that during the meta-training stage, models become better generalized on the base classes while evaluation performance on novel classes largely drops.
Novel class generalization, defined as the evaluation performance on novel classes following Chen et al. (2020), is essential for bringing few-shot learning into practice. Training of few-shot learning algorithms is conducted on base classes, which are relatively large-scale in the sense of having plenty of classes with hundreds of images each. Methods in metric-based learning Chen et al. (2019); Gidaris & Komodakis (2019); Wang et al. (2018); Gidaris & Komodakis (2018) and meta-learning Chen et al. (2020) show that training in this way benefits the capture of large variations, which is crucial for discriminative features. However, as the feature extractor is trained on base classes under maximum likelihood, the features are also trained to be invariant for discriminating these base classes, as shown in Fig. 2. Evaluation on novel classes then suffers from the feature distribution difference between base and novel classes, and a domain gap between base and novel classes can further enlarge this difference. Objects (or images) in different domains carry different aspects of information, which leads to different discriminative features or features shared among categories.
Attempts at improving novel class generalization include the fine-tuning method proposed in Chen et al. (2019). In Chen et al. (2019), they proposed to fine-tune the novel class weights using the support set of the novel classes, with competitive results. However, if the feature distribution of the novel classes suffers from scattering, then even with plenty of data, fine-tuning the novel class weights without any optimization on the feature side is not promising for finding a good decision boundary, not to mention with only a few samples.
In our work, we propose the adaptive feature distribution to improve novel class generalization. Following the idea of fine-tuning with a handful of samples, we first apply a non-parametric distance to construct the hypothesis class, and then, by fine-tuning only a scale vector applied on the normalized feature distribution, we achieve the effect of an adaptive feature distribution on novel classes.
Our Contributions: 1) We address the importance of further understanding the feature distribution of novel classes. Using the DB-index, which measures the quality of the feature distribution of novel classes, to select feature extractors, we observe a consistent performance boost on all three evaluation datasets. We believe that introducing analysis of feature distributions and clustering quality of novel classes is informative to the community. 2) We propose to improve novel class generalization by adapting the feature distribution of novel classes. By fine-tuning only one scale vector using the support sets of novel classes, we showcase the strong generalization of this method, especially on cross-domain evaluations. We achieve state-of-the-art performance on mini-Imagenet, tiered-Imagenet, as well as cross-domain evaluation on CUB. 3) This approach is simple yet effective. By fine-tuning just a single scale vector, we reduce the number of fine-tuned parameters while still obtaining generalization ability for few-shot learning.
2 PRIOR ART
There have been many approaches to few-shot learning explored recently, namely fast-adaptation methods Finn et al. (2017); Rusu et al. (2018); Sun et al. (2019); Chen et al. (2020), model-optimization based methods Ravi & Larochelle (2016), metric-learning based methods Vinyals et al. (2016); Snell et al. (2017); Ren et al. (2018); Sung et al. (2018); Guo & Cheung (2020); Li et al. (2020), and methods which use ridge regression and support vector machines Bertinetto et al. (2018); Lee et al. (2019). There have also been studies focusing on discovering projective feature embeddings Simon et al. (2018; 2020). Recently, a few studies utilized a variety of techniques in machine learning for few-shot classification. Techniques like self-supervised training, self-training through semi-supervised learning, and model ensembles showed improved results when applied to the few-shot learning problem Gidaris et al. (2019); Dvornik et al. (2019); Li et al. (2019b). Modules were also invented to enhance feature discrimination Li et al. (2019a); Hou et al. (2019). Recent approaches have also explored combinations with Graph Neural Networks Garcia & Bruna (2017); Kim et al. (2019).
3 ADAPTIVE FEATURE DISTRIBUTION: FINE-TUNING THE SCALE OF FEATURE DISTRIBUTION ON NOVEL CLASSES
In this section, we introduce how we realize the adaptive feature distribution with a learnable scale vector, and the effect on the novel class feature space of fine-tuning only the scale vector with a few samples.
The Few-Shot Problem Formulation. Evaluation datasets in few-shot learning are separated into base, validation, and test classes. The base classes, which are used in training, involve a relatively large number of labelled training samples, while the validation and test classes are treated as novel classes, used for validation and testing purposes respectively. In a few-shot learning scenario, one episode is defined as K-way N-shot learning, where K is the number of classes and N is the number of training images (the support set): K classes are first sampled from the novel classes, and then N support samples, as well as the query set (samples used for evaluating the episode performance), are sampled within each of the K classes. For one K-way N-shot episode, we use Sk and Qk to denote the support and query set of novel class k ∈ K accordingly. We use a pre-trained feature extractor Fθ to extract features and write fi = Fθ(xi) for the feature of xi. We first add a feature normalization layer with a scale vector s:
f̄i = (fi − µ)/σ ∗ s    (1)

where µ = (1/N) Σi fi and σ² = (1/N) Σi (fi − µ)².
In this layer, features from all training samples are first normalized so that the values of every element of the feature vectors follow a normal distribution. A scale vector s is then multiplied with the normalized feature. s serves as the "adaptive" part: by tuning the value of s, we scale the normalized feature distribution. s is flexible in the sense that every element of s scales up or down the corresponding element of the features, and this in general reshapes the feature space manifold. Then, by fine-tuning s with a classification loss on novel classes, we expect the reshaped feature space manifold to quickly adapt the features for the novel classes, especially in cross-domain cases.
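A minimal NumPy sketch of this layer is given below; computing µ and σ per feature dimension over the support-set features is an assumption, since the text does not pin down exactly which population the statistics are taken over:

import numpy as np

def scaled_normalize(F, s, eps=1e-8):
    # Eq. (1): normalize the features F (N x d) per dimension, then multiply
    # element-wise by the learnable scale vector s (length d).
    mu = F.mean(axis=0)
    sigma = F.std(axis=0) + eps
    return (F - mu) / sigma * s

rng = np.random.default_rng(0)
F = rng.normal(size=(25, 512))     # e.g. features of a 5-way 5-shot support set
s = np.ones(512)                   # scale vector, initialized to 1 and then fine-tuned
F_bar = scaled_normalize(F, s)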
In the fine-tuning stage, we first construct our evaluation metric in a non-parametric way. We use the average feature of the support set Sk as the class weight wk with a softmax loss:
Lf = −(1/N) Σ_{i=1}^{N} log Pyi = −(1/N) Σ_{i=1}^{N} log [ exp(zyi) / Σ_{k=1}^{K} exp(zk) ]    (2)

where zj = wj⊤ · f̄i = f̄i⊤ · (1/N) Σ_{x∈Sj} f̄x    (3)
By using this non-parametric metric, we decrease the number of parameters to be trained in the fine-tuning stage while still following maximum likelihood estimation to predict the probability p(y|x). This also allows the flexibility of fine-tuning the feature space with an adaptive feature distribution. We analyze the gradient flow in the fine-tuning stage in the following.
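For reference, a small NumPy sketch of this non-parametric loss on a support set (random features stand in for the extractor outputs):

import numpy as np

def episode_loss(F_bar, labels, K):
    # Eqs. (2)-(3): class weights are the mean normalized support features per class,
    # logits are dot products with those weights, and the loss is softmax cross-entropy.
    W = np.stack([F_bar[labels == k].mean(axis=0) for k in range(K)])    # (K, d)
    Z = F_bar @ W.T                                                       # (N, K)
    Z = Z - Z.max(axis=1, keepdims=True)                                  # numerical stability
    P = np.exp(Z) / np.exp(Z).sum(axis=1, keepdims=True)
    return -np.mean(np.log(P[np.arange(len(labels)), labels] + 1e-12))

rng = np.random.default_rng(0)
F_bar = rng.normal(size=(25, 64))
labels = np.repeat(np.arange(5), 5)    # a 5-way 5-shot support set
print(episode_loss(F_bar, labels, K=5))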
The derivative of zj with respect to f̄i is:

∂zj/∂f̄i = wj    (4)
For an input xi, the derivative of Lf with respect to zj is:

∂Lf/∂zj = Pj − 1 if j = yi,  and  Pj if j ≠ yi    (5)

∂Lf/∂f̄i = (Pyi − 1) wyi + Σ_{j≠yi} Pj wj    (6)
Meanwhile, as s is element-wise multiplied with fi, the gradient at location c of s is (we omit the location index c to simplify the notation):
∂f̄i/∂s = (fi − µ)/σ    (7)
Then we have the gradient for s at any location on s with sample xi as:
∇s = (fi − µ)/σ [(Pyi − 1) wyi + Σ_{j≠yi} Pj wj]    (8)
When fine-tuning using only the K-way N-shot samples, Pyi ≃ 1 (for the 1-shot case, Pyi = 1), so the gradient during training can be approximated as:
∇s = (fi − µ)/σ Σ_{j≠yi} Pj wj = (fi − µ)/σ Σ_{j≠yi} [Pj Σ_{x∈Sj} f̄x]    (9)
To simplify the notation, we use the gradient for 1-shot case to conduct further discussion, which is:
∇s = (fi − µ)/σ Σ_{j≠yi} Pj (fj − µ)/σ    (10)
By applying gradient descent, we have s = s − ∇s. To give a direct impression of how this fine-tuning changes the feature space manifold, we describe the change in s brought by gradient descent intuitively. First of all, the normalization of f ensures that the values are "soft bounded", which avoids extreme values in the gradient. At locations where the elements encode "common" information, the values of fi and fj are similar; in the opposite way, at locations where the elements encode "discriminative" information, the values of fi and fj are largely different. In the latter case, ∇s could be relatively large or negative, which leads to scaling up the feature distribution at those locations, and the difference between features is then further enlarged correspondingly. The manifold of the feature space will thus quickly adapt to a shape where the distinguishing parts are enlarged.
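The per-sample gradient of Eq. (8) and a single descent step on s can be written compactly; in the sketch below the statistics, class weights, and probabilities are placeholder values, and the step size lr is an assumed hyper-parameter:

import numpy as np

def grad_s(f_i, y_i, mu, sigma, W, P):
    # Eq. (8): gradient of the loss w.r.t. the scale vector s for a single sample x_i,
    # given class weights W (K x d) and predicted probabilities P (length K).
    back = (P[y_i] - 1.0) * W[y_i] + sum(P[j] * W[j] for j in range(len(P)) if j != y_i)
    return (f_i - mu) / sigma * back

rng = np.random.default_rng(0)
d, K, lr = 16, 5, 0.1
f_i, mu, sigma = rng.normal(size=d), np.zeros(d), np.ones(d)
W, P = rng.normal(size=(K, d)), np.full(K, 0.2)
s = np.ones(d)
s = s - lr * grad_s(f_i, y_i=0, mu=mu, sigma=sigma, W=W, P=P)   # one descent step on s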
4 OVERALL FRAMEWORK
In this section, we introduce the overall framework that we conduct the few-shot classification problem.
4.1 TRAINING CLASSIFICATION ON BASE CLASSES
The model Fθ(x) is trained on the base classes Kbase. To obtain better feature invariance, we use the l2-normalized Softmax Chen et al. (2019); Ranjan et al. (2017); Wang et al. (2017); Qi et al. (2018) with cross-entropy loss, which applies the softmax under the constraints ‖wyi‖₂² = 1 and ‖Fθ(xi)‖₂² = 1:
LSM = −(1/Nbase) Σ_{i=1}^{Nbase} log [ exp(S cos(wyi⊤, Fθ(xi))) / Σ_{k=1}^{Kbase} exp(S cos(wk⊤, Fθ(xi))) ]    (11)
4.2 EVALUATION ON NOVEL CLASSES
Given a K-way N-shot episode of few-shot classification, for each class k ∈ K we have a support set Sk = {(x1, y1), . . . , (xN, yN)} and a query set Qk = {(x1, y1), . . . , (xM, yM)}. With the pre-trained feature extractor Fθ(x), we follow the same cosine-distance metric as in Eq. (11) when evaluating on novel classes, and the novel class weight wk is the average feature of the support set Sk Qi et al. (2018); Chen et al. (2020):
w_k = \frac{1}{N} \sum_{x \in S_k} F_\theta(x) \qquad (12)
The predicted probability that x ∈ Q belongs to class k is:
p(y = k \mid x) = \frac{\exp\big(\cos(w_k, F_\theta(x))\big)}{\sum_{j=1}^{K} \exp\big(\cos(w_j, F_\theta(x))\big)} \qquad (13)
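A minimal sketch of this evaluation procedure (Eqs. (12)–(13)) is given below; function and variable names are ours.

import torch
import torch.nn.functional as F

def cosine_prototype_predict(support_feats, support_labels, query_feats, num_classes):
    """Eqs. (12)-(13): prototypes are mean support features; queries are scored by a cosine softmax."""
    prototypes = torch.stack([support_feats[support_labels == k].mean(dim=0)     # w_k of Eq. (12)
                              for k in range(num_classes)])
    sims = F.normalize(query_feats, dim=1) @ F.normalize(prototypes, dim=1).t()  # cos(w_k, F_theta(x))
    return torch.softmax(sims, dim=1)                                            # Eq. (13)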
4.3 FINE-TUNING SCALE VECTOR ON NORMALIZED FEATURE DISTRIBUTION.
For the fine-tuning part, we conduct experiments with and without data augmentation separately. With data augmentation, when we construct the non-parametric evaluation metric in Equation 2, the average feature used as the novel class weight is computed from samples without data augmentation, while the features fed into the evaluation metric come from samples after data augmentation. In this way, we ensure minimal change of the novel class prototype (class weight) and maximal sample variation around the class prototype. Fine-tuning without data augmentation follows the methodology in Section 3. A sketch of the augmented variant is given below.
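A brief sketch of how this asymmetric use of augmentation could be arranged is shown below; extractor and augment are assumed callables (the frozen F_θ and an image-level augmentation), and the sketch reuses scaled_normalized_features from Section 3.

import torch
import torch.nn.functional as F

def augmented_finetune_loss(extractor, support_images, support_labels, s, num_classes, augment):
    """One loss evaluation with clean prototypes and augmented inputs; only s carries gradients."""
    with torch.no_grad():
        clean_feats = extractor(support_images)          # prototypes come from non-augmented samples
        aug_feats = extractor(augment(support_images))   # loss inputs come from augmented samples
    clean_bar = scaled_normalized_features(clean_feats, s)
    aug_bar = scaled_normalized_features(aug_feats, s)
    prototypes = torch.stack([clean_bar[support_labels == k].mean(dim=0)
                              for k in range(num_classes)])
    logits = aug_bar @ prototypes.t()
    return F.cross_entropy(logits, support_labels)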
4.4 MODEL SELECTION FOR NOVEL CLASSES.
After training the classification on base classes, we must select which model to use as the feature extractor for novel classes. The feature extractor with the best classification accuracy, or from the latest epochs, may not be a good choice. To obtain high classification accuracy, features trained by supervised classification at a later stage of training may "overfit" to the seen classes. In other words, features are projected precisely onto the directions of the class weight vectors in order to achieve high classification accuracy. Using such models as the feature extractor for novel classes, the features of novel classes can be scattered across the directions of the base classes, which indeed enlarges the spread of the feature distribution. Using few-shot performance on the validation set could be one choice; however, as we aim at an adaptive feature distribution, we instead consider model selection from the perspective of measuring the quality of the feature distribution. We use the DB-index Davies & Bouldin (1979) as the criterion for model selection, which evaluates clustering quality by considering the separation between clusters and the tightness within clusters. Interestingly, we find that models with a lower DB-index are generally those around the epoch right after the first learning-rate decrease. In our experiments, models with a lower DB-index on the validation set are selected. A sketch of this criterion is given below.
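As a concrete illustration of this selection criterion (our sketch, not the authors' exact script), the Davies–Bouldin index over validation-class features can be computed with scikit-learn, and the checkpoint with the lowest value is kept:

from sklearn.metrics import davies_bouldin_score

def select_checkpoint_by_db_index(checkpoint_features):
    """Pick the checkpoint whose validation-class features yield the lowest Davies-Bouldin index."""
    # checkpoint_features maps a checkpoint name to (features of shape (n, d), labels of shape (n,)).
    scores = {name: davies_bouldin_score(feats, labels)   # lower = tighter, better-separated clusters
              for name, (feats, labels) in checkpoint_features.items()}
    return min(scores, key=scores.get)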
5 EXPERIMENTAL VALIDATION
We evaluate our adaptive feature distribution method in both the in-domain and the cross-domain case. The in-domain case is defined as base and novel classes coming from the same dataset, while the cross-domain case refers to the situation where base and novel classes come from different datasets, which generally exhibit a domain difference.
5.1 EVALUATION DATASETS AND IMPLEMENTATION DETAILS
5.1.1 EVALUATION DATASETS
Dataset 1: mini-ImageNet Vinyals et al. (2016) is a standard benchmark for few-shot image classification, which consists of 100 randomly chosen classes from ILSVRC-2012 Russakovsky et al. (2015). These classes are randomly split into 64, 16 and 20 classes for the meta-training, meta-validation and meta-test sets respectively. Each class contains 600 images of size 84 × 84. We use the common split used in Lee et al. (2019). Dataset 2: tiered-ImageNet Ren et al. (2018) is a larger subset of ILSVRC-2012 Russakovsky et al. (2015), composed of 608 classes which are split into meta-training, meta-validation and meta-test sets with 351, 97 and 160 classes respectively. All images are of size 84 × 84. Dataset 3: CUB-200-2011 Wah et al. (2011) contains 200 classes and 11,788 images in total. Following the evaluation protocol of Hilliard et al. (2018), the dataset is split into 100 base, 50 validation and 50 novel classes. We use the same splits as Chen et al. (2019) for testing. This dataset serves as the test set for the cross-domain evaluation.
5.1.2 ARCHITECTURE AND TRAINING DETAILS
Baseline Network Architecture. We utilize the ResNet-12 network architecture following Lee et al. (2019) to train the baseline and backbone classification model. In contrast to Lee et al. (2019), however, we use global average pooling after the last residual block, after which the feature length becomes 640, and the feature layer is followed by a 1-d batch-norm layer without affine parameters.
Training hyperparameters. All networks were trained with SGD with Nesterov momentum of 0.9 and weight decay of 5 × 10−4. The initial learning rate is set to 0.1 and is decreased by a factor of 10 every 50 epochs, for a total of 150 epochs. The batch size was kept at 256. Data augmentation was applied for the baseline classification following Lee et al. (2019), which included horizontal flip, random crop, and color (brightness, contrast, and saturation) jitter.
Fine Tuning on the Novel Class Support Set. We fine-tune on the novel-class support set using Adam with learning rate 5 × 10−3, back-propagating the gradient from the whole batch. Early stopping is crucial here: we fine-tune for 3 epochs (1-shot) and 5 epochs (5-shot) in cross-domain cases, and 3 epochs for both 1-shot and 5-shot in in-domain cases. The scale vector is initialized to 1.
In our experiments, Baseline refers to the pre-trained feature extractor from the last training epoch; Baseline* refers to the pre-trained feature extractor selected using the density-based clustering index (DB-index). We use Baseline* as the feature extractor for all our fine-tuning experiments.
5.2 COMPARING PERFORMANCE ON IN-DOMAIN AND CROSS-DOMAIN CASES
Performance of model selection is consistent. We observe that using the DB-index to select feature extractors gains consistent performance improvements on all three evaluation datasets. This is a good sign for studying feature transferability from the perspective of the feature distribution.
AFD Shows Improvement on In-Domain Evaluations. As shown in Table 1, AFD improves the 5-shot performance by 0.22% and 0.2% on miniImagenet and tieredImagenet respectively. One thing to notice is that the performance of our Baseline* already surpasses that of most prior works; AFD still yields a performance improvement even on top of this strong feature extractor.
AFD shows superior generalization to cross-domain evaluations. As shown in Table 2, by simply fine-tuning with 5 images for the 1-shot case and 25 images for the 5-shot case, we observe 1.73% and 1.08% performance improvements over statistics from 2000 episodes.
5.3 ABLATION STUDIES ON FINE-TUNING
The results of the ablation studies are shown in Table 3.
Effects of Applying Data Augmentation during Fine-tuning: The major obstacle in few-shot learning is the lack of samples, which are essential for improving novel class generalization. Although we only train for 3 epochs, the effects of data augmentation are still noticeable. Only in the 1-shot case with the miniImagenet-trained feature extractor is the performance worse than without data augmentation. A possible reason is that this feature extractor is trained on relatively little data, so the extracted features are less stable to optimize, which is aggravated when adding data augmentation with only 5 training samples. Otherwise, we observe performance improvements of 0.47% for 5-shot with the miniImagenet-trained feature extractor and 0.18% and 0.65% for 1-shot and 5-shot with the tieredImagenet-trained feature extractor. As we only use a basic data augmentation strategy, further work exploring effective data augmentation for fine-tuning on novel classes is promising.
Effects of Different Feature Extractors: First, by using a feature extractor trained on a larger dataset, the performance on cross-domain cases improves considerably, which indicates the importance of a good feature embedding. For AFD, the performance improvements on 1-shot are 1.73% and 1.08% for the miniImagenet and tieredImagenet models, and on 5-shot are 0.51% and 0.27% respectively. This illustrates that AFD is able to quickly adapt features, especially when the quality of the feature embedding is limited. At the same time, a better feature extractor allows larger gains from data augmentation in AFD, as discussed above.
Effects of Different Metrics: We compare the results of using different metrics (dot-product, and cosine with a scale factor Wang et al. (2017)) in our non-parametric evaluation for fine-tuning. The performance is almost the same. Different metrics affect how well the predicted probability can be pushed toward the target, and in our case, as illustrated in Section 3, the predicted probability is already around 1. Different metrics therefore have a similar effect of adapting the features for novel classes.
The Importance of Fine-tuning Features: We compare AFD with only fine-tuning the novel class weights. For fine-tuning the novel class weights, we use the average features from the support set as the weight initialization, and the hyper-parameter settings are the same as mentioned above. We observe a performance drop when only fine-tuning the novel class weights: 0.66% and 0.92% for 1-shot and 5-shot with the mini-ImageNet-trained feature extractor, and 0.42% and 0.07% for 1-shot and 5-shot with the tiered-ImageNet-trained feature extractor. In cross-domain cases, the features of novel classes are not well discriminated and not tightly clustered within each class. Since the features themselves are not optimized, only fine-tuning the novel class weights, which depend linearly on the features, actually hurts performance. This illustrates the importance of adapting the features of novel classes. With AFD, we obtain consistent and substantial performance improvements: 1.73% and 0.51% for 1-shot and 5-shot with the mini-ImageNet-trained feature extractor, and 1.08% and 0.27% for 1-shot and 5-shot with the tiered-ImageNet-trained feature extractor. This showcases the strong effect of AFD in cross-domain cases, given the simplicity of the method.
6 CONCLUSION
We propose fine-tuning an adaptive feature distribution to improve novel class generalization for few-shot learning. The performance improvements on both in-domain and cross-domain evaluations showcase the strong generalization brought by this simple yet effective method. With the proposed AFD method, we also highlight the importance of further understanding and analyzing the feature distribution of novel classes. | 1. What is the focus of the paper regarding few-shot learning?
2. What are the strengths of the proposed approach, particularly in terms of simplicity and performance improvement?
3. What are the weaknesses of the paper, such as the lack of comparison with whole feature fine-tuning and unclear application of DB-index measure?
4. Do you have any questions regarding the visualization of feature distribution and the effect of the normalization layer?
5. Are there any minor issues in the paper, such as incorrect terminology or unclear equations? | Review | Review
This paper focuses on improving novel class generalization of few-shot learning. The paper follows the meta-baseline proposed by Chen et al. (2020), which first learns a feature extractor on base classes and then adapts the classifier with the support set of novel categories. While the meta-baseline fixes the pre-trained features, the proposed Adaptive Feature Distribution (AFD) learns an element-wise scale of the pre-trained features using the support sets of novel classes. Because only a single scale vector is fine-tuned, the number of additional parameters is low while obtaining high generalization ability for few-shot learning. The proposed method is effective. However, the paper has a large room for improvement.
Pros
The proposed method is simple; it only adds a scale vector for improving the feature vector.
The model selection with DB-index improves performances largely.
The performance improvements by AFD on the CUB dataset are relatively high.
Cons
The proposed method is not compared with the case where the whole feature extractor is fine-tuned. Updating only the feature scale would be justified if it outperforms this case.
How the DB-index measure is applied for model selection is not clear. Clustering is not conducted in the proposed method.
Whether the DB-index measure selects the best epoch is unknown. A graph of accuracies over different epochs, with the epoch selected by the DB-index marked, would help to understand this.
Figure 2 shows that the feature distribution of novel classes is not separated. How AFD improves this distribution should also be visualized.
The performance improvements by AFD on mini-ImageNet and tiered-ImageNet are low. The improvements in 1-shot are only -0.03% and 0.04%, respectively, for these datasets.
It is not clear what the dot-product means in Table 3. AFD uses this component, but it does not seem to be explained earlier.
The effect of the normalization layer in Eq. (2) could be examined in an ablation study.
Minor problem
P1. Abstract “reducing number of parameters”; however, the proposed method adds parameters compared with fixed pre-trained features.
P3. “subtract” features seems to be “extract” features
μ and σ seem to be vectors. They should be bold.
P.3 “feature are regularized by following the normal distribution”. Normalizing the feature distribution to zero mean and unit standard deviation is “standardization,” not “regularization”.
P5: Eqs. (11), (13). exp should be exp( ). What is S in Eq. (11)?
1. What is the focus of the paper regarding few-shot learning classification?
2. What are the strengths of the proposed approach, particularly in adapting features on new categories?
3. What are the weaknesses of the paper, especially in terms of writing quality and experiment comparisons?
4. Do you have any concerns regarding the consideration of cross-domain datasets?
5. How would you assess the clarity, quality, novelty, and reproducibility of the paper's content? | Review | Review
Summary
This paper proposes a simple method for fine-tuning few-shot classification. The proposed method, AFD, is built on top of metric-based methods and consists of modulating the (normalized) features at test time using the support set. The authors report results on what they call the "in-domain" case (miniImageNet and tieredImageNet) and the "cross-domain" case (evaluating on the CUB dataset).
Strengths
The idea of adapting features on new categories on the fly is interesting and promising for few-shot learning (as well as standard supervised training). Instead of adapting all layers, the proposed approach only learns a scale vector (with, obviously, far fewer parameters) that modulates the feature distribution of each novel class.
Weakness/ Comments
This paper is not very well written and is difficult to follow. It contains more typos/grammatical errors than one can list here, which makes the paper difficult to understand. I would highly recommend the authors fully rewrite the paper.
Moreover, many current methods (that outperform the proposed approach) are missing from the experimental section. For example, Dhillon et al. (ICLR20) propose a somewhat similar idea and achieve better performance. Other methods like Hou et al. (NeurIPS19) and Rodriguez et al. (ECCV20) also outperform the proposed approach but are not mentioned in this manuscript.
I don't really understand why the authors consider experiments on CUB as cross-domain. Is it because the model was trained on ImageNet and evaluated on CUB? Still, all the new images do come from the same domain. A much more interesting dataset to try the model on would be Meta-Dataset (Triantafillou et al., ICLR20).
A better ablation to understand why and when the proposed method works would be helpful.
Rating
Based on the above, I rate this paper as a Strong Reject. |
ICLR | Title
Improve Novel Class Generalization By Adaptive Feature Distribution for Few-Shot Learning
Abstract
In this work, we focus on improving the novel class generalization of few-shot learning. By addressing the difference between feature distributions of base and novel classes, we propose the adaptive feature distribution method which is to finetune one scale vector using the support set of novel classes. The scale vector is applied on the normalized feature distribution and by using one scale vector to reshape the feature space manifold, we obtain consistent performance improvement for both in-domain and cross-domain evaluations. By simply finetuning one scale vector using 5 images, we observe a 2.23% performance boost on 5-way 1-shot cross-domain evaluation with CUB over statistics results of 2000 episodes. This approach is simple yet effective. By just finetuning a single scale vector we provide a solution of reducing number of parameters while still obtain generalization ability for few-shot learning. We achieve the state-of-the-art performance on mini-Imagenet, tiered-Imagenet as well as cross-domain evaluation on CUB.
1 INTRODUCTION
With the plethora of available large-scale data, deep learning has achieved significant advancements. However multiple factors such as high labelling costs, scarce availability of classes of interest or the expensive need for experts for label generation set limits of applying large-scale data. To address this challenge, the problem of few-shot learning was formulated which has received considerable attention in recent years Vinyals et al. (2016); Snell et al. (2017); Finn et al. (2017); Ravi & Larochelle (2016); Hariharan & Girshick (2017).
For a supervised learning problem with data set (x1, y1), ...(xn, yn)(xi ∈ X feature space, yi ∈ Y label space), by using the hypothesis class(h(.;w)), we want to minimize l(h(x;w), y) on new samples. With the assumption that training samples and test samples are i.i.d from the same unknown distribution D over X ×Y , the problem is optimized over the Empirical Risk Minimization(ERM). For multi-class classification with deep neural network, the hypothesis class related to the scenario can be divided into two functionalities: the feature extractor Fθ(xi) parameterized by θ, the classifier C(· | w) for a given class weight vector w. Basically to achieve a good classification performance over the large-scale dataset, h(.;w) is expected to be highly invariant and this property empowers the feature extractor Fθ(xi) with good feature invariance ability if we consider variations that are generally in the objects such as shapes, lights and etc.
Few-shot learning proposes a great challenge as the estimation of the distribution is hard to achieve with a few samples. Meta-learning methods on few-shot learning lead a direction of adapting to a hypothesis class with few samples, which directly back-propagates the loss between testing set with the h(.;w) proposed with the training set. Recent work meta-Baseline Chen et al. (2020)proposed to conducts meta-training with a pre-trained feature extractor on base classes which leads to a large-margin performance improvement of meta-training. Moreover, they observe that during meta-training stage, models are better generalized on the base classes while evaluation performance on novel classes largely dropped.
The novel class generalization which is defined as evaluation performance on novel classes following Chen et al. (2020) is essential for improving few-shot learning into practice. Training of algorithms on few-shot learning are conducted with base classes which are relatively large-scale in the sense
of plenty number of classes with hundreds of images. Methods in metric-based learning Chen et al. (2019); Gidaris & Komodakis (2019); Wang et al. (2018); Gidaris & Komodakis (2018) and metalearning Chen et al. (2020) prove that training in this way benefits the capture of large variations which is crucial for discriminative features. However, as the feature extractor on base classes is trained under maximum likelihood, features are also trained to be invariant for discriminating these base classes, as shown in Fig.2. Then the evaluation on novel classes would suffer from the feature distribution difference between base and novel classes, and cross-domain between base and novel classes could enlarge this feature distribution difference. Objects(or images) in different domains carry different aspects of information which leads to different discriminative features or features in common among categories.
Attempts of improving novel class generalization include finetuning method proposed in Chen et al. (2019). In Chen et al. (2019), they proposed to finetune the novel class weights using the support set of novel classes with competitive results. However if feature distribution of novel classes suffers from scattering, even with a plenty of data finetuning the novel class weights without any optimization on the feature side is not promising for finding a good decision boundary, not to mention with only a few samples.
In our work, we propose the adaptive feature distribution to improve the novel class generalization. Following the idea of finetuning using a handful of samples, we apply a non-parametric distance first to construct the hypothesis class and then by only finetuning a scale vector which applied on the normalized feature distribution, we achieve the effects of adaptive feature distribution on novel classes.
Our Contributions: 1) We address the importance of further understanding the feature distribution for novel classes. Using DB-index which measures the quality of feature distributions for novel classes to select feature extractors, we observe a consistent performance boost on all three evaluation datasets. We believe introducing analysis on feature distributions and clustering quality of novel classes is informative to the community. 2) We propose to improve novel class generalization through adapting the feature distribution of novel classes. And by only finetuning one scale vector using support sets of novel classes, we showcase the supreme generalization of this method especially on cross-domain evaluations. We achieve the state-of-the-art performance on mini-Imagenet, tiered-Imagenet as well as cross-domain evaluation on CUB. 3) This approach is simple yet effective. By just finetuning a single scale vector we provide a solution of reducing number of parameters while still obtain generalization ability for few-shot learning.
2 PRIOR ART
There have been many approaches to few-shot learning explored recently, namely are fast-adaptation methods Finn et al. (2017); Rusu et al. (2018); Sun et al. (2019); Chen et al. (2020), model optimization based methods Ravi & Larochelle (2016), metric learning based methods Vinyals et al. (2016); Snell et al. (2017); Ren et al. (2018); Sung et al. (2018); Guo & Cheung (2020); Li et al. (2020) and methods which use ridge regression and support vector machine Bertinetto et al. (2018); Lee et al.
(2019). There have also been studies focusing on discovering projective feature embeddings Simon et al. (2018; 2020). Recently, a few studies utilized a variety of techniques in machine learning towards few shot classification. Techniques like self-supervised training, self-training through semisupervised learning and model ensembles showed a boost result when applied on few-shot learning problem Gidaris et al. (2019); Dvornik et al. (2019); Li et al. (2019b). Modules were also invented to enhance feature discrimination Li et al. (2019a); Hou et al. (2019). Recently approaches have also explored combination with Graph Neural Networks Garcia & Bruna (2017); Kim et al. (2019).
3 ADAPTIVE FEATURE DISTRIBUTION: FINE-TUNING THE SCALE OF FEATURE DISTRIBUTION ON NOVEL CLASSES
In this section, we introduce how we realize the adaptive feature distribution with a learnable scale vector and the effects on novel class feature space by only finetuning the scale vector with a few samples.
The Few-Shot Problem Formulation. Evaluation datasets in few-shot learning are separated into base, validation and test classes. Base classes which is used in training involves a relatively large number of labelled training samples. And validation classes or test classes are treated as novel classes, which correspondingly used for validation and testing purpose. For few-shot learning scenarios, one episode is defined as K-way N -shot learning where K is the number of classes, N is the number of training images(support set) and K classes are firstly sampled from the novel classes; N samples in the support set as well as the query set(samples used for evaluating the episode performance) are sampled within each K classes. For one K-way N -shot episode, we use Sk and Qk to denote the support and query set accordingly for k ∈ K novel classes. We use a pre-trained feature extractor Fθ to subtract features. We use fi = Fθ(xi) to represent the feature for xi. We first add a feature normalization layer with a scale vector s:
f̄i = fi − µ σ ∗ s (1)
Where: µ = 1N ∑ i fi and σ = 1 N ∑ i(fi − µ)2.
In this layer, features from all training samples are first normalized in a way that values of every element on the feature vectors are regularized by following the normal distribution. A scale vector s is then multiplied with the normalized feature. s serves as the ”adaptive” part that by tuning the value of s, we are scaling the normalized feature distribution. s is flexible in the sense that every element on s scales up or down on every element of features and this in general leads to the reshape of feature space manifold. Then by fine-tuning s with classification loss on novel classes, we expect to the reshape of feature space manifold could fast adapt the features for novel classes especially on cross-domain cases.
In the fine-tuning stage, we first construct our evaluation metrics in an non-parametric way. We use average feature of the support set Sk as the class weight wk with a softmax loss:
Lf = − 1
N N∑ i=1 logPyi = − 1 N N∑ i=1 log exp zyi∑K k=1 exp zk
(2)
Where zj = w T j · f̄i = f̄Ti · 1
N ∑ x∈Sj f̄ (3)
By using this non-parametric metrics, we decrease the number of parameters to be trained in the fine-tuning stage while still follows the maximum likelihood estimation to predict the probability p(y|x). And this allows flexibility of fine-tuning the feature space with adaptive feature distribution. We analyze the gradient flow in the fine-tuning stage in the following.
The derivative ofzj with f̄i is: ∂zj ∂ f̄i = wj (4)
For an input xi, the derivative of zj with Lf is:
∂Lf ∂zj = { Pj − 1 j = yi Pj j 6= yi
(5)
∂Lf ∂ f̄i = (Pyi − 1)wyi + K∑ j 6=yi Pjwj (6)
Meanwhile as s is element-wisely multiplied with fi, the gradient at location c for s is(we omit the notation of location c to simplify the notation):
∂f̄i ∂s = fi − µ σ
(7)
Then we have the gradient for s at any location on s with sample xi as:
∇s = fi − µ σ [(Pyi − 1)wyi + K∑ j 6=yi Pjwj ] (8)
For fine-tuning only using K-way N-shot samples, Pyi ' 1(for 1-shot case, Pyi = 1) the gradient during training can be approximated as:
∇s = fi − µ σ K∑ j 6=yi Pjwj = fi − µ σ K∑ j 6=yi [Pj ∑ x∈Sj fx] (9)
To simplify the notation, we use the gradient for 1-shot case to conduct further discussion, which is:
∇s = fi − µ σ K∑ j 6=yi Pj fj − µ σ
(10)
By conducting the gradient descent, we have s = s−∇s. To give a direct impression of how this fine-tuning changes the feature space manifold, we illustrate the change on s brought by gradient descent intuitively. First of all, the normalization on f ensures that the value is ”soft bounded”, which will not cause the extreme values on the gradient. For some locations where elements are encoded ”common” information, values of fi and fj are similar. And in the opposite way, elements in other locations are encoded ”discriminative” information where values of fi and fj are largely different. In this case,∇s could be relatively large or negative which leads to scaling up the feature distribution at those locations. Then the difference between features are further enlarged correspondingly. In this case, the manifold of the feature space will fast adapt to the shape where distinguished parts are enlarged.
4 OVERALL FRAMEWORK
In this section, we introduce the overall framework that we conduct the few-shot classification problem.
4.1 TRAINING CLASSIFICATION ON BASE CLASSES
The model Fθ(x) that is trained on the base classes Kbase. To obtain a better feature invariance, we use the l2-normalized Softmax Chen et al. (2019); Ranjan et al. (2017); Wang et al. (2017); Qi et al. (2018) with cross entropy loss, which utilize softmax under the constraint of ‖wyi‖ 2 2 = 1 and ‖Fθ(xi)‖22 = 1:
LSM = − 1
Nbase Nbase∑ i=1 log expS cos(wTyi , Fθ(xi))∑Kbase k=1 expS cos(w T k , Fθ(xi))
(11)
4.2 EVALUATION ON NOVEL CLASSES
Given an K-way N -shot episode of few-shot classification, for each class k ∈ K we have a support set Sk = (x1, y1), ·, (xN , yN ) and a query set Qk = (x1, y1), ·, (xM , yM ). With the pretrained feature extractor Fθ(x), we follow the same metric of cosine distance in equation 11 when evaluating on novel classes; and the novel class weight wk is the average feature of the support set Sk Qi et al. (2018); Chen et al. (2020):
wk = 1
N ∑ x∈Sk Fθ(x) (12)
The predicted probability that x ∈ Q belongs to class k is:
p(y = k|x) = exp cos(w T k , Fθ(x))∑K
j=1 exp cos(w T j , Fθ(x))
(13)
4.3 FINE-TUNING SCALE VECTOR ON NORMALIZED FEATURE DISTRIBUTION.
For the fine-tuning part, we conduct experiments with data augmentation and without data augmentation separately. With data augmentation, when we construct our the non-parametric evaluation metrics in equation. 2 the average feature used for novel class weight are generated from samples without data augmentation while features as input to the evaluation metrics are from samples after data augmentation. By doing this, we ensures the minimum change of the novel class prototype(class weight) and the maximum of sample variations around the class prototype. The fine-tuning without data augmentation follows the methodology in Section 2.
4.4 MODEL SELECTION FOR NOVEL CLASSES.
After we train the classification on base classes, we come to the model selection of using which model as the feature extractor for novel classes. The feature extractor with the best classification accuracy or from the later epochs may not be a good choice. To obtain a high classification accuracy, features trained by supervised classification at the later stage of training may suffer the ”overfitting” to the seen classes. In other words, features would be projected precisely to directions of class weight vectors in order to get a high classification accuracy. By using these models as the feature extractor for novel classes, features of the novel classes could be separately projected onto the directions of the base classes which enlarge the scattering of that feature distribution indeed. Using the fewshot performance on validation set could be one choice, however as we are approaching the adaptive feature distribution, we consider the model selection from the perspective of measuring the quality of feature distribution. We use DB-index Davies & Bouldin (1979) as the criterion for model selection, which evaluates the clustering quality by considering the separation of the clusters and the tightness inside the clusters. And interestingly, we found that models with lower DB-index are generally models around the epoch after the first time of decreasing the learning rate. In our experiments, models with lower DB-index on validation set are selected.
5 EXPERIMENTAL VALIDATION
We evaluate the our adaptive feature distribution method in both in-domain case and cross-domain case. In-domain case is defined as the base and novel classes are from the same datsets and crossdomain case refers to the situation that base and novel classes are from different datasets and generally the datasets have domain difference.
5.1 EVALUATION DATASETS AND IMPLEMENTATION DETAILS
5.1.1 EVALUATION DATASETS
Dataset 1: mini-ImageNet Vinyals et al. (2016) is a standard benchmark for few-shot image classification benchmark, which consists of 100 randomly chosen classes from ILSVRC-2012 Russakovsky et al. (2015). And these classes are randomly split into 64, 16 and 20 classes for metatraining, meta-validation and meta-test set respectively. Each class contains 600 images of size 84× 84. We use the common split used in Lee et al. (2019). Dataset 2: tiered-ImageNet Ren et al. (2018) is a larger subset of ILSVRC-2012 Russakovsky et al. (2015), composed of 608 classes which are split into meta-training, meta-validation and meta-testing set with 351, 97 and 160 classes respectively. All images are of the size 84× 84. Dataset 3: CUB-200-2011 Wah et al. (2011) contains 200 classes and 11,788 images in total. Following the evaluation protocol of Hilliard et al. (2018), the dataset is split into 100 base, 50 validation and 50 novel classes. We use the same splits as Chen et al. (2019) for testing. This dataset serves as the test set for the cross-domain evaluation.
5.1.2 ARCHITECTURE AND TRAINING DETAILS
Baseline Network Architecture. We utilize the ResNet-12 network architecture following Lee et al. (2019) to train the baseline and backbone classification model. However in contrast to Lee et al. (2019), we use a global average pooling after the last residual block following which the feature length becomes 640 and the feature layer is followed by a 1-d batchnorm layer without affine.
Training hyperparameters. All networks were trained with SGD with Nesterov momentum of 0.9 and weight decay of 5 × 10−4. The initial learning rate is set to 0.1 and was decreased by a factor of 10 every 50 epochs, for a total of 150 epochs. The batch size was kept at 256. Data augmentation was applied for baseline classification following Lee et al. (2019), which included horizontal flip, random crop, and color (brightness, contrast, and saturation) jitter.
Fine-Tuning on the Novel Class Support Set. We fine-tune on the novel-class support set using Adam with learning rate 5 × 10−3, back-propagating the gradient from the whole batch. Early stopping is crucial here: we fine-tune for 3 epochs for 1-shot and 5 epochs for 5-shot in the cross-domain cases, and 3 epochs for both 1-shot and 5-shot in the in-domain cases. The scale vector is initialized as 1.
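A minimal sketch of this fine-tuning loop (our own illustration, not the authors' code; the cosine-with-scale metric and the choice to leave the prototypes unscaled are assumptions, and all names are placeholders):

```python
import torch
import torch.nn.functional as F

def finetune_scale_vector(feats, labels, n_way, lr=5e-3, epochs=3, temperature=10.0):
    # feats: (N, D) frozen support-set features; labels: (N,) class indices in [0, n_way).
    protos = torch.stack([feats[labels == c].mean(0) for c in range(n_way)])  # novel class weights
    s = torch.ones(feats.shape[1], requires_grad=True)  # the scale vector, initialized as 1
    opt = torch.optim.Adam([s], lr=lr)
    for _ in range(epochs):  # early stopping after a few epochs is crucial
        logits = temperature * F.normalize(feats * s, dim=-1) @ F.normalize(protos, dim=-1).t()
        loss = F.cross_entropy(logits, labels)  # gradient from the whole support batch
        opt.zero_grad(); loss.backward(); opt.step()
    return s.detach(), protos
```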
In our experiments, Baseline refers to the pre-trained feature extractor from the last training epoch; Baseline* refers to the pre-trained feature extractor selected using the DB-index clustering criterion described above. We use Baseline* as the feature extractor for all our fine-tuning experiments.
5.2 COMPARING PERFORMANCE ON IN-DOMAIN AND CROSS-DOMAIN CASES
Performance of model selection is consistent. We observe that using the DB-index to select feature extractors gives a consistent performance improvement across all three evaluation datasets. This is a good sign for studying feature transferability from the perspective of the feature distribution.
AFD Shows Improvement on In-Domain Evaluations. As shown in Table 1, AFD improves the 5-shot performance by 0.22% and 0.2% for mini-ImageNet and tiered-ImageNet respectively. Note that the performance of our Baseline* already surpasses that of most prior works; AFD still yields an improvement even on top of such a strong feature extractor.
AFD shows superior generalization in cross-domain evaluations. As shown in Table 2, by simply fine-tuning with 5 images in the 1-shot case and 25 images in the 5-shot case, we observe 1.73% and 1.08% performance improvements, measured over 2000 episodes.
5.3 ABLATION STUDIES ON FINE-TUNING
The results of the ablation studies are shown in Table 3.
Effects of Applying Data Augmentation during Fine-tuning: The major obstacle in few-shot learning is the lack of samples, which are essential for improving novel-class generalization. Although we only fine-tune for 3 epochs, the effect of data augmentation is still obvious. Only in the 1-shot case with the mini-ImageNet-trained feature extractor is the performance worse than without data augmentation. A likely reason is that this feature extractor is trained on relatively little data, so the extracted features are less stable to optimize, which becomes severe when data augmentation is added with only 5 training samples. Otherwise, we observe a performance improvement of 0.47% for 5-shot with the mini-ImageNet-trained feature extractor, and 0.18% and 0.65% for 1-shot and 5-shot with the tiered-ImageNet-trained feature extractor. As we only use a basic data augmentation strategy, further work exploring effective data augmentation for fine-tuning on novel classes is promising.
Effects of Different Feature Extractors: First, using a feature extractor trained on a larger dataset boosts the cross-domain performance considerably, which indicates the importance of a good feature embedding. For AFD, the 1-shot performance improvements are 1.73% and 1.08% for the mini-ImageNet and tiered-ImageNet models, and the 5-shot improvements are 0.51% and 0.27% respectively. This illustrates that AFD is able to quickly adapt features, especially when the quality of the feature embedding is poor. However, a better feature extractor allows a larger improvement from using data augmentation in AFD, as discussed above.
Effects of Different Metrics: We compare the results of using different metrics (dot-product and cosine with a scale Wang et al. (2017)) in our non-parametric evaluation for fine-tuning. The performance is almost the same. Different metrics mainly affect how well we can raise the predicted probability, and in our case, as illustrated in Section 2, the predicted probability is already around 1. Hence different metrics have a similar effect when adapting features for novel classes.
The Importance of Fine-tuning Features: We compare AFD with a method that only fine-tunes the novel class weights. For fine-tuning the novel class weights, we use the average support-set features as the weight initialization, with the same hyper-parameter settings as above. We observe a performance drop when only fine-tuning the novel class weights: 0.66% and 0.92% for 1-shot and 5-shot with the mini-ImageNet-trained feature extractor, and 0.42% and 0.07% for 1-shot and 5-shot with the tiered-ImageNet-trained feature extractor. In cross-domain cases, the novel-class features are neither well discriminative nor tightly clustered within each class. Since the features themselves are not optimized, fine-tuning only the class weights, which relate linearly to the features, actually hurts performance. This illustrates the importance of adapting the features of novel classes. With AFD, we obtain consistent and substantial performance improvements: 1.73% and 0.51% for 1-shot and 5-shot with the mini-ImageNet-trained feature extractor, and 1.08% and 0.27% for 1-shot and 5-shot with the tiered-ImageNet-trained feature extractor. This showcases the effectiveness of AFD in cross-domain cases, despite its simplicity.
6 CONCLUSION
We propose fine-tuning on an adaptive feature distribution to improve novel-class generalization for few-shot learning. The performance improvements on both in-domain and cross-domain evaluations showcase the superior generalization brought by this simple yet effective method. With the proposed AFD method, we also highlight the importance of further understanding and analyzing the feature distribution of novel classes. | 1. What is the main contribution of the paper regarding few-shot classification?
2. How does the proposed method, AFD, improve the baseline performance?
3. Why did the authors choose to multiply the query embeddings element-wise by a vector s instead of fine-tuning the entire network?
4. How does the DB-Index metric differ from validation loss for model selection?
5. Can you explain the intuition behind adapting the feature distribution of novel classes to better align with the distribution learned by the feature extractor?
6. How do the experimental results of AFD compare to other recent works in few-shot learning, such as those mentioned in the review (Hou et al., Ye et al., Tseng et al., Liu et al., and Dvornik et al.)?
7. What are some limitations or potential drawbacks of the AFD approach?
8. Are there any plans to extend the comparison with previous state-of-the-art methods or provide more insight into why the method works well?
9. Could the authors clarify what distribution they refer to in the third paragraph of the introduction?
10. How does the element-wise multiplication by s differ from other embedding adaptation methods, such as cross attention networks (Hou et al.) or feature-wise transformation (Tseng et al.)? | Review | Review
Summary
The authors propose a method (AFD) to alleviate the generalization gap between base and novel classes in few-shot classification. The method consists in adapting the query embeddings to the novel classes by multiplying them element-wise by a vector (named s). This vector s is found episode-wise by fine-tuning on the support set. The intuition (as shown in Figure 2) is that this vector will adapt the feature distribution of the novel classes to better align with the distribution learned by the feature extractor. As an additional contribution, the authors propose to use a cluster quality metric (DB-Index) instead of validation loss for model selection. The authors report better results than previous works they compare with on mini-imagenet, tiered-imagenet, and cross-domain evaluation on CUB.
Overall Review
The main strength of this work is that the proposed approach is simple yet improves the baseline performance of the models. However, the problem of feature adaptation for few-shot learning has just been addressed by multiple works [1,2,3,4,5], obtaining better performances than this submission, and more complete experimental set-ups (meta-dataset, transfer from multiple domains, etc.). Moreover, these works have not been referenced nor compared with AFD. This fact, and the other issues exposed in "Weaknesses" below, clearly incline the balance towards rejection.
[1] Hou, Ruibing, et al. "Cross attention network for few-shot classification." NeurIPS. 2019.
[2] Ye, Han-Jia, et al. "Few-shot learning via embedding adaptation with set-to-set functions." CVPR. 2020.
[3] Tseng, Hung-Yu, et al. "Cross-domain few-shot classification via learned feature-wise transformation." ICLR. 2020.
[4] Liu, Jinlu, Liang Song, and Yongqiang Qin. "Prototype Rectification for Few-Shot Learning." ECCV. 2020.
[5] Dvornik, Nikita, Cordelia Schmid, and Julien Mairal. "Selecting Relevant Features from a Multi-domain Representation for Few-shot Classification." ECCV. 2020.
Strengths
The proposed method is simple yet improves the baseline performance.
Weaknesses
The authors have missed comparing with highly relevant works in the literature [1,2,3,4,5].
The clarity could be improved:
2.1. (Abstract and Section 1, last paragraph.): "we provide a solution of reducing number of parameters". What parameters are being reduced? You mean that since s is smaller than the network, the amount of parameters to fine-tune is smaller? In that case, methods that do not fine-tune on the novel classes use even less?
2.2. The message of "finetune one scale vector" is repeated three times in the abstract, becoming a bit redundant.
2.3. l is not defined in the second paragraph of the introduction.
2.4. (third paragraph, Intro.) "estimation of the distribution is hard to achieve". Could the authors clarify what distribution they refer to?
2.5. (Last paragraph of the Introduction) "all three evaluation datasets". The datasets have not been introduced yet except in the abstract.
2.6. x does not appear in equation (5) but it is referenced just above.
2.7. The main contribution is the element-wise multiplication by s. However, the submission includes 13 equations, which could mislead the reader into thinking that the method is overly complex. I would suggest the authors simplify the text, reducing the number of equations, and use that space to either provide more insight on why their method works well, or to extend the comparison with previous state of the art.
2.8. Section 4.3. "features as input to the evaluation metrics". Do you refer to query features?
2.9. Introduction. "h is expected to be highly invariant and this property empowers .... lights and etc.". I understand you mean that invariance to noise is good, but I am not sure why you say it in this context. Could you please rephrase and possibly split in two sentences?
Section 5.1. "Baseline refers to the pre-trained feature extractor of the last epoch for training". You could select the feature extractor from the validation loss instead of the last epoch. Why do you compare last epoch vs cluster index?
Table 3. What is the performance of AFD with cosine? what is the performance of AFD without data-aug? There are two 50.99, is it right?
There are many typos that hamper readability. For instance:
5.1. Introduction:
5.1.1. "label generation set limits" -> remove set
5.1.2. Citations in general. When the authors of the referenced paper are not part of the sentence, you should use \citep{} which puts them between parenthesis. For instance: 'Moreover, Chen et al. proposed blabla'. (\citet). 'Moreover, metric-learning approaches (Chen et al.) propose...' (\citep).
5.1.3. "to conducts" -> "to conduct"
5.1.4. "the adaptive feature distribution" -> "to adapt a feature ..."
5.2. Section 3:
5.2.1. "which correspondingly used" -> "which are...."
5.2.2. "to subtract features" -> "to extract features"
5.2.3. "by using this non-parametric metrics" -> "... metric"
5.2.4. "when we construct our the non-parametric" -> "...our non-parametric"
5.3. Section 5.3:
5.3.1. "which is serve" -> ?? |
ICLR | Title
Accelerating Guided Diffusion Sampling with Splitting Numerical Methods
Abstract
Guided diffusion is a technique for conditioning the output of a diffusion model at sampling time without retraining the network for each specific task. However, one drawback of diffusion models, whether they are guided or unguided, is their slow sampling process. Recent techniques can accelerate unguided sampling by applying high-order numerical methods to the sampling process when viewed as differential equations. On the contrary, we discover that the same techniques do not work for guided sampling, and little has been explored about its acceleration. This paper explores the culprit of this problem and provides a solution based on operator splitting methods, motivated by our key finding that classical high-order numerical methods are unsuitable for the conditional function. Our proposed method can re-utilize the high-order methods for guided sampling and can generate images with the same quality as a 250-step DDIM baseline using 32-58% less sampling time on ImageNet256. We also demonstrate usage on a wide variety of conditional generation tasks, such as text-to-image generation, colorization, inpainting, and super-resolution.
1 INTRODUCTION
A family of generative models known as diffusion models has recently gained a lot of attention with state-of-the-art image generation quality (Dhariwal & Nichol, 2021). Guided diffusion is an approach for controlling the output of a trained diffusion model for conditional generation tasks without retraining its network. By engineering a task-specific conditional function and modifying only the sampling procedure, guided diffusion models can be used in a variety of applications, such as class-conditional image generation (Dhariwal & Nichol, 2021; Kawar et al., 2022), text-to-image generation (Nichol et al., 2022), image-to-image translation (Zhao et al., 2022), inpainting (Chung et al., 2022a), colorization (Song et al., 2020b), image composition (Sasaki et al., 2021), adversarial purification (Wang et al., 2022; Wu et al., 2022) and super-resolution (Choi et al., 2021).
One common drawback of both guided and regular “unguided” diffusion models is their slow sampling processes, usually requiring hundreds of iterations to produce a single image. Recent speedup attempts include improving the noise schedule (Nichol & Dhariwal, 2021; Watson et al., 2021), redefining the diffusion process to be non-Markovian, thereby allowing a deterministic sampling process (Song et al., 2020a), and network distillation that teaches a student model to simulate multiple sampling steps of a teacher model (Salimans & Ho, 2022; Luhman & Luhman, 2021), among others. Song et al. (2020a) show how each sampling step can be expressed as a first-order numerical step of an ordinary differential equation (ODE). Similarly, Song et al. (2020b) express the sampling of a score-based model as solving a stochastic differential equation (SDE). By regarding the sampling process as an ODE/SDE, many high-order numerical methods have been suggested, such as Liu et al. (2022), Zhang & Chen (2022), and Zhang et al. (2022), with impressive results on unguided diffusion models. However, when applied to guided diffusion models, these methods produce surprisingly poor results (see Figure 1): given a small number of steps, those high-order numerical methods actually perform worse than low-order methods.
Guided sampling differs from the unguided one by the addition of the gradients of the conditional function to its sampling equation. The observed performance decline thus suggests that classical high-order methods may not be suitable for the conditional function and, consequently, the guided
sampling equation as a whole. Our paper tests this hypothesis and presents an approach to accelerating guided diffusion sampling. The key idea is to use an operator splitting method to split the less well-behaved conditional function term from the standard diffusion term and solve them separately. This approach not only allows re-utilizing the successful high-order methods on the diffusion term but also provides us with options to combine different specialized methods for each term to maximize performance. Note that splitting methods have also been explored by Dockhorn et al. (2022) to solve unguided diffusion SDEs, but our work focuses on accelerating guided diffusion ODEs.
Our design process includes comparing different splitting methods and numerical methods for each split term. When tested on ImageNet, our approach achieves the same level of image quality as a DDIM baseline while reducing the sampling time by approximately 32-58%. Compared with other sampling methods using the same sampling time, our approach provides better image quality as measured by LPIPS, FID, and Perception/Recall. With only minimal modifications to the sampling equation, we also show successful acceleration on various conditional generation tasks.
2 BACKGROUND
This section provides a high-level summary of the theoretical foundation of diffusion models as well as numerical methods that have been used for diffusion models. Here we briefly explain a few that contribute to our method.
2.1 DIFFUSION MODELS
Assuming that x0 is a random variable from the data distribution we wish to reproduce, diffusion models define a sequence of Gaussian noise degradation of x0 as random variables x1, x2, ..., xT , where xt ∼ N ( √ 1− βtxt−1, βtI) and βt ∈ [0, 1] are parameters that control the noise levels. With a property of Gaussian distribution, we can express xt directly as a function of x0 and noise ϵ ∼ N (0, I) by xt = √ ᾱtx0+ √ 1− ᾱtϵ, where ᾱt = ∏t i=1(1−βi). By picking a sufficiently large T (e.g., 1,000) and an appropriate set of βt, we can assume xT is a standard Gaussian distribution. The main idea of diffusion model generation is to sample a Gaussian noise xT and use it to reversely sample xT−1, xT−2, ... until we obtain x0, which belongs to our data distribution.
Ho et al. (2020) propose Denoising Diffusion Probabilistic Model (DDPM) and explain how to employ a neural network ϵθ(xt, t) to predict the noise ϵ that is used to compute xt. To train the network, we sample a training image x0, t, and ϵ to compute xt using the above relationship. Then, we optimize our network ϵθ to minimize the difference between the predicted and real noise, i.e., ∥ϵ− ϵθ(xt, t)∥2.
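For concreteness, one training iteration of this objective can be sketched as follows (our own illustration, not the authors' code; `eps_model(x_t, t)` stands for ϵθ(xt, t) and `alpha_bar` is the precomputed sequence ᾱ1, ..., ᾱT):

```python
import torch
import torch.nn.functional as F

def ddpm_training_step(eps_model, x0, alpha_bar, optimizer):
    # Sample a timestep and noise, form x_t, and regress the noise with an MSE loss.
    b = x0.shape[0]
    t = torch.randint(0, len(alpha_bar), (b,), device=x0.device)
    abar_t = alpha_bar[t].view(b, 1, 1, 1)
    eps = torch.randn_like(x0)
    x_t = abar_t.sqrt() * x0 + (1.0 - abar_t).sqrt() * eps
    loss = F.mse_loss(eps_model(x_t, t), eps)
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss.item()
```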
Song et al. (2020a) introduce Denoising Diffusion Implicit Model (DDIM), which uses the network ϵθ to deterministically obtain xt−1 given xt. The DDIM generative process can be written as
xt−1 = √(ᾱt−1/ᾱt) (xt − √(1 − ᾱt) ϵθ(xt, t)) + √(1 − ᾱt−1) ϵθ(xt, t). (1)
This formulation could be used to skip many sampling steps and boost sampling speed. To turn this into an ODE, we rewrite Equation 1 as:
xt−∆t/√ᾱt−∆t = xt/√ᾱt + (√((1 − ᾱt−∆t)/ᾱt−∆t) − √((1 − ᾱt)/ᾱt)) ϵθ(xt, t), (2)
which is now equivalent to a numerical step in solving an ODE. To derive the corresponding ODE, we can re-parameterize σt = √ 1− ᾱt/ √ ᾱt, x̄(t) = xt/ √ ᾱt and ϵ̄σ(x̄) = ϵθ(xt, t), yielding x̄(t−∆t)− x̄(t) = (σt−∆t − σt)ϵ̄σ(x̄). By letting (σt−∆t − σt) → 0, the ODE becomes:
dx̄/dσ = ϵ̄σ(x̄). (3)
Note that this change of variables is equivalent to an exponential integrator technique described in both Zhang & Chen (2022) and Lu et al. (2022). Since xt and x̄(t) have the same value at t = 0, our work can focus on solving x̄(t) rather than xt. Many numerical methods can be applied to the ODE Equation 3 to accelerate diffusion sampling. We next discuss some of them that are relevant.
2.2 NUMERICAL METHODS
Euler’s Method is the most basic numerical method. A forward Euler step is given by x̄n+1 = x̄n + ∆σϵ̄σ(x̄n). When the forward Euler step is applied to the ODE Equation 3, we obtain the DDIM formulation (Song et al., 2020a).
Heun’s Method, also known as the trapezoid rule or improved Euler, is given by: x̄n+1 = x̄n + (∆σ/2)(e1 + e2), where e1 = ϵ̄σ(x̄n) and e2 = ϵ̄σ(x̄n + ∆σ e1). This method splits Euler’s method into two steps to improve accuracy. Many papers have used this method on diffusion models, including Algorithm 1 in Karras et al. (2022) and DPM-Solver-2 in Lu et al. (2022). This method is also the simplest case of Predictor-Corrector methods used in Song et al. (2020b).
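To make the update rules concrete, here is a small sketch (ours, not a published implementation) of the Euler/DDIM and Heun steps in the σ-parameterization, where `eps_bar(x, sigma)` stands for ϵ̄σ(x̄); evaluating the second Heun slope at σn+1 is a standard choice we assume here:

```python
def euler_step(x, sigma, sigma_next, eps_bar):
    # One DDIM/Euler step of dx̄/dσ = ϵ̄σ(x̄); σ decreases during sampling, so Δσ < 0.
    d_sigma = sigma_next - sigma
    return x + d_sigma * eps_bar(x, sigma)

def heun_step(x, sigma, sigma_next, eps_bar):
    # Heun (improved Euler): average the slopes at the current and predicted points.
    d_sigma = sigma_next - sigma
    e1 = eps_bar(x, sigma)
    e2 = eps_bar(x + d_sigma * e1, sigma_next)
    return x + 0.5 * d_sigma * (e1 + e2)
```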
Runge-Kutta Methods represent a class of numerical methods that integrate information from multiple hidden steps and provide high accuracy results. Heun’s method also belongs to a family of 2nd-order Runge-Kutta methods (RK2). The most well-known variant is the 4th-order Runge-Kutta method (RK4), which is written as follows:
e1 = ϵ̄σ(x̄n), e2 = ϵ̄σ(x̄n + (∆σ/2) e1), e3 = ϵ̄σ(x̄n + (∆σ/2) e2), e4 = ϵ̄σ(x̄n + ∆σ e3),
x̄n+1 = x̄n + (∆σ/6)(e1 + 2e2 + 2e3 + e4). (4)
This method has been tested on diffusion models in Liu et al. (2022) and Salimans & Ho (2022), but it has not been used as the main proposed method in any paper.
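A direct transcription of Equation 4 (our own sketch; `eps_bar` is assumed to accept intermediate σ values):

```python
def rk4_step(x, sigma, sigma_next, eps_bar):
    # Classical 4th-order Runge-Kutta step for dx̄/dσ = ϵ̄σ(x̄); costs four network evaluations.
    d_sigma = sigma_next - sigma
    sigma_mid = sigma + 0.5 * d_sigma
    e1 = eps_bar(x, sigma)
    e2 = eps_bar(x + 0.5 * d_sigma * e1, sigma_mid)
    e3 = eps_bar(x + 0.5 * d_sigma * e2, sigma_mid)
    e4 = eps_bar(x + d_sigma * e3, sigma_next)
    return x + d_sigma / 6.0 * (e1 + 2 * e2 + 2 * e3 + e4)
```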
Linear Multi-Step Method, similar to the Runge-Kutta methods, aims to combine information from several steps. However, rather than evaluating new hidden steps, this method uses the previous steps to estimate the new step. The 1st-order formulation is the same as Euler’s method. The 2nd-order formulation is given by
x̄n+1 = x̄n + (∆σ/2)(3e0 − e1), (5)
while the 4th-order formulation is given by
x̄n+1 = x̄n + (∆σ/24)(55e0 − 59e1 + 37e2 − 9e3), (6)
where ek = ϵ̄σ(x̄n−k). These formulations are designed for a constant ∆σ in each step. However, our experiments and previous work that uses this method (e.g., Liu et al. (2022); Zhang & Chen
(2022)) still show good results when this assumption is not strictly satisfied, i.e., when ∆σ is not constant. We will refer to these formulations as PLMS (Pseudo Linear Multi-Step) for the rest of the paper, like in Liu et al. (2022). A similar linear multi-step method for non-constant ∆σ can also be derived using a technique used in Zhang & Chen (2022), which we detail in Appendix B. This non-constant version can improve upon PLMS slightly, but it is not as flexible because we have to re-derive the update rule every time the σ schedule changes.
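A sketch of the pseudo linear multi-step update (ours; the handling of the warm-up steps before the slope history is filled is an assumption, falling back to the lower-order formulas):

```python
def plms_step(x, sigma, sigma_next, eps_bar, history):
    # history holds the most recent slopes [e1, e2, e3, ...] from earlier steps (newest first).
    d_sigma = sigma_next - sigma
    e0 = eps_bar(x, sigma)
    hist = [e0] + history
    if len(hist) >= 4:    # PLMS4 (Equation 6)
        x_next = x + d_sigma / 24.0 * (55 * hist[0] - 59 * hist[1] + 37 * hist[2] - 9 * hist[3])
    elif len(hist) >= 2:  # PLMS2 (Equation 5)
        x_next = x + d_sigma / 2.0 * (3 * hist[0] - hist[1])
    else:                 # PLMS1 = Euler
        x_next = x + d_sigma * e0
    return x_next, hist[:4]
```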
3 SPLITTING METHODS FOR GUIDED DIFFUSION MODELS
This section introduces our technique that uses splitting numerical methods to accelerate guided diffusion sampling. We first focus our investigation on classifier-guided diffusion models for classconditional generation and later demonstrate how this technique can be used for other conditional generation tasks in Section 4.3. Like any guided diffusion models, classifier-guided models (Dhariwal & Nichol, 2021) share the same training objective with regular unguided models with no modifications to the training procedure; but the sampling process is guided by an additional gradient signal from an external classifier to generate class-specific output images. Specifically, the sampling process is given by
ϵ̂ = ϵθ(xt) − √(1 − ᾱt) ∇x log pϕ(c|xt),
xt−1 = √ᾱt−1 ((xt − √(1 − ᾱt) ϵ̂)/√ᾱt) + √(1 − ᾱt−1) ϵ̂, (7)
where pϕ(c|xt) is a classifier model trained to output the probability of xt belonging to class c. As discussed in the previous section, we can rewrite this formulation as a “guided ODE”:
dx̄/dσ = ϵ̄σ(x̄) − ∇fσ(x̄), (8)
where fσ(x̄) = (σ/√(σ² + 1)) log pϕ(c|xt). We refer to fσ as the conditional function, which can be substituted with other functions for different tasks. After obtaining the ODE form, any numerical solver mentioned earlier can be readily applied to accelerate the sampling process. However, we observe that classical high-order numerical methods (e.g., PLMS4, RK4) fail to accelerate this task (see Figure 1) and even perform worse than the baseline DDIM.
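In practice the conditional term is obtained by back-propagating the classifier log-probability through the input; a sketch (ours, assuming the form of fσ written above, a σ-to-timestep mapping t, and a classifier with signature classifier(x_t, t)) is:

```python
import torch

def cond_grad(x_bar, sigma, t, labels, classifier):
    # Returns ∇fσ(x̄) for fσ(x̄) = (σ/√(σ²+1)) log pϕ(c|xt), with xt = x̄/√(1+σ²).
    x_bar = x_bar.detach().requires_grad_(True)
    x_t = x_bar / (1.0 + sigma ** 2) ** 0.5
    log_probs = torch.log_softmax(classifier(x_t, t), dim=-1)
    selected = log_probs[torch.arange(len(labels)), labels].sum()
    scale = sigma / (sigma ** 2 + 1) ** 0.5
    return torch.autograd.grad(scale * selected, x_bar)[0]
```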
We hypothesize that the two terms in the guided ODE may have different numerical behaviors with the conditional term being less suitable to classical high-order methods. We speculate that the difference could be partly attributed to how they are computed: ∇fσ(x̄) is computed through backpropagation, whereas ϵ̄σ(x̄) is computed directly by evaluating a network. One possible solution to handle terms with different behaviors is the so-called operator splitting method, which divides the problem into two subproblems:
dy/dσ = ϵ̄σ(y), dz/dσ = −∇fσ(z). (9)
We call these the diffusion and condition subproblems, respectively. This method allows separating the hard-to-approximate ∇fσ(z) from ϵ̄σ(y) and solving them separately in each time step. Importantly, this helps reintroduce the effective use of high-order methods on the diffusion subproblem as well as provides us with options to combine different specialized methods to maximize performance. We explore the two most famous first- and second-order splitting techniques for our task:
3.1 LIE-TROTTER SPLITTING (LTSP)
Our first example is the simple first-order Lie-Trotter splitting method (Trotter, 1959), which expresses the splitting as
dy/dσ = ϵ̄σ(y), y(σn) = x̄n, σ ∈ [σn+1, σn] (10)
dz/dσ = −∇fσ(z), z(σn) = y(σn+1), σ ∈ [σn+1, σn] (11)
with the solution of this step being x̄n+1 = z(σn+1). Note that σn is a decreasing sequence. Here Equation 10 is the same as Equation 3, which can be solved using any high-order numerical method,
Algorithm 1: Lie-Trotter Splitting (LTSP)
  sample x̄0 ∼ N(0, σ²max I);
  for n ∈ {0, ..., N − 1} do
    yn+1 = PLMS(x̄n, σn, σn+1, ϵ̄σ);
    x̄n+1 = yn+1 − (σn+1 − σn) ∇f(yn+1);
  end
  Result: x̄N

Algorithm 2: Strang Splitting (STSP)
  sample x̄0 ∼ N(0, σ²max I);
  for n ∈ {0, ..., N − 1} do
    zn+1 = x̄n − ((σn+1 − σn)/2) ∇f(x̄n);
    yn+1 = PLMS(zn+1, σn, σn+1, ϵ̄σ);
    x̄n+1 = yn+1 − ((σn+1 − σn)/2) ∇f(yn+1);
  end
  Result: x̄N
e.g., PLMS. For Equation 11, we can use a forward Euler step: zn+1 = zn −∆σ∇fσ(zn). (12)
This is equivalent to a single iteration of standard gradient descent with a learning rate ∆σ. This splitting scheme is summarized by Algorithm 1. We investigate different numerical methods for each subproblem in Section 4.1.
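Putting the pieces together, a compact sketch of Algorithm 1 (ours): `plms_step` is the diffusion solver sketched earlier and `grad_f(x, sigma)` is assumed to be a closure returning ∇fσ(x), e.g. wrapping the classifier-gradient sketch above:

```python
def sample_ltsp(sigmas, eps_bar, grad_f, x_init):
    # Lie-Trotter splitting: one diffusion update, then one condition (gradient) update per step.
    # sigmas is a decreasing sequence sigma_0 > sigma_1 > ... > sigma_N.
    x, history = x_init, []
    for n in range(len(sigmas) - 1):
        sigma, sigma_next = sigmas[n], sigmas[n + 1]
        y, history = plms_step(x, sigma, sigma_next, eps_bar, history)  # diffusion subproblem
        x = y - (sigma_next - sigma) * grad_f(y, sigma_next)            # condition subproblem (Euler)
    return x
```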
3.2 STRANG SPLITTING (STSP)
Strang splitting (or Strang-Marchuk) (Strang, 1968) is one of the most famous and widely used operator splitting methods. This second-order splitting works as follows:
dz/dσ = −∇fσ(z), z(σn) = x̄n, σ ∈ [(σn + σn+1)/2, σn] (13)
dy/dσ = ϵ̄σ(y), y(σn) = z((σn + σn+1)/2), σ ∈ [σn+1, σn] (14)
dz̃/dσ = −∇fσ(z̃), z̃((σn + σn+1)/2) = y(σn+1), σ ∈ [σn+1, (σn + σn+1)/2] (15)
Instead of solving each subproblem for a full step length, we solve the condition subproblem for half a step before and after solving the diffusion subproblem for a full step. In theory, we can swap the order of operations without affecting convergence, but it is practically cheaper to compute the condition term twice rather than the diffusion term twice because fσ is typically a smaller network compared to ϵ̄σ. The Strang splitting algorithm is shown in Algorithm 2. This method can be shown to have better accuracy than the Lie-Trotter method, as proven in Appendix N, although it requires evaluating the condition term twice per step in exchange for improved image quality. We assess this trade-off in the experiment section.
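Analogously, a sketch of Algorithm 2 (ours), which brackets each full diffusion update with two half-step condition updates:

```python
def sample_stsp(sigmas, eps_bar, grad_f, x_init):
    # Strang splitting: half condition step, full diffusion step, half condition step.
    x, history = x_init, []
    for n in range(len(sigmas) - 1):
        sigma, sigma_next = sigmas[n], sigmas[n + 1]
        half = 0.5 * (sigma_next - sigma)
        z = x - half * grad_f(x, sigma)                                  # first half condition step
        y, history = plms_step(z, sigma, sigma_next, eps_bar, history)  # full diffusion step
        x = y - half * grad_f(y, sigma_next)                            # second half condition step
    return x
```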
4 EXPERIMENTS
Extending on our observation that classical high-order methods failed on guided sampling, we conducted a series of experiments to investigate this problem and evaluate our solution. Section 4.1 uses a simple splitting method (first-order LTSP) to study the effects that high-order methods have on each subproblem, leading to our key finding that only the conditional subproblem is less suited to classical high-order methods. This section also determines the best combination of numerical methods for the two subproblems under LTSP splitting. Section 4.2 explores improvements from using a higher-order splitting method and compares our best scheme to previous work. Finally, Section 4.3 applies our approach to a variety of conditional generation tasks with minimal changes.
For our comparison, we use pre-trained state-of-the-art diffusion models and classifiers from Dhariwal & Nichol (2021), which were trained on the ImageNet dataset (Russakovsky et al., 2015) with 1,000 total sampling steps. We treat full-path samples from a classifier-guided DDIM at 1,000 steps as reference solutions. Then, the performance of each configuration is measured by the image similarity between its generated samples using fewer steps and the reference DDIM samples, both starting from the same initial noise map. Given the same sampling time, we expect configurations with better performance to better match the full DDIM. We measure image similarity using Learned Perceptual Image Patch Similarity (LPIPS) (Zhang et al., 2018) (lower is better) and measure sampling time on a single NVIDIA RTX 3090 and a 24-core AMD Threadripper 3960x.
4.1 FINDING A SUITABLE NUMERICAL METHOD FOR EACH SUBPROBLEM
To study the effects of different numerical methods on each subproblem of the guided ODE (Equation 8), we use the simplest Lie-Trotter splitting, which itself requires no additional network evaluations. This controlled experiment has two setups: a) we fix the numerical method for the condition subproblem (Equation 11) to first-order PLMS1 (Euler’s method) and vary the numerical method for the diffusion subproblem (Equation 10), and conversely b) we fix the method for the diffusion subproblem and vary the method for the condition subproblem. The numerical method options are Euler’s method (PLMS1), Heun’s method (RK2), 4th order Runge-Kutta’s method (RK4), and 2nd/4th order pseudo linear multi-step (PLMS2/PLMS4). We report LPIPS vs. sampling time of various numerical combinations on a diffusion model trained on ImageNet 256×256 in Figure 2. The red dotted lines indicate a reference DDIM score obtained from 250 sampling steps, a common choice that produces good samples that are perceptually close to those from a full 1,000-step DDIM (Dhariwal & Nichol, 2021; Nichol & Dhariwal, 2021).
Given a long sampling time, non-split PLMS4 performs better than the DDIM baseline. However, when the sampling time is reduced, the image quality of PLMS4 rapidly decreases and becomes much worse than that of DDIM, especially under 15 seconds in Figure 2. When we split the ODE and solve both subproblems using first-order PLMS1 (Euler), the performance is close to that of DDIM, which is also considered first-order but without any splitting. This helps verify that merely splitting the ODE does not significantly alter the sampling speed.
In the setup a), when RK2 and RK4 are used for the diffusion subproblem, they also perform worse than the DDIM baseline. This slowdown is caused by the additional evaluations of the network by these methods, which outweigh the improvement gained in each longer diffusion step. Note that if we instead measure the image quality with respect to the number of diffusion steps, RK2 and RK4 can outperform other methods (Appendix E); however, this is not our metric of interest. On the other hand, PLMS2 and PLMS4, which require no additional network evaluations, are about 8-10% faster than DDIM and can achieve the same LPIPS score as the DDIM that uses 250 sampling steps in 20-26 fewer steps. Importantly, when the sampling time is reduced, their performance does not degrade rapidly like the non-split PLMS4 and remains at the same level as DDIM.
In the setup b) where we vary the numerical method for the condition subproblem, the result reveals an interesting contrast: none of the methods beats DDIM, and some combinations, e.g., [PLMS1, RK4], even make the sampling diverge. These findings suggest that the gradients of conditional functions are less “compatible” with classical high-order methods, especially when used with a small number of steps. This phenomenon may be related to the “stiffness” condition of ODEs, which we discuss further in Section 5. For the remainder of our experiments, we will use the combination [PLMS4, PLMS1] for the diffusion and condition subproblems, respectively.
Sampling time within   5 sec.   10 sec.   15 sec.   20 sec.
DDIM    0.116   0.062   0.043   0.033
PLMS4   0.278   0.141   0.057   0.026
RK2     0.193   0.059   0.036   0.028
RK4     0.216   0.054   0.039   0.028
LTSP4   0.121   0.058   0.037   0.028
STSP4   0.079   0.035   0.022   0.013

Table 1: Average LPIPS when the sampling time is limited to be under 5 - 20 seconds.
4.2 IMPROVED SPLITTING METHOD
This experiment investigates improvements from using a high-order splitting method, specifically the Strang splitting method, with the numerical combination [PLMS4, PLMS1] and compares our methods to previous work. Note that besides DDIM Dhariwal & Nichol (2021), no previous work is specifically designed for accelerating guided sampling, thus the baselines in this comparison are only adaptations of the core numerical methods used in those papers. And to our knowledge, no prior guided-diffusion work uses splitting numerical methods. Non-split numerical method baselines are PLMS4, which is used in Liu et al. (2022), RK2, which is used in Karras et al. (2022); Lu et al. (2022), and higher-order RK4. We report the LPIPS scores of these methods with respect to the sampling time in Figure 3 and Table 1.
Without any splitting, PLMS4, RK2 and RK4 show significantly poorer image quality when used with short sampling times < 10 seconds. The best performer is our Strang splitting (STSP4), which can reach the same quality as 250-step DDIM while using 32-58% less sampling time. STSP4 also obtains the best (lowest) LPIPS scores for sampling times of 5, 10, 15, and 20 seconds. More statistical details and comparisons with other split combinations are in Appendices F and G.
In addition, we perform a quantitative evaluation for class-conditional generation by sampling 50,000 images based on uniformly chosen class conditions with a small number of sampling steps and evaluating the Fréchet Inception Distance (FID) Heusel et al. (2017) (lower is better) and the improved precision/recall Kynkäänniemi et al. (2019) (higher is better) against the ImageNet test set at 128, 256, and 512 resolutions. Following (Dhariwal & Nichol, 2021), we use a 25-step DDIM as a baseline, which already produces visually reasonable results. As PLMS and LTSP require the same number of network evaluations as the DDIM, they are also used with 25 steps. For STSP, which has a slower per-step evaluation time, we only allow 20 steps, which is the highest number of steps such that its sampling time is within that of the baseline 25-step DDIM. Here LTSP2 and STSP2 are Lie-Trotter and Strang splitting methods with the combination [PLMS2, PLMS1]. In Table 2, we report the results for three different ImageNet resolutions and the average sampling time per image in seconds.
Our STSP4 performs best on all measurements except Recall on ImageNet512. On ImageNet512, PLMS4 has the highest Recall score but a poor FID of 16, indicating that the generated images have good distribution coverage but may poorly represent the real distribution. On ImageNet256, STSP4 can yield 4.49 FID in 20 steps, compared to 4.59 FID in 250 steps originally reported in the paper (Dhariwal & Nichol, 2021). Our STSP4 is about 9.4× faster when tested on the same machine.
4.3 SPLITTING METHODS IN OTHER TASKS
Besides class-conditional generation, our approach can also accelerate any conditional image generation as long as the gradient of the conditional function can be defined. We test our approach on four tasks: text-to-image generation, image inpainting, colorization, and super-resolution.
Text-to-image generation: We use a pre-trained text-to-image Disco-Diffusion (Letts et al., 2021) based on Crowson (2021), which substitutes the classifier output with the dot product of the image and caption encodings from CLIP (Radford et al., 2021). For more related experiments on StableDiffusion (Rombach et al., 2022), please refer to Appendix L, M.
Image inpainting & colorization: For these two tasks, we follow the techniques proposed in Song et al. (2020b) and Chung et al. (2022a), which improve the conditional functions of both tasks with “manifold constraints.” We use the same diffusion model Dhariwal & Nichol (2021) trained on ImageNet as in our earlier Experiments 4.1 and 4.2.
Super-resolution: We follow the formulation from ILVR (Choi et al., 2021) combined with the manifold constraints Chung et al. (2022a), and also use our earlier ImageNet diffusion model.
Figure 4 compares our techniques, LTSP4 and STSP4, with the DDIM baseline and PLMS4 on text-to-image generation. Each result is produced using a fixed sampling time of about 26 seconds. STSP4, which uses 30 diffusion steps compared to 45 in the other methods, produces more realistic results with color contrast that is more similar to the full DDIM references’. Figure 5 shows that our STSP4 produces more convincing results than the DDIM baseline with fewer artifacts on the other three tasks while using the same 5 second sampling time. Implementation details, quantitative evaluations, and more results are in Appendix J, K.
5 DISCUSSION
Our findings show that when the sampling ODE consists of multiple terms from different networks, their numerical behaviors can be different and treating them separately can be more optimal. Another promising direction is to improve the behavior of the gradient of the conditional function / classifier itself and study whether related properties such as adversarial robustness or gradient smoothness can induce the desirable temporal smoothness in the sampling ODE. However, it is not yet clear what specific characteristics of the behavior play an important role. This challenge may be related to a
condition called “stiffness” in solving ODEs Ernst & Gerhard (2010), which lacks a clear definition but describes the situation where explicit numerical methods, such as RK or PLMS, require a very small step size even in regions with smooth curvature.
As an alternative to the classifier-guided model, Ho & Salimans (2021) propose a classifier-free model that can perform conditional generation without a classifier while remaining a generative model. This model can utilize high-order methods as no classifier is involved, but it requires evaluating the classifier-free network twice per step, which is typically more expensive than evaluating a normal diffusion model and a classifier. It is important to note that our accelerating technique and classifier-free models are not mutually exclusive, and one can still apply a conditional function and our splitting technique to guide a classifier-free model in a direction it has not been trained for.
While our paper only focuses on ODEs derived from the deterministic sampling of DDIM, one can convert SDE-based diffusion models to ODEs (Karras et al., 2022) and still use our technique. More broadly, we can accelerate any diffusion model that can be expressed as a differential equation with a summation of two terms. When these terms behave differently, the benefit from splitting can be substantial. Nevertheless, our findings are based on common, existing models and σ schedule from Dhariwal & Nichol (2021). Further investigation into the impact of the σ schedule or different types and architectures of diffusion models is still required.
6 CONCLUSION
In this paper, we investigate the failure of classical high-order numerical methods to accelerate guided diffusion sampling and propose a solution based on splitting numerical methods. We found that the gradients of conditional functions are less suitable for classical high-order numerical methods, and we design a technique based on Strang splitting and a combination of fourth- and first-order numerical methods. Our method achieves better LPIPS and FID scores than previous work given the same sampling time and is 32-58% faster than a 250-step DDIM baseline. Our technique can successfully accelerate a variety of tasks, such as text-to-image generation, inpainting, colorization, and super-resolution. | 1. What is the focus of the paper regarding guided diffusion sampling?
2. What are the strengths of the proposed approach, particularly in applying existing splitting methods?
3. What are the weaknesses of the paper, especially regarding the unclear theory behind non-splitting high-order methods?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any questions or suggestions regarding the experimental results and their presentation? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper focuses on accelerating guided diffusion sampling, where the backward ODE to be solved consists of two parts: the first one from the diffusion contribution and the second one from the classifier contribution. It is empirically found that non-splitting high-order ODE solvers do not work well when the number of time steps is small. To address the above issue, the authors consider applying splitting methods (LTSP 1959 and Strang splitting 1968) for solving the backward ODE, where the two parts in the ODE are treated separately and sequentially.
Strengths And Weaknesses
Strength:
The paper, for the first time, applies existing splitting methods (LTSP 1959 and Strang splitting 1968) for solving the backward ODE in guided diffusion sampling and obtains promising sampling results, which I think is interesting and novel.
The literature on existing high-order methods seems up-to-date.
Weaknesses: It is unclear from a theoretical point of view why non-splitting high-order methods do not work well. There may exist other high-order methods that work but are yet to be discovered.
Clarity, Quality, Novelty And Reproducibility
Overall, the paper is easy to follow. The newly designed sampling methods LTSP and STSP sound reasonable, where the ODE term from the classifier contribution is solved by a first order method and the ODE term from the diffusion contribution is solved by a high-order method.
There are a few inconsistent statements:
On page 4 of experiments, it says DDIM at 1000 steps are taken as reference solutions while in Figure 2 and 3 later on, DDIM of 250 steps are considered.
On page 6, it says condition subproblem (Equation 10)... diffusion subproblem (Equation 11), which I believe is a mistake.
I think it is more convincing to replace LPIPS in Figure 2 and 3 with FID, which most readers are more familiar with. |
In the setup b), where we vary the numerical method for the condition subproblem, the result reveals an interesting contrast: none of the methods beats DDIM, and some even make the sampling diverge [PLMS1, RK4]. These findings suggest that the gradients of conditional functions are less “compatible” with classical high-order methods, especially when used with a small number of steps. This phenomenon may be related to the “stiffness” condition of ODEs, which we discuss further in Section 5. For the remainder of our experiments, we will use the combination [PLMS4, PLMS1] for the diffusion and condition subproblems, respectively.
Sampling time within:   5 sec.   10 sec.   15 sec.   20 sec.
DDIM                    0.116    0.062     0.043     0.033
PLMS4                   0.278    0.141     0.057     0.026
RK2                     0.193    0.059     0.036     0.028
RK4                     0.216    0.054     0.039     0.028
LTSP4                   0.121    0.058     0.037     0.028
STSP4                   0.079    0.035     0.022     0.013

Table 1: Average LPIPS when the sampling time is limited to be under 5 - 20 seconds.
4.2 IMPROVED SPLITTING METHOD
This experiment investigates improvements from using a high-order splitting method, specifically the Strang splitting method, with the numerical combination [PLMS4, PLMS1] and compares our methods to previous work. Note that besides DDIM Dhariwal & Nichol (2021), no previous work is specifically designed for accelerating guided sampling, thus the baselines in this comparison are only adaptations of the core numerical methods used in those papers. And to our knowledge, no prior guided-diffusion work uses splitting numerical methods. Non-split numerical method baselines are PLMS4, which is used in Liu et al. (2022), RK2, which is used in Karras et al. (2022); Lu et al. (2022), and higher-order RK4. We report the LPIPS scores of these methods with respect to the sampling time in Figure 3 and Table 1.
Without any splitting, PLMS4, RK2 and RK4 show significantly poorer image quality when used with short sampling times < 10 seconds. The best performer is our Strang splitting (STSP4), which can reach the same quality as 250-step DDIM while using 32-58% less sampling time. STSP4 also obtains the best (lowest) LPIPS scores for sampling times of 5, 10, 15, and 20 seconds. More statistical details and comparison with other split combinations are in Appendix F, G.
In addition, we perform a quantitative evaluation for class-conditional generation by sampling 50,000 images based on uniformly chosen class conditions with a small number of sampling steps and evaluating the Fréchet Inception Distance (FID) Heusel et al. (2017) (lower is better) and the improved precision/recall Kynkäänniemi et al. (2019) (higher is better) against the ImageNet test set at 128, 256, and 512 resolutions. Following (Dhariwal & Nichol, 2021), we use a 25-step DDIM as a baseline, which already produces visually reasonable results. As PLMS and LTSP require the same number of network evaluations as the DDIM, they are also used with 25 steps. For STSP with a slower evaluation time, it is only allowed 20 steps, which is the highest number of steps such that its sampling time is within that of the baseline 25-step DDIM. Here LTSP2 and STSP2 are Lie-Trotter and Strang splitting methods with the combination [PLMS2, PLMS1]. In Table 2, we report the results for three different ImageNet resolutions and the average sampling time per image in seconds.
Our STSP4 performs best on all measurements except Recall on ImageNet512. On ImageNet512, PLMS4 has the highest Recall score but a poor FID of 16, indicating that the generated images have good distribution coverage but may poorly represent the real distribution. On ImageNet256, STSP4 can yield 4.49 FID in 20 steps, compared to 4.59 FID in 250 steps originally reported in the paper (Dhariwal & Nichol, 2021). Our STSP4 is about 9.4× faster when tested on the same machine.
4.3 SPLITTING METHODS IN OTHER TASKS
Besides class-conditional generation, our approach can also accelerate any conditional image generation as long as the gradient of the conditional function can be defined. We test our approach on four tasks: text-to-image generation, image inpainting, colorization, and super-resolution.
Text-to-image generation: We use a pre-trained text-to-image Disco-Diffusion (Letts et al., 2021) based on Crowson (2021), which substitutes the classifier output with the dot product of the image and caption encodings from CLIP (Radford et al., 2021). For more related experiments on StableDiffusion (Rombach et al., 2022), please refer to Appendix L, M.
Image inpainting & colorization: For these two tasks, we follow the techniques proposed in Song et al. (2020b) and Chung et al. (2022a), which improves the conditional functions of both tasks with “manifold constraints.” We use the same diffusion model Dhariwal & Nichol (2021) trained on ImageNet as our earlier Experiments 4.1, 4.2.
Super-resolution: We follow the formulation from ILVR (Choi et al., 2021) combined with the manifold constraints Chung et al. (2022a), and also use our earlier ImageNet diffusion model.
Figure 4 compares our techniques, LTSP4 and STSP4, with the DDIM baseline and PLMS4 on text-to-image generation. Each result is produced using a fixed sampling time of about 26 seconds. STSP4, which uses 30 diffusion steps compared to 45 in the other methods, produces more realistic results with color contrast that is more similar to the full DDIM references’. Figure 5 shows that our STSP4 produces more convincing results than the DDIM baseline with fewer artifacts on the other three tasks while using the same 5 second sampling time. Implementation details, quantitative evaluations, and more results are in Appendix J, K.
5 DISCUSSION
Our findings show that when the sampling ODE consists of multiple terms from different networks, their numerical behaviors can be different and treating them separately can be more optimal. Another promising direction is to improve the behavior of the gradient of the conditional function / classifier itself and study whether related properties such as adversarial robustness or gradient smoothness can induce the desirable temporal smoothness in the sampling ODE. However, it is not yet clear what specific characteristics of the behavior play an important role. This challenge may be related to a
condition called “stiffness” in solving ODEs Ernst & Gerhard (2010), which lacks a clear definition but describes the situation where explicit numerical methods, such as RK or PLMS, require a very small step size even in regions with smooth curvature.
As an alternative to the classifier-guided model, Ho & Salimans (2021) propose a classifier-free model that can perform conditional generation without a classifier while remaining a generative model. This model can utilize high-order methods as no classifier is involved, but it requires evaluating the classifier-free network twice per step, which is typically more expensive than evaluating a normal diffusion model and a classifier. It is important to note that our accelerating technique and classifier-free models are not mutually exclusive, and one can still apply a conditional function and our splitting technique to guide a classifier-free model in a direction it has not been trained for.
While our paper only focuses on ODEs derived from the deterministic sampling of DDIM, one can convert SDE-based diffusion models to ODEs (Karras et al., 2022) and still use our technique. More broadly, we can accelerate any diffusion model that can be expressed as a differential equation with a summation of two terms. When these terms behave differently, the benefit from splitting can be substantial. Nevertheless, our findings are based on common, existing models and σ schedule from Dhariwal & Nichol (2021). Further investigation into the impact of the σ schedule or different types and architectures of diffusion models is still required.
6 CONCLUSION
In this paper, we investigate the failure of classical high-order numerical methods to accelerate guided diffusion sampling and propose a solution based on splitting numerical methods. We find that the gradients of conditional functions are less suited to classical high-order numerical methods and design a technique based on Strang splitting and a combination of fourth- and first-order numerical methods. Our method achieves better LPIPS and FID scores than previous work given the same sampling time and is 32-58% faster than a 250-step DDIM baseline. Our technique can successfully accelerate a variety of tasks, such as text-to-image generation, inpainting, colorization, and super-resolution. | 1. What is the focus of the paper regarding classifier-guided sampling in diffusion models?
2. What are the strengths and weaknesses of the proposed method, particularly in its application and comparisons with other works?
3. Do you have any questions regarding the theoretical analysis and convergence of the proposed numerical splitting methods?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper analyzes classifier-guided sampling of diffusion models and observes that applying higher-order methods for accelerated sampling from the model does not work well in the guidance scenario. The root cause for that is that the additional guidance term defined by the classifier makes the generative ODE harder to solve. Explicit higher-order integrators do not seem to be suitable for the resulting stiff classifier-guided ODE. Based on that observation, the work proposes to split the ODE into two separate terms, the diffusion model's score term and the classifier term. These are then solved separately with different integrators and afterwards combined, using numerical splitting methods. The paper shows improved performance of classifier-guided diffusion model sampling over some selected baselines when using a limited number of synthesis steps.
Strengths And Weaknesses
Strengths:
The idea to split the diffusion model's main score and classifier term in the generative ODE into separate parts, and solving them separately, is novel and certainly seems like a good and sensible idea.
The analyses that are performed in Figure 2 are interesting and insightful. They nicely show that the additional classifier term is the problem that makes higher-order methods break down in the classifier-guidance setting.
The experimental results on classifier-guided diffusion model sampling support the value of the proposed method.
Weaknesses:
Most state-of-the-art conditional diffusion models these days rely on classifier-free guidance for conditional sampling. Unfortunately, the proposed methodology only applies to classifier guidance. This reduces the significance of the proposed method.
The baseline comparisons are insufficient. Comparisons to the recent state-of-the-art solvers DEIS [1] and DPM solver [2] are missing. Furthermore, for the experiments on inpainting, colorization and super-resolution, only qualitative comparisons are presented. Also, only DDIM is considered there for comparison.
The paper lacks a proper theoretical analyses of the proposed methods. Convergence, order and stability of the Lie-Trotter Splitting and Strang Splitting techniques are nowhere discussed. It is insufficient to just write sentences like "This method can be proved to have better accuracy". Moreover, I believe the deep learning community is not deeply familiar with these splitting schemes, so a much more thorough discussion and analysis is needed.
The statement that "no prior diffusion work uses splitting numerical methods" is incorrect. Critically-damped Langevin Diffusion [3] uses Strang Splitting to develop a sampler for their setting (see their Section 3.3). This work is not cited or discussed at all.
[1] Zhang and Chen, Fast Sampling of Diffusion Models with Exponential Integrator, 2022
[2] Lu et al., DPM-Solver: A Fast ODE Solver for Diffusion Probabilistic Model Sampling in Around 10 Steps, 2022
[3] Dockhorn et al., Score-Based Generative Modeling with Critically-Damped Langevin Diffusion, 2021
Clarity, Quality, Novelty And Reproducibility
Clarity: The paper is easy to read and follow.
Quality: The paper is of mediocre quality. As discussed above, baseline comparisons are insufficient, some relevant work is not cited or discussed, and the mathematical analyses of the proposed integration scheme are lacking.
Novelty: The method is novel. It is a sensible approach for classifier-guided sampling, but the overall significance is somewhat limited, because the other important setting of classifier-free guidance is not covered.
Reproducibility: There are no concerns with respect to reproducibility. The submission also includes code.
(Small note: Eq. (4) has a typo. I believe it should be e4 on the far right)
ICLR | Title
Accelerating Guided Diffusion Sampling with Splitting Numerical Methods
Abstract
Guided diffusion is a technique for conditioning the output of a diffusion model at sampling time without retraining the network for each specific task. However, one drawback of diffusion models, whether they are guided or unguided, is their slow sampling process. Recent techniques can accelerate unguided sampling by applying high-order numerical methods to the sampling process when viewed as differential equations. In contrast, we discover that the same techniques do not work for guided sampling, and little has been explored about its acceleration. This paper explores the culprit of this problem and provides a solution based on operator splitting methods, motivated by our key finding that classical high-order numerical methods are unsuitable for the conditional function. Our proposed method can re-utilize the high-order methods for guided sampling and can generate images with the same quality as a 250-step DDIM baseline using 32-58% less sampling time on ImageNet256. We also demonstrate usage on a wide variety of conditional generation tasks, such as text-to-image generation, colorization, inpainting, and super-resolution.
1 INTRODUCTION
A family of generative models known as diffusion models has recently gained a lot of attention with state-of-the-art image generation quality (Dhariwal & Nichol, 2021). Guided diffusion is an approach for controlling the output of a trained diffusion model for conditional generation tasks without retraining its network. By engineering a task-specific conditional function and modifying only the sampling procedure, guided diffusion models can be used in a variety of applications, such as class-conditional image generation (Dhariwal & Nichol, 2021; Kawar et al., 2022), text-to-image generation (Nichol et al., 2022), image-to-image translation (Zhao et al., 2022), inpainting (Chung et al., 2022a), colorization (Song et al., 2020b), image composition (Sasaki et al., 2021), adversarial purification (Wang et al., 2022; Wu et al., 2022) and super-resolution (Choi et al., 2021).
One common drawback of both guided and regular “unguided” diffusion models is their slow sampling processes, usually requiring hundreds of iterations to produce a single image. Recent speedup attempts include improving the noise schedule (Nichol & Dhariwal, 2021; Watson et al., 2021), redefining the diffusion process to be non-Markovian, thereby allowing a deterministic sampling process Song et al. (2020a), network distillation that teaches a student model to simulate multiple sampling steps of a teacher model Salimans & Ho (2022); Luhman & Luhman (2021), among others. Song et al. (2020a) show how each sampling step can be expressed as a first-order numerical step of an ordinary differential equation (ODE). Similarly, Song et al. (2020b) express the sampling of a score-based model as solving a stochastic differential equation (SDE). By regarding the sampling process as an ODE/SDE, many high-order numerical methods have been suggested, such as Liu et al. (2022), Zhang & Chen (2022), and Zhang et al. (2022) with impressive results on unguided diffusion models. However, when applied to guided diffusion models, these methods produce surprisingly poor results (see Figure 1)—given a few number of steps, those high-order numerical methods actually perform worse than low-order methods.
Guided sampling differs from the unguided one by the addition of the gradients of the conditional function to its sampling equation. The observed performance decline thus suggests that classical high-order methods may not be suitable for the conditional function and, consequently, the guided
sampling equation as a whole. Our paper tests this hypothesis and presents an approach to accelerating guided diffusion sampling. The key idea is to use an operator splitting method to split the less well-behaved conditional function term from the standard diffusion term and solve them separately. This approach not only allows re-utilizing the successful high-order methods on the diffusion term but also provides us with options to combine different specialized methods for each term to maximize performance. Note that splitting methods have also been explored by Dockhorn et al. (2022) to solve unguided diffusion SDEs, but our work focuses on accelerating guided diffusion ODEs.
Our design process includes comparing different splitting methods and numerical methods for each split term. When tested on ImageNet, our approach achieves the same level of image quality as a DDIM baseline while reducing the sampling time by approximately 32-58%. Compared with other sampling methods using the same sampling time, our approach provides better image quality as measured by LPIPS, FID, and Precision/Recall. With only minimal modifications to the sampling equation, we also show successful acceleration on various conditional generation tasks.
2 BACKGROUND
This section provides a high-level summary of the theoretical foundation of diffusion models as well as numerical methods that have been used for diffusion models. Here we briefly explain a few that contribute to our method.
2.1 DIFFUSION MODELS
Assuming that x0 is a random variable from the data distribution we wish to reproduce, diffusion models define a sequence of Gaussian noise degradation of x0 as random variables x1, x2, ..., xT , where xt ∼ N ( √ 1− βtxt−1, βtI) and βt ∈ [0, 1] are parameters that control the noise levels. With a property of Gaussian distribution, we can express xt directly as a function of x0 and noise ϵ ∼ N (0, I) by xt = √ ᾱtx0+ √ 1− ᾱtϵ, where ᾱt = ∏t i=1(1−βi). By picking a sufficiently large T (e.g., 1,000) and an appropriate set of βt, we can assume xT is a standard Gaussian distribution. The main idea of diffusion model generation is to sample a Gaussian noise xT and use it to reversely sample xT−1, xT−2, ... until we obtain x0, which belongs to our data distribution.
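As a concrete illustration of this closed-form noising relation, here is a small NumPy sketch; the linear β schedule and T = 1000 are illustrative placeholders, not necessarily the schedule used by the pretrained models.

```python
import numpy as np

T = 1000
betas = np.linspace(1e-4, 0.02, T)       # illustrative noise schedule beta_t
alpha_bars = np.cumprod(1.0 - betas)     # abar_t = prod_{i<=t} (1 - beta_i)

def noise_sample(x0, t, rng):
    """Draw x_t directly from x_0: x_t = sqrt(abar_t) x_0 + sqrt(1 - abar_t) eps."""
    eps = rng.standard_normal(x0.shape)
    x_t = np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * eps
    return x_t, eps
```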
Ho et al. (2020) propose Denoising Diffusion Probabilistic Model (DDPM) and explain how to employ a neural network ϵθ(xt, t) to predict the noise ϵ that is used to compute xt. To train the network, we sample a training image x0, t, and ϵ to compute xt using the above relationship. Then, we optimize our network ϵθ to minimize the difference between the predicted and real noise, i.e., ∥ϵ− ϵθ(xt, t)∥2.
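A hedged PyTorch-style sketch of this training objective is shown below; `eps_model` and the tensor `alpha_bars` (assumed to live on the same device as the data) are placeholders for the actual network and noise schedule.

```python
import torch

def ddpm_loss(eps_model, x0, alpha_bars):
    """Simple DDPM objective: sample t and eps, form x_t, regress the noise."""
    b = x0.shape[0]
    t = torch.randint(0, alpha_bars.shape[0], (b,), device=x0.device)
    a_bar = alpha_bars[t].view(b, *([1] * (x0.dim() - 1)))   # broadcast over image dims
    eps = torch.randn_like(x0)
    x_t = a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * eps
    return ((eps - eps_model(x_t, t)) ** 2).mean()
```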
Song et al. (2020a) introduce Denoising Diffusion Implicit Model (DDIM), which uses the network ϵθ to deterministically obtain xt−1 given xt. The DDIM generative process can be written as
xt−1 = √(ᾱt−1/ᾱt) (xt − √(1− ᾱt) ϵθ(xt, t)) + √(1− ᾱt−1) ϵθ(xt, t). (1)
This formulation could be used to skip many sampling steps and boost sampling speed. To turn this into an ODE, we rewrite Equation 1 as:
xt−∆t/√ᾱt−∆t = xt/√ᾱt + (√((1− ᾱt−∆t)/ᾱt−∆t) − √((1− ᾱt)/ᾱt)) ϵθ(xt, t), (2)
which is now equivalent to a numerical step in solving an ODE. To derive the corresponding ODE, we can re-parameterize σt = √ 1− ᾱt/ √ ᾱt, x̄(t) = xt/ √ ᾱt and ϵ̄σ(x̄) = ϵθ(xt, t), yielding x̄(t−∆t)− x̄(t) = (σt−∆t − σt)ϵ̄σ(x̄). By letting (σt−∆t − σt) → 0, the ODE becomes:
dx̄/dσ = ϵ̄σ(x̄). (3)
Note that this change of variables is equivalent to an exponential integrator technique described in both Zhang & Chen (2022) and Lu et al. (2022). Since xt and x̄(t) have the same value at t = 0, our work can focus on solving x̄(t) rather than xt. Many numerical methods can be applied to the ODE Equation 3 to accelerate diffusion sampling. We next discuss some of them that are relevant.
2.2 NUMERICAL METHODS
Euler’s Method is the most basic numerical method. A forward Euler step is given by x̄n+1 = x̄n + ∆σϵ̄σ(x̄n). When the forward Euler step is applied to the ODE Equation 3, we obtain the DDIM formulation (Song et al., 2020a).
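In code, one DDIM update in the (x̄, σ) parameterization is exactly this Euler step (a sketch; `eps_model` stands in for ϵ̄σ, and converting back to xt only requires multiplying by √ᾱt):

```python
def euler_step(x, sigma, sigma_next, eps_model):
    """DDIM viewed as one forward Euler step of Eq. 3 in the (x_bar, sigma) variables."""
    return x + (sigma_next - sigma) * eps_model(x, sigma)
```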
Heun’s Method, also known as the trapezoid rule or improved Euler, is given by: x̄n+1 = x̄n + (∆σ/2)(e1 + e2), where e1 = ϵ̄σ(x̄n) and e2 = ϵ̄σ(x̄n + ∆σ e1). This method splits Euler’s method into two steps to improve accuracy. Many papers have used this method on diffusion models, including Algorithm 1 in Karras et al. (2022) and DPM-Solver-2 in Lu et al. (2022). This method is also the simplest case of Predictor-Corrector methods used in Song et al. (2020b).
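A corresponding sketch of one Heun step, using the same placeholder `eps_model` and evaluating the second slope at the end-point σ:

```python
def heun_step(x, sigma, sigma_next, eps_model):
    """Heun's method: average the slope at the start and at an Euler-predicted end point."""
    h = sigma_next - sigma
    e1 = eps_model(x, sigma)
    e2 = eps_model(x + h * e1, sigma_next)
    return x + 0.5 * h * (e1 + e2)
```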
Runge-Kutta Methods represent a class of numerical methods that integrate information from multiple hidden steps and provide high accuracy results. Heun’s method also belongs to a family of 2nd-order Runge-Kutta methods (RK2). The most well-known variant is the 4th-order Runge-Kutta method (RK4), which is written as follows:
e1 = ϵ̄σ(x̄n),   e2 = ϵ̄σ(x̄n + (∆σ/2) e1),   e3 = ϵ̄σ(x̄n + (∆σ/2) e2),   e4 = ϵ̄σ(x̄n + ∆σ e3),
x̄n+1 = x̄n + (∆σ/6)(e1 + 2e2 + 2e3 + e4). (4)
This method has been tested on diffusion models in Liu et al. (2022) and Salimans & Ho (2022), but it has not been used as the main proposed method in any paper.
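For reference, Equation 4 translates to the following sketch; the midpoint evaluations use σ + ∆σ/2, the standard choice for a σ-dependent vector field, and the three extra network evaluations per step are visible directly in the code.

```python
def rk4_step(x, sigma, sigma_next, eps_model):
    """One classical 4th-order Runge-Kutta step for dx/dsigma = eps_model(x, sigma)."""
    h = sigma_next - sigma
    e1 = eps_model(x, sigma)
    e2 = eps_model(x + 0.5 * h * e1, sigma + 0.5 * h)
    e3 = eps_model(x + 0.5 * h * e2, sigma + 0.5 * h)
    e4 = eps_model(x + h * e3, sigma_next)
    return x + (h / 6.0) * (e1 + 2.0 * e2 + 2.0 * e3 + e4)
```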
Linear Multi-Step Method, similar to the Runge-Kutta methods, aims to combine information from several steps. However, rather than evaluating new hidden steps, this method uses the previous steps to estimate the new step. The 1st-order formulation is the same as Euler’s method. The 2nd-order formulation is given by
x̄n+1 = x̄n + (∆σ/2)(3e0 − e1), (5)
while the 4th-order formulation is given by
x̄n+1 = x̄n + (∆σ/24)(55e0 − 59e1 + 37e2 − 9e3), (6)
where ek = ϵ̄σ(x̄n−k). These formulations are designed for a constant ∆σ in each step. However, our experiments and previous work that uses this method (e.g., Liu et al. (2022); Zhang & Chen
(2022)) still show good results when this assumption is not strictly satisfied, i.e., when ∆σ is not constant. We will refer to these formulations as PLMS (Pseudo Linear Multi-Step) for the rest of the paper, like in Liu et al. (2022). A similar linear multi-step method for non-constant ∆σ can also be derived using a technique used in Zhang & Chen (2022), which we detail in Appendix B. This non-constant version can improve upon PLMS slightly, but it is not as flexible because we have to re-derive the update rule every time the σ schedule changes.
3 SPLITTING METHODS FOR GUIDED DIFFUSION MODELS
This section introduces our technique that uses splitting numerical methods to accelerate guided diffusion sampling. We first focus our investigation on classifier-guided diffusion models for classconditional generation and later demonstrate how this technique can be used for other conditional generation tasks in Section 4.3. Like any guided diffusion models, classifier-guided models (Dhariwal & Nichol, 2021) share the same training objective with regular unguided models with no modifications to the training procedure; but the sampling process is guided by an additional gradient signal from an external classifier to generate class-specific output images. Specifically, the sampling process is given by
ϵ̂ = ϵθ(xt) − √(1− ᾱt) ∇x log pϕ(c|xt),   xt−1 = √ᾱt−1 ((xt − √(1− ᾱt) ϵ̂)/√ᾱt) + √(1− ᾱt−1) ϵ̂, (7)
where pϕ(c|xt) is a classifier model trained to output the probability of xt belonging to class c. As discussed in the previous section, we can rewrite this formulation as a “guided ODE”:
dx̄/dσ = ϵ̄σ(x̄) − ∇fσ(x̄), (8)
where fσ(x̄) = (σ/√(σ²+1)) log pϕ(c|xt). We refer to fσ as the conditional function, which can be substituted with other functions for different tasks. After obtaining the ODE form, any numerical solver mentioned earlier can be readily applied to accelerate the sampling process. However, we observe that classical high-order numerical methods (e.g., PLMS4, RK4) fail to accelerate this task (see Figure 1) and even perform worse than the baseline DDIM.
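To make the conditional term concrete, the following PyTorch-style sketch shows how ∇fσ(x̄) can be obtained by backpropagating through a classifier. The `classifier(x_t, sigma)` signature and the class-conditioning argument are our placeholders; the conversion x_t = x̄/√(σ²+1) uses the identity √ᾱt = 1/√(σ²+1).

```python
import torch
import torch.nn.functional as F

def cond_grad(x_bar, sigma, classifier, class_label):
    """Gradient of f_sigma(x_bar) = sigma/sqrt(sigma^2+1) * log p(c | x_t),
    computed with autograd through the classifier."""
    x = x_bar.detach().requires_grad_(True)
    x_t = x / (sigma ** 2 + 1.0) ** 0.5          # x_t = sqrt(abar_t) * x_bar
    logits = classifier(x_t, sigma)              # placeholder signature
    log_prob = F.log_softmax(logits, dim=-1)[:, class_label].sum()
    scale = sigma / (sigma ** 2 + 1.0) ** 0.5
    (grad,) = torch.autograd.grad(scale * log_prob, x)
    return grad
```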
We hypothesize that the two terms in the guided ODE may have different numerical behaviors with the conditional term being less suitable to classical high-order methods. We speculate that the difference could be partly attributed to how they are computed: ∇fσ(x̄) is computed through backpropagation, whereas ϵ̄σ(x̄) is computed directly by evaluating a network. One possible solution to handle terms with different behaviors is the so-called operator splitting method, which divides the problem into two subproblems:
dy/dσ = ϵ̄σ(y),   dz/dσ = −∇fσ(z). (9)
We call these the diffusion and condition subproblems, respectively. This method allows separating the hard-to-approximate ∇fσ(z) from ϵ̄σ(y) and solving them separately in each time step. Importantly, this helps reintroduce the effective use of high-order methods on the diffusion subproblem as well as provides us with options to combine different specialized methods to maximize performance. We explore two most famous first- and second-order splitting techniques for our task:
3.1 LIE-TROTTER SPLITTING (LTSP)
Our first example is the simple first-order Lie-Trotter splitting method (Trotter, 1959), which expresses the splitting as
dy/dσ = ϵ̄σ(y),   y(σn) = x̄n,   σ ∈ [σn+1, σn] (10)
dz/dσ = −∇fσ(z),   z(σn) = y(σn+1),   σ ∈ [σn+1, σn] (11)
with the solution of this step being x̄n+1 = z(σn+1). Note that σn is a decreasing sequence. Here Equation 10 is the same as Equation 3, which can be solved using any high-order numerical method,
Algorithm 1: Lie-Trotter Splitting (LTSP)
  sample x̄0 ∼ N(0, σ²max I)
  for n ∈ {0, ..., N − 1} do
    yn+1 = PLMS(x̄n, σn, σn+1, ϵ̄σ)
    x̄n+1 = yn+1 − (σn+1 − σn) ∇f(yn+1)
  end
  Result: x̄N

Algorithm 2: Strang Splitting (STSP)
  sample x̄0 ∼ N(0, σ²max I)
  for n ∈ {0, ..., N − 1} do
    zn+1 = x̄n − ((σn+1 − σn)/2) ∇f(x̄n)
    yn+1 = PLMS(zn+1, σn, σn+1, ϵ̄σ)
    x̄n+1 = yn+1 − ((σn+1 − σn)/2) ∇f(yn+1)
  end
  Result: x̄N
e.g., PLMS. For Equation 11, we can use a forward Euler step: zn+1 = zn −∆σ∇fσ(zn). (12)
This is equivalent to a single iteration of standard gradient descent with a learning rate ∆σ. This splitting scheme is summarized by Algorithm 1. We investigate different numerical methods for each subproblem in Section 4.1.
3.2 STRANG SPLITTING (STSP)
Strang splitting (or Strang-Marchuk) (Strang, 1968) is one of the most famous and widely used operator splitting methods. This second-order splitting works as follows:
dz/dσ = −∇fσ(z),   z(σn) = x̄n,   σ ∈ [½(σn + σn+1), σn] (13)
dy/dσ = ϵ̄σ(y),   y(σn) = z(½(σn + σn+1)),   σ ∈ [σn+1, σn] (14)
dz̃/dσ = −∇fσ(z̃),   z̃(½(σn + σn+1)) = y(σn+1),   σ ∈ [σn+1, ½(σn + σn+1)] (15)
Instead of solving each subproblem for a full step length, we solve the condition subproblem for half a step before and after solving the diffusion subproblem for a full step. In theory, we can swap the order of operations without affecting convergence, but it is practically cheaper to compute the condition term twice rather than the diffusion term twice because fσ is typically a smaller network compared to ϵ̄σ. The Strang splitting algorithm is shown in Algorithm 2. This method can be shown to have better accuracy than the Lie-Trotter method, as proven in Appendix N, although it requires evaluating the condition term twice per step in exchange for improved image quality. We assess this trade-off in the experiment section.
4 EXPERIMENTS
Extending on our observation that classical high-order methods failed on guided sampling, we conducted a series of experiments to investigate this problem and evaluate our solution. Section 4.1 uses a simple splitting method (first-order LTSP) to study the effects that high-order methods have on each subproblem, leading to our key finding that only the conditional subproblem is less suited to classical high-order methods. This section also determines the best combination of numerical methods for the two subproblems under LTSP splitting. Section 4.2 explores improvements from using a higher-order splitting method and compares our best scheme to previous work. Finally, Section 4.3 applies our approach to a variety of conditional generation tasks with minimal changes.
For our comparison, we use pre-trained state-of-the-art diffusion models and classifiers from Dhariwal & Nichol (2021), which were trained on the ImageNet dataset (Russakovsky et al., 2015) with 1,000 total sampling steps. We treat full-path samples from a classifier-guided DDIM at 1,000 steps as reference solutions. Then, the performance of each configuration is measured by the image similarity between its generated samples using fewer steps and the reference DDIM samples, both starting from the same initial noise map. Given the same sampling time, we expect configurations with better performance to better match the full DDIM. We measure image similarity using Learned Perceptual Image Patch Similarity (LPIPS) (Zhang et al., 2018) (lower is better) and measure sampling time on a single NVIDIA RTX 3090 and a 24-core AMD Threadripper 3960x.
4.1 FINDING A SUITABLE NUMERICAL METHOD FOR EACH SUBPROBLEM
To study the effects of different numerical methods on each subproblem of the guided ODE (Equation 8), we use the simplest Lie-Trotter splitting, which itself requires no additional network evaluations. This controlled experiment has two setups: a) we fix the numerical method for the condition subproblem (Equation 11) to first-order PLMS1 (Euler’s method) and vary the numerical method for the diffusion subproblem (Equation 10), and conversely b) we fix the method for the diffusion subproblem and vary the method for the condition subproblem. The numerical method options are Euler’s method (PLMS1), Heun’s method (RK2), 4th order Runge-Kutta’s method (RK4), and 2nd/4th order pseudo linear multi-step (PLMS2/PLMS4). We report LPIPS vs. sampling time of various numerical combinations on a diffusion model trained on ImageNet 256×256 in Figure 2. The red dotted lines indicate a reference DDIM score obtained from 250 sampling steps, a common choice that produces good samples that are perceptually close to those from a full 1,000-step DDIM (Dhariwal & Nichol, 2021; Nichol & Dhariwal, 2021).
Given a long sampling time, non-split PLMS4 performs better than the DDIM baseline. However, when the sampling time is reduced, the image quality of PLMS4 rapidly decreases and becomes much worse than that of DDIM, especially under 15 seconds in Figure 2. When we split the ODE and solve both subproblems using first-order PLMS1 (Euler), the performance is close to that of DDIM, which is also considered first-order but without any splitting. This helps verify that merely splitting the ODE does not significantly alter the sampling speed.
In the setup a), when RK2 and RK4 are used for the diffusion subproblem, they also perform worse than the DDIM baseline. This slowdown is caused by the additional evaluations of the network by these methods, which outweigh the improvement gained in each longer diffusion step. Note that if we instead measure the image quality with respect to the number of diffusion steps, RK2 and RK4 can outperform other methods (Appendix E); however, this is not our metric of interest. On the other hand, PLMS2 and PLMS4, which require no additional network evaluations, are about 8-10% faster than DDIM and can achieve the same LPIPS score as the DDIM that uses 250 sampling steps in 20-26 fewer steps. Importantly, when the sampling time is reduced, their performance does not degrade rapidly like the non-split PLMS4 and remains at the same level as DDIM.
In the setup b), where we vary the numerical method for the condition subproblem, the result reveals an interesting contrast: none of the methods beats DDIM, and some even make the sampling diverge [PLMS1, RK4]. These findings suggest that the gradients of conditional functions are less “compatible” with classical high-order methods, especially when used with a small number of steps. This phenomenon may be related to the “stiffness” condition of ODEs, which we discuss further in Section 5. For the remainder of our experiments, we will use the combination [PLMS4, PLMS1] for the diffusion and condition subproblems, respectively.
Sampling time within:   5 sec.   10 sec.   15 sec.   20 sec.
DDIM                    0.116    0.062     0.043     0.033
PLMS4                   0.278    0.141     0.057     0.026
RK2                     0.193    0.059     0.036     0.028
RK4                     0.216    0.054     0.039     0.028
LTSP4                   0.121    0.058     0.037     0.028
STSP4                   0.079    0.035     0.022     0.013

Table 1: Average LPIPS when the sampling time is limited to be under 5 - 20 seconds.
4.2 IMPROVED SPLITTING METHOD
This experiment investigates improvements from using a high-order splitting method, specifically the Strang splitting method, with the numerical combination [PLMS4, PLMS1] and compares our methods to previous work. Note that besides DDIM Dhariwal & Nichol (2021), no previous work is specifically designed for accelerating guided sampling, thus the baselines in this comparison are only adaptations of the core numerical methods used in those papers. And to our knowledge, no prior guided-diffusion work uses splitting numerical methods. Non-split numerical method baselines are PLMS4, which is used in Liu et al. (2022), RK2, which is used in Karras et al. (2022); Lu et al. (2022), and higher-order RK4. We report the LPIPS scores of these methods with respect to the sampling time in Figure 3 and Table 1.
Without any splitting, PLMS4, RK2 and RK4 show significantly poorer image quality when used with short sampling times < 10 seconds. The best performer is our Strang splitting (STSP4), which can reach the same quality as 250-step DDIM while using 32-58% less sampling time. STSP4 also obtains the best (lowest) LPIPS scores for sampling times of 5, 10, 15, and 20 seconds. More statistical details and comparison with other split combinations are in Appendix F, G.
In addition, we perform a quantitative evaluation for class-conditional generation by sampling 50,000 images based on uniformly chosen class conditions with a small number of sampling steps and evaluating the Fréchet Inception Distance (FID) Heusel et al. (2017) (lower is better) and the improved precision/recall Kynkäänniemi et al. (2019) (higher is better) against the ImageNet test set at 128, 256, and 512 resolutions. Following (Dhariwal & Nichol, 2021), we use a 25-step DDIM as a baseline, which already produces visually reasonable results. As PLMS and LTSP require the same number of network evaluations as the DDIM, they are also used with 25 steps. For STSP with a slower evaluation time, it is only allowed 20 steps, which is the highest number of steps such that its sampling time is within that of the baseline 25-step DDIM. Here LTSP2 and STSP2 are Lie-Trotter and Strang splitting methods with the combination [PLMS2, PLMS1]. In Table 2, we report the results for three different ImageNet resolutions and the average sampling time per image in seconds.
Our STSP4 performs best on all measurements except Recall on ImageNet512. On ImageNet512, PLMS4 has the highest Recall score but a poor FID of 16, indicating that the generated images have good distribution coverage but may poorly represent the real distribution. On ImageNet256, STSP4 can yield 4.49 FID in 20 steps, compared to 4.59 FID in 250 steps originally reported in the paper (Dhariwal & Nichol, 2021). Our STSP4 is about 9.4× faster when tested on the same machine.
4.3 SPLITTING METHODS IN OTHER TASKS
Besides class-conditional generation, our approach can also accelerate any conditional image generation as long as the gradient of the conditional function can be defined. We test our approach on four tasks: text-to-image generation, image inpainting, colorization, and super-resolution.
Text-to-image generation: We use a pre-trained text-to-image Disco-Diffusion (Letts et al., 2021) based on Crowson (2021), which substitutes the classifier output with the dot product of the image and caption encodings from CLIP (Radford et al., 2021). For more related experiments on StableDiffusion (Rombach et al., 2022), please refer to Appendix L, M.
Image inpainting & colorization: For these two tasks, we follow the techniques proposed in Song et al. (2020b) and Chung et al. (2022a), which improves the conditional functions of both tasks with “manifold constraints.” We use the same diffusion model Dhariwal & Nichol (2021) trained on ImageNet as our earlier Experiments 4.1, 4.2.
Super-resolution: We follow the formulation from ILVR (Choi et al., 2021) combined with the manifold constraints Chung et al. (2022a), and also use our earlier ImageNet diffusion model.
Figure 4 compares our techniques, LTSP4 and STSP4, with the DDIM baseline and PLMS4 on text-to-image generation. Each result is produced using a fixed sampling time of about 26 seconds. STSP4, which uses 30 diffusion steps compared to 45 in the other methods, produces more realistic results with color contrast that is more similar to the full DDIM references’. Figure 5 shows that our STSP4 produces more convincing results than the DDIM baseline with fewer artifacts on the other three tasks while using the same 5 second sampling time. Implementation details, quantitative evaluations, and more results are in Appendix J, K.
5 DISCUSSION
Our findings show that when the sampling ODE consists of multiple terms from different networks, their numerical behaviors can be different and treating them separately can be more optimal. Another promising direction is to improve the behavior of the gradient of the conditional function / classifier itself and study whether related properties such as adversarial robustness or gradient smoothness can induce the desirable temporal smoothness in the sampling ODE. However, it is not yet clear what specific characteristics of the behavior play an important role. This challenge may be related to a
condition called “stiffness” in solving ODEs Ernst & Gerhard (2010), which lacks a clear definition but describes the situation where explicit numerical methods, such as RK or PLMS, require a very small step size even in regions with smooth curvature.
As an alternative to the classifier-guided model, Ho & Salimans (2021) propose a classifier-free model that can perform conditional generation without a classifier while remaining a generative model. This model can utilize high-order methods as no classifier is involved, but it requires evaluating the classifier-free network twice per step, which is typically more expensive than evaluating a normal diffusion model and a classifier. It is important to note that our accelerating technique and classifier-free models are not mutually exclusive, and one can still apply a conditional function and our splitting technique to guide a classifier-free model in a direction it has not been trained for.
While our paper only focuses on ODEs derived from the deterministic sampling of DDIM, one can convert SDE-based diffusion models to ODEs (Karras et al., 2022) and still use our technique. More broadly, we can accelerate any diffusion model that can be expressed as a differential equation with a summation of two terms. When these terms behave differently, the benefit from splitting can be substantial. Nevertheless, our findings are based on common, existing models and σ schedule from Dhariwal & Nichol (2021). Further investigation into the impact of the σ schedule or different types and architectures of diffusion models is still required.
6 CONCLUSION
In this paper, we investigate the failure of classical high-order numerical methods to accelerate guided diffusion sampling and propose a solution based on splitting numerical methods. We find that the gradients of conditional functions are less suited to classical high-order numerical methods and design a technique based on Strang splitting and a combination of fourth- and first-order numerical methods. Our method achieves better LPIPS and FID scores than previous work given the same sampling time and is 32-58% faster than a 250-step DDIM baseline. Our technique can successfully accelerate a variety of tasks, such as text-to-image generation, inpainting, colorization, and super-resolution. | 1. What is the focus of the paper in terms of computational methods?
2. What are the strengths of the proposed approach in terms of efficiency and applicability?
3. What are the weaknesses of the paper regarding theoretical analysis and claims?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper proposes an acceleration method of guided diffusion sampling based on splitting numerical methods. Based on the finding that the high-order numerical methods are unsuitable for the conditional function, it develops a method based on Strang splitting and a combination of fourth and first-order numerical methods. Experimental results show that the proposed can accelerate the guided diffusion sampling.
Strengths And Weaknesses
Strength:
The paper proposes a simple method that accelerates the guided diffusion sampling. The proposed method is efficient and can be applied to other problems, e.g., super-resolution, colorization, for acceleration. Experimental analysis shows the effectiveness of the proposed method.
Weaknesses:
I mainly concern the limited theoretical analysis about the proposed method.
The paper claims that the high-order numerical methods are unsuitable for the conditional function. However, it does not explain this finding clearly. It would be better to provide a theoretical analysis of this finding.
The paper claims that the Strange splitting algorithm can be proved to have better accuracy. However, there is no theoretical analysis.
Clarity, Quality, Novelty And Reproducibility
The paper is well written. |
ICLR | Title
Accelerating Guided Diffusion Sampling with Splitting Numerical Methods
Abstract
Guided diffusion is a technique for conditioning the output of a diffusion model at sampling time without retraining the network for each specific task. However, one drawback of diffusion models, whether they are guided or unguided, is their slow sampling process. Recent techniques can accelerate unguided sampling by applying high-order numerical methods to the sampling process when viewed as differential equations. In contrast, we discover that the same techniques do not work for guided sampling, and little has been explored about its acceleration. This paper explores the culprit of this problem and provides a solution based on operator splitting methods, motivated by our key finding that classical high-order numerical methods are unsuitable for the conditional function. Our proposed method can re-utilize the high-order methods for guided sampling and can generate images with the same quality as a 250-step DDIM baseline using 32-58% less sampling time on ImageNet256. We also demonstrate usage on a wide variety of conditional generation tasks, such as text-to-image generation, colorization, inpainting, and super-resolution.
1 INTRODUCTION
A family of generative models known as diffusion models has recently gained a lot of attention with state-of-the-art image generation quality (Dhariwal & Nichol, 2021). Guided diffusion is an approach for controlling the output of a trained diffusion model for conditional generation tasks without retraining its network. By engineering a task-specific conditional function and modifying only the sampling procedure, guided diffusion models can be used in a variety of applications, such as class-conditional image generation (Dhariwal & Nichol, 2021; Kawar et al., 2022), text-to-image generation (Nichol et al., 2022), image-to-image translation (Zhao et al., 2022), inpainting (Chung et al., 2022a), colorization (Song et al., 2020b), image composition (Sasaki et al., 2021), adversarial purification (Wang et al., 2022; Wu et al., 2022) and super-resolution (Choi et al., 2021).
One common drawback of both guided and regular “unguided” diffusion models is their slow sampling processes, usually requiring hundreds of iterations to produce a single image. Recent speedup attempts include improving the noise schedule (Nichol & Dhariwal, 2021; Watson et al., 2021), redefining the diffusion process to be non-Markovian, thereby allowing a deterministic sampling process Song et al. (2020a), network distillation that teaches a student model to simulate multiple sampling steps of a teacher model Salimans & Ho (2022); Luhman & Luhman (2021), among others. Song et al. (2020a) show how each sampling step can be expressed as a first-order numerical step of an ordinary differential equation (ODE). Similarly, Song et al. (2020b) express the sampling of a score-based model as solving a stochastic differential equation (SDE). By regarding the sampling process as an ODE/SDE, many high-order numerical methods have been suggested, such as Liu et al. (2022), Zhang & Chen (2022), and Zhang et al. (2022) with impressive results on unguided diffusion models. However, when applied to guided diffusion models, these methods produce surprisingly poor results (see Figure 1)—given a few number of steps, those high-order numerical methods actually perform worse than low-order methods.
Guided sampling differs from the unguided one by the addition of the gradients of the conditional function to its sampling equation. The observed performance decline thus suggests that classical high-order methods may not be suitable for the conditional function and, consequently, the guided
sampling equation as a whole. Our paper tests this hypothesis and presents an approach to accelerating guided diffusion sampling. The key idea is to use an operator splitting method to split the less well-behaved conditional function term from the standard diffusion term and solve them separately. This approach not only allows re-utilizing the successful high-order methods on the diffusion term but also provides us with options to combine different specialized methods for each term to maximize performance. Note that splitting methods have also been explored by Dockhorn et al. (2022) to solve unguided diffusion SDEs, but our work focuses on accelerating guided diffusion ODEs.
Our design process includes comparing different splitting methods and numerical methods for each split term. When tested on ImageNet, our approach achieves the same level of image quality as a DDIM baseline while reducing the sampling time by approximately 32-58%. Compared with other sampling methods using the same sampling time, our approach provides better image quality as measured by LPIPS, FID, and Precision/Recall. With only minimal modifications to the sampling equation, we also show successful acceleration on various conditional generation tasks.
2 BACKGROUND
This section provides a high-level summary of the theoretical foundation of diffusion models as well as numerical methods that have been used for diffusion models. Here we briefly explain a few that contribute to our method.
2.1 DIFFUSION MODELS
Assuming that x0 is a random variable from the data distribution we wish to reproduce, diffusion models define a sequence of Gaussian noise degradation of x0 as random variables x1, x2, ..., xT , where xt ∼ N ( √ 1− βtxt−1, βtI) and βt ∈ [0, 1] are parameters that control the noise levels. With a property of Gaussian distribution, we can express xt directly as a function of x0 and noise ϵ ∼ N (0, I) by xt = √ ᾱtx0+ √ 1− ᾱtϵ, where ᾱt = ∏t i=1(1−βi). By picking a sufficiently large T (e.g., 1,000) and an appropriate set of βt, we can assume xT is a standard Gaussian distribution. The main idea of diffusion model generation is to sample a Gaussian noise xT and use it to reversely sample xT−1, xT−2, ... until we obtain x0, which belongs to our data distribution.
Ho et al. (2020) propose Denoising Diffusion Probabilistic Model (DDPM) and explain how to employ a neural network ϵθ(xt, t) to predict the noise ϵ that is used to compute xt. To train the network, we sample a training image x0, t, and ϵ to compute xt using the above relationship. Then, we optimize our network ϵθ to minimize the difference between the predicted and real noise, i.e., ∥ϵ− ϵθ(xt, t)∥2.
Song et al. (2020a) introduce Denoising Diffusion Implicit Model (DDIM), which uses the network ϵθ to deterministically obtain xt−1 given xt. The DDIM generative process can be written as
xt−1 = √(ᾱt−1/ᾱt) (xt − √(1− ᾱt) ϵθ(xt, t)) + √(1− ᾱt−1) ϵθ(xt, t). (1)
This formulation could be used to skip many sampling steps and boost sampling speed. To turn this into an ODE, we rewrite Equation 1 as:
xt−∆t/√ᾱt−∆t = xt/√ᾱt + (√((1− ᾱt−∆t)/ᾱt−∆t) − √((1− ᾱt)/ᾱt)) ϵθ(xt, t), (2)
which is now equivalent to a numerical step in solving an ODE. To derive the corresponding ODE, we can re-parameterize σt = √ 1− ᾱt/ √ ᾱt, x̄(t) = xt/ √ ᾱt and ϵ̄σ(x̄) = ϵθ(xt, t), yielding x̄(t−∆t)− x̄(t) = (σt−∆t − σt)ϵ̄σ(x̄). By letting (σt−∆t − σt) → 0, the ODE becomes:
dx̄/dσ = ϵ̄σ(x̄). (3)
Note that this change of variables is equivalent to an exponential integrator technique described in both Zhang & Chen (2022) and Lu et al. (2022). Since xt and x̄(t) have the same value at t = 0, our work can focus on solving x̄(t) rather than xt. Many numerical methods can be applied to the ODE Equation 3 to accelerate diffusion sampling. We next discuss some of them that are relevant.
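For completeness, the change of variables used above amounts to just a few lines of NumPy (a sketch with an illustrative β schedule; the pretrained models use their own schedule):

```python
import numpy as np

betas = np.linspace(1e-4, 0.02, 1000)                        # illustrative beta_t schedule
alpha_bars = np.cumprod(1.0 - betas)                         # abar_t
sigmas = np.sqrt(1.0 - alpha_bars) / np.sqrt(alpha_bars)     # sigma_t = sqrt(1 - abar_t) / sqrt(abar_t)
# The sampler works on x_bar(t) = x_t / sqrt(abar_t) and steps along a decreasing
# subsequence of `sigmas`; at t = 0, abar_t is approximately 1, so x_bar matches x_t.
```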
2.2 NUMERICAL METHODS
Euler’s Method is the most basic numerical method. A forward Euler step is given by x̄n+1 = x̄n + ∆σϵ̄σ(x̄n). When the forward Euler step is applied to the ODE Equation 3, we obtain the DDIM formulation (Song et al., 2020a).
Heun’s Method, also known as the trapezoid rule or improved Euler, is given by: x̄n+1 = x̄n + (∆σ/2)(e1 + e2), where e1 = ϵ̄σ(x̄n) and e2 = ϵ̄σ(x̄n + ∆σ e1). This method splits Euler’s method into two steps to improve accuracy. Many papers have used this method on diffusion models, including Algorithm 1 in Karras et al. (2022) and DPM-Solver-2 in Lu et al. (2022). This method is also the simplest case of Predictor-Corrector methods used in Song et al. (2020b).
Runge-Kutta Methods represent a class of numerical methods that integrate information from multiple hidden steps and provide high accuracy results. Heun’s method also belongs to a family of 2nd-order Runge-Kutta methods (RK2). The most well-known variant is the 4th-order Runge-Kutta method (RK4), which is written as follows:
e1 = ϵ̄σ(x̄n),   e2 = ϵ̄σ(x̄n + (∆σ/2) e1),   e3 = ϵ̄σ(x̄n + (∆σ/2) e2),   e4 = ϵ̄σ(x̄n + ∆σ e3),
x̄n+1 = x̄n + (∆σ/6)(e1 + 2e2 + 2e3 + e4). (4)
This method has been tested on diffusion models in Liu et al. (2022) and Salimans & Ho (2022), but it has not been used as the main proposed method in any paper.
Linear Multi-Step Method, similar to the Runge-Kutta methods, aims to combine information from several steps. However, rather than evaluating new hidden steps, this method uses the previous steps to estimate the new step. The 1st-order formulation is the same as Euler’s method. The 2nd-order formulation is given by
x̄n+1 = x̄n + (∆σ/2)(3e0 − e1), (5)
while the 4th-order formulation is given by
x̄n+1 = x̄n + (∆σ/24)(55e0 − 59e1 + 37e2 − 9e3), (6)
where ek = ϵ̄σ(x̄n−k). These formulations are designed for a constant ∆σ in each step. However, our experiments and previous work that uses this method (e.g., Liu et al. (2022); Zhang & Chen
(2022)) still show good results when this assumption is not strictly satisfied, i.e., when ∆σ is not constant. We will refer to these formulations as PLMS (Pseudo Linear Multi-Step) for the rest of the paper, like in Liu et al. (2022). A similar linear multi-step method for non-constant ∆σ can also be derived using a technique used in Zhang & Chen (2022), which we detail in Appendix B. This non-constant version can improve upon PLMS slightly, but it is not as flexible because we have to re-derive the update rule every time the σ schedule changes.
3 SPLITTING METHODS FOR GUIDED DIFFUSION MODELS
This section introduces our technique that uses splitting numerical methods to accelerate guided diffusion sampling. We first focus our investigation on classifier-guided diffusion models for classconditional generation and later demonstrate how this technique can be used for other conditional generation tasks in Section 4.3. Like any guided diffusion models, classifier-guided models (Dhariwal & Nichol, 2021) share the same training objective with regular unguided models with no modifications to the training procedure; but the sampling process is guided by an additional gradient signal from an external classifier to generate class-specific output images. Specifically, the sampling process is given by
ϵ̂ = ϵ_θ(x_t) − √(1 − ᾱ_t) ∇_x log p_ϕ(c|x_t),
x_{t−1} = √(ᾱ_{t−1}) ((x_t − √(1 − ᾱ_t) ϵ̂) / √(ᾱ_t)) + √(1 − ᾱ_{t−1}) ϵ̂, (7)
where pϕ(c|xt) is a classifier model trained to output the probability of xt belonging to class c. As discussed in the previous section, we can rewrite this formulation as a “guided ODE”:
dx̄/dσ = ϵ̄_σ(x̄) − ∇f_σ(x̄), (8)
where f_σ(x̄) = (σ/√(σ² + 1)) log p_ϕ(c|x_t). We refer to f_σ as the conditional function, which can be substituted with other functions for different tasks. After obtaining the ODE form, any numerical solver mentioned earlier can be readily applied to accelerate the sampling process. However, we observe that classical high-order numerical methods (e.g., PLMS4, RK4) fail to accelerate this task (see Figure 1) and even perform worse than the baseline DDIM.
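To illustrate how the conditional term can be evaluated, a rough sketch using automatic differentiation is given below. The names are assumptions rather than parts of any released codebase: `classifier` stands in for p_ϕ(c|·) and is assumed to accept the sample and noise level directly (any conversion between x̄ and x_t is left to the caller), and `class_idx` holds the target class for each batch element.

```python
import torch

def conditional_grad(x_bar, sigma, class_idx, classifier):
    # Approximates grad_x f_sigma(x_bar), with f_sigma = sigma / sqrt(sigma^2 + 1) * log p_phi(class | x).
    x = x_bar.detach().requires_grad_(True)
    log_probs = torch.log_softmax(classifier(x, sigma), dim=-1)   # assumed classifier signature
    selected = log_probs[torch.arange(x.shape[0]), class_idx].sum()
    scale = sigma / (sigma ** 2 + 1) ** 0.5
    return torch.autograd.grad(scale * selected, x)[0]
```

Because this term is obtained through backpropagation, it is the part of the ODE whose numerical behavior we examine separately in what follows.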
We hypothesize that the two terms in the guided ODE may have different numerical behaviors with the conditional term being less suitable to classical high-order methods. We speculate that the difference could be partly attributed to how they are computed: ∇fσ(x̄) is computed through backpropagation, whereas ϵ̄σ(x̄) is computed directly by evaluating a network. One possible solution to handle terms with different behaviors is the so-called operator splitting method, which divides the problem into two subproblems:
dy/dσ = ϵ̄_σ(y), dz/dσ = −∇f_σ(z). (9)
We call these the diffusion and condition subproblems, respectively. This method allows separating the hard-to-approximate ∇fσ(z) from ϵ̄σ(y) and solving them separately in each time step. Importantly, this helps reintroduce the effective use of high-order methods on the diffusion subproblem as well as provides us with options to combine different specialized methods to maximize performance. We explore two most famous first- and second-order splitting techniques for our task:
3.1 LIE-TROTTER SPLITTING (LTSP)
Our first example is the simple first-order Lie-Trotter splitting method (Trotter, 1959), which expresses the splitting as
dy/dσ = ϵ̄_σ(y), y(σ_n) = x̄_n, σ ∈ [σ_{n+1}, σ_n] (10)
dz/dσ = −∇f_σ(z), z(σ_n) = y(σ_{n+1}), σ ∈ [σ_{n+1}, σ_n] (11)
with the solution of this step being x̄_{n+1} = z(σ_{n+1}). Note that σ_n is a decreasing sequence. Here Equation 10 is the same as Equation 3, which can be solved using any high-order numerical method,
Algorithm 1: Lie-Trotter Splitting (LTSP)
sample x̄_0 ∼ N(0, σ²_max I);
for n ∈ {0, ..., N − 1} do
  y_{n+1} = PLMS(x̄_n, σ_n, σ_{n+1}, ϵ̄_σ);
  x̄_{n+1} = y_{n+1} − (σ_{n+1} − σ_n) ∇f(y_{n+1});
end
Result: x̄_N
Algorithm 2: Strang Splitting (STSP)
sample x̄_0 ∼ N(0, σ²_max I);
for n ∈ {0, ..., N − 1} do
  z_{n+1} = x̄_n − ((σ_{n+1} − σ_n)/2) ∇f(x̄_n);
  y_{n+1} = PLMS(z_{n+1}, σ_n, σ_{n+1}, ϵ̄_σ);
  x̄_{n+1} = y_{n+1} − ((σ_{n+1} − σ_n)/2) ∇f(y_{n+1});
end
Result: x̄_N
e.g., PLMS. For Equation 11, we can use a forward Euler step: z_{n+1} = z_n − ∆σ ∇f_σ(z_n). (12)
This is equivalent to a single iteration of standard gradient descent with a learning rate ∆σ. This splitting scheme is summarized by Algorithm 1. We investigate different numerical methods for each subproblem in Section 4.1.
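Putting the pieces together, a schematic version of Algorithm 1 might look as follows. `diffusion_step` is any single-step solver for the diffusion subproblem (for example the `euler_step` sketch above, or `plms_step` wrapped to carry its history), `grad_f` returns ∇f_σ, and passing σ_{n+1} to the condition update is an assumption since Algorithm 1 leaves the noise level of ∇f implicit.

```python
def ltsp_sample(x, sigmas, eps_bar, grad_f, diffusion_step):
    # sigmas is a decreasing schedule [sigma_0, ..., sigma_N]; x is drawn from N(0, sigma_max^2 I).
    for sigma, sigma_next in zip(sigmas[:-1], sigmas[1:]):
        d_sigma = sigma_next - sigma                     # negative, since the schedule decreases
        y = diffusion_step(x, sigma, d_sigma, eps_bar)   # diffusion subproblem (Eq. 10)
        x = y - d_sigma * grad_f(y, sigma_next)          # condition subproblem, one Euler step (Eq. 11-12)
    return x
```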
3.2 STRANG SPLITTING (STSP)
Strang splitting (or Strang-Marchuk) (Strang, 1968) is one of the most famous and widely used operator splitting methods. This second-order splitting works as follows:
dz/dσ = −∇f_σ(z), z(σ_n) = x̄_n, σ ∈ [(σ_n + σ_{n+1})/2, σ_n] (13)
dy/dσ = ϵ̄_σ(y), y(σ_n) = z((σ_n + σ_{n+1})/2), σ ∈ [σ_{n+1}, σ_n] (14)
dz̃/dσ = −∇f_σ(z̃), z̃((σ_n + σ_{n+1})/2) = y(σ_{n+1}), σ ∈ [σ_{n+1}, (σ_n + σ_{n+1})/2] (15)
Instead of solving each subproblem for a full step length, we solve the condition subproblem for half a step before and after solving the diffusion subproblem for a full step. In theory, we can swap the order of operations without affecting convergence, but it is practically cheaper to compute the condition term twice rather than the diffusion term twice because f_σ is typically a smaller network compared to ϵ̄_σ. The Strang splitting algorithm is shown in Algorithm 2. This method can be shown to have better accuracy than the Lie-Trotter method, as proven in Appendix N, although it requires evaluating the condition term twice per step in exchange for improved image quality. We assess this trade-off in the experiment section.
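For comparison with Algorithm 1, one step of Algorithm 2 might be sketched as follows; the same hypothetical `diffusion_step` and `grad_f` callables are assumed, and the noise levels at which ∇f is evaluated are again an assumption.

```python
def stsp_step(x, sigma, sigma_next, eps_bar, grad_f, diffusion_step):
    # Strang splitting: half condition step, full diffusion step, half condition step.
    d_sigma = sigma_next - sigma
    z = x - 0.5 * d_sigma * grad_f(x, sigma)            # Eq. 13
    y = diffusion_step(z, sigma, d_sigma, eps_bar)      # Eq. 14
    return y - 0.5 * d_sigma * grad_f(y, sigma_next)    # Eq. 15
```

The extra cost per step relative to LTSP is exactly one additional evaluation of the (usually smaller) conditional network.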
4 EXPERIMENTS
Extending on our observation that classical high-order methods failed on guided sampling, we conducted a series of experiments to investigate this problem and evaluate our solution. Section 4.1 uses a simple splitting method (first-order LTSP) to study the effects that high-order methods have on each subproblem, leading to our key finding that only the conditional subproblem is less suited to classical high-order methods. This section also determines the best combination of numerical methods for the two subproblems under LTSP splitting. Section 4.2 explores improvements from using a higher-order splitting method and compares our best scheme to previous work. Finally, Section 4.3 applies our approach to a variety of conditional generation tasks with minimal changes.
For our comparison, we use pre-trained state-of-the-art diffusion models and classifiers from Dhariwal & Nichol (2021), which were trained on the ImageNet dataset (Russakovsky et al., 2015) with 1,000 total sampling steps. We treat full-path samples from a classifier-guided DDIM at 1,000 steps as reference solutions. Then, the performance of each configuration is measured by the image similarity between its generated samples using fewer steps and the reference DDIM samples, both starting from the same initial noise map. Given the same sampling time, we expect configurations with better performance to better match the full DDIM. We measure image similarity using Learned Perceptual Image Patch Similarity (LPIPS) (Zhang et al., 2018) (lower is better) and measure sampling time on a single NVIDIA RTX 3090 and a 24-core AMD Threadripper 3960x.
4.1 FINDING A SUITABLE NUMERICAL METHOD FOR EACH SUBPROBLEM
To study the effects of different numerical methods on each subproblem of the guided ODE (Equation 8), we use the simplest Lie-Trotter splitting, which itself requires no additional network evaluations. This controlled experiment has two setups: a) we fix the numerical method for the condition subproblem (Equation 11) to first-order PLMS1 (Euler’s method) and vary the numerical method for the diffusion subproblem (Equation 10), and conversely b) we fix the method for the diffusion subproblem and vary the method for the condition subproblem. The numerical method options are Euler’s method (PLMS1), Heun’s method (RK2), 4th order Runge-Kutta’s method (RK4), and 2nd/4th order pseudo linear multi-step (PLMS2/PLMS4). We report LPIPS vs. sampling time of various numerical combinations on a diffusion model trained on ImageNet 256×256 in Figure 2. The red dotted lines indicate a reference DDIM score obtained from 250 sampling steps, a common choice that produces good samples that are perceptually close to those from a full 1,000-step DDIM (Dhariwal & Nichol, 2021; Nichol & Dhariwal, 2021).
Given a long sampling time, non-split PLMS4 performs better than the DDIM baseline. However, when the sampling time is reduced, the image quality of PLMS4 rapidly decreases and becomes much worse than that of DDIM, especially under 15 seconds in Figure 2. When we split the ODE and solve both subproblems using first-order PLMS1 (Euler), the performance is close to that of DDIM, which is also considered first-order but without any splitting. This helps verify that merely splitting the ODE does not significantly alter the sampling speed.
In the setup a), when RK2 and RK4 are used for the diffusion subproblem, they also perform worse than the DDIM baseline. This slowdown is caused by the additional evaluations of the network by these methods, which outweigh the improvement gained in each longer diffusion step. Note that if we instead measure the image quality with respect to the number of diffusion steps, RK2 and RK4 can outperform other methods (Appendix E); however, this is not our metric of interest. On the other hand, PLMS2 and PLMS4, which require no additional network evaluations, are about 8-10% faster than DDIM and can achieve the same LPIPS score as the DDIM that uses 250 sampling steps in 20-26 fewer steps. Importantly, when the sampling time is reduced, their performance does not degrade rapidly like the non-split PLMS4 and remains at the same level as DDIM.
In the setup b), where we vary the numerical method for the condition subproblem, the result reveals an interesting contrast: none of the methods beats DDIM, and some combinations, such as [PLMS1, RK4], even cause the sampling to diverge. These findings suggest that the gradients of conditional functions are less “compatible” with classical high-order methods, especially when used with a small number of steps. This phenomenon may be related to the “stiffness” condition of ODEs, which we discuss further in Section 5. For the remainder of our experiments, we will use the combination [PLMS4, PLMS1] for the diffusion and condition subproblems, respectively.
Sampling time within | 5 sec. | 10 sec. | 15 sec. | 20 sec.
DDIM  | 0.116 | 0.062 | 0.043 | 0.033
PLMS4 | 0.278 | 0.141 | 0.057 | 0.026
RK2   | 0.193 | 0.059 | 0.036 | 0.028
RK4   | 0.216 | 0.054 | 0.039 | 0.028
LTSP4 | 0.121 | 0.058 | 0.037 | 0.028
STSP4 | 0.079 | 0.035 | 0.022 | 0.013
Table 1: Average LPIPS when the sampling time is limited to be under 5 - 20 seconds.
4.2 IMPROVED SPLITTING METHOD
This experiment investigates improvements from using a high-order splitting method, specifically the Strang splitting method, with the numerical combination [PLMS4, PLMS1], and compares our methods to previous work. Note that besides DDIM (Dhariwal & Nichol, 2021), no previous work is specifically designed for accelerating guided sampling; thus the baselines in this comparison are only adaptations of the core numerical methods used in those papers, and to our knowledge, no prior guided-diffusion work uses splitting numerical methods. Non-split numerical method baselines are PLMS4, which is used in Liu et al. (2022), RK2, which is used in Karras et al. (2022); Lu et al. (2022), and higher-order RK4. We report the LPIPS scores of these methods with respect to the sampling time in Figure 3 and Table 1.
Without any splitting, PLMS4, RK2 and RK4 show significantly poorer image quality when used with short sampling times < 10 seconds. The best performer is our Strang splitting (STSP4), which can reach the same quality as 250-step DDIM while using 32-58% less sampling time. STSP4 also obtains the best (lowest) LPIPS scores for sampling times of 5, 10, 15, and 20 seconds. More statistical details and comparison with other split combinations are in Appendix F, G.
In addition, we perform a quantitative evaluation for class-conditional generation by sampling 50,000 images based on uniformly chosen class conditions with a small number of sampling steps and evaluating the Fréchet Inception Distance (FID) (Heusel et al., 2017) (lower is better) and the improved precision/recall (Kynkäänniemi et al., 2019) (higher is better) against the ImageNet test set at 128, 256, and 512 resolutions. Following (Dhariwal & Nichol, 2021), we use a 25-step DDIM as a baseline, which already produces visually reasonable results. As PLMS and LTSP require the same number of network evaluations as the DDIM, they are also used with 25 steps. Since STSP has a slower per-step evaluation time, it is only allowed 20 steps, which is the highest number of steps such that its sampling time is within that of the baseline 25-step DDIM. Here LTSP2 and STSP2 are Lie-Trotter and Strang splitting methods with the combination [PLMS2, PLMS1]. In Table 2, we report the results for three different ImageNet resolutions and the average sampling time per image in seconds.
Our STSP4 performs best on all measurements except Recall on ImageNet512. On ImageNet512, PLMS4 has the highest Recall score but a poor FID of 16, indicating that the generated images have good distribution coverage but may poorly represent the real distribution. On ImageNet256, STSP4 can yield 4.49 FID in 20 steps, compared to 4.59 FID in 250 steps originally reported in the paper (Dhariwal & Nichol, 2021). Our STSP4 is about 9.4× faster when tested on the same machine.
4.3 SPLITTING METHODS IN OTHER TASKS
Besides class-conditional generation, our approach can also accelerate any conditional image generation as long as the gradient of the conditional function can be defined. We test our approach on four tasks: text-to-image generation, image inpainting, colorization, and super-resolution.
Text-to-image generation: We use a pre-trained text-to-image Disco-Diffusion (Letts et al., 2021) based on Crowson (2021), which substitutes the classifier output with the dot product of the image and caption encodings from CLIP (Radford et al., 2021). For more related experiments on StableDiffusion (Rombach et al., 2022), please refer to Appendix L, M.
Image inpainting & colorization: For these two tasks, we follow the techniques proposed in Song et al. (2020b) and Chung et al. (2022a), which improve the conditional functions of both tasks with “manifold constraints.” We use the same diffusion model (Dhariwal & Nichol, 2021) trained on ImageNet as in our earlier Experiments 4.1 and 4.2.
Super-resolution: We follow the formulation from ILVR (Choi et al., 2021) combined with the manifold constraints (Chung et al., 2022a), and also use our earlier ImageNet diffusion model.
Figure 4 compares our techniques, LTSP4 and STSP4, with the DDIM baseline and PLMS4 on text-to-image generation. Each result is produced using a fixed sampling time of about 26 seconds. STSP4, which uses 30 diffusion steps compared to 45 in the other methods, produces more realistic results with color contrast that is more similar to the full DDIM references’. Figure 5 shows that our STSP4 produces more convincing results than the DDIM baseline with fewer artifacts on the other three tasks while using the same 5 second sampling time. Implementation details, quantitative evaluations, and more results are in Appendix J, K.
5 DISCUSSION
Our findings show that when the sampling ODE consists of multiple terms from different networks, their numerical behaviors can be different and treating them separately can be more optimal. Another promising direction is to improve the behavior of the gradient of the conditional function / classifier itself and study whether related properties such as adversarial robustness or gradient smoothness can induce the desirable temporal smoothness in the sampling ODE. However, it is not yet clear what specific characteristics of the behavior play an important role. This challenge may be related to a
condition called “stiffness” in solving ODEs (Ernst & Gerhard, 2010), which lacks a clear definition but describes the situation where explicit numerical methods, such as RK or PLMS, require a very small step size even in regions with smooth curvature.
As an alternative to the classifier-guided model, Ho & Salimans (2021) propose a classifier-free model that can perform conditional generation without a classifier while remaining a generative model. This model can utilize high-order methods as no classifier is involved, but it requires evaluating the classifier-free network twice per step, which is typically more expensive than evaluating a normal diffusion model and a classifier. It is important to note that our accelerating technique and classifier-free models are not mutually exclusive, and one can still apply a conditional function and our splitting technique to guide a classifier-free model in a direction it has not been trained for.
While our paper only focuses on ODEs derived from the deterministic sampling of DDIM, one can convert SDE-based diffusion models to ODEs (Karras et al., 2022) and still use our technique. More broadly, we can accelerate any diffusion model that can be expressed as a differential equation with a summation of two terms. When these terms behave differently, the benefit from splitting can be substantial. Nevertheless, our findings are based on common, existing models and σ schedule from Dhariwal & Nichol (2021). Further investigation into the impact of the σ schedule or different types and architectures of diffusion models is still required.
6 CONCLUSION
In this paper, we investigate the failure of classical high-order numerical methods to accelerate guided diffusion sampling and propose a solution based on splitting numerical methods. We found that the gradients of conditional functions are less suited to classical high-order numerical methods, and we design a technique based on Strang splitting and a combination of fourth- and first-order numerical methods. Our method achieves better LPIPS and FID scores than previous work given the same sampling time and is 32-58% faster than a 250-step DDIM baseline. Our technique can successfully accelerate a variety of tasks, such as text-to-image generation, inpainting, colorization, and super-resolution. | 1. What is the focus of the paper regarding classifier-guided diffusion models?
2. What are the strengths and weaknesses of the proposed approach in accelerating the sampling process?
3. Do you have any concerns or questions about the comparison between classifier-guided generation and classifier-free generation?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
The paper considers the problem of accelerating the sampling process of classifier-guided diffusion models. Classifier-guided diffusion consists of two components: the conditional function (gradient of a classifier) and the standard diffusion term. The authors observe that high-order numerical solvers can lead to degraded performance when sampling with the conditional function. To alleviate the issue while speeding up sampling, they opt to use a higher-order method for the standard diffusion term and the forward Euler method for the conditional function. They combine the two interleaved procedures through operator splitting methods. Experimentally, the proposed sampling method outperforms DDIM in speed and sample quality.
Strengths And Weaknesses
Strong points
The paper applies the operator splitting method to the classifier-guided generation. They study various combinations of solvers of the individual parts, and the best combination outperforms the baseline DDIM.
The proposed method consistently improved over the baselines across image resolutions and tasks.
Weak points
The currently dominant method is classifier-free generation, such as Stable Diffusion. The problem of classifier-guided diffusion models seems to have lower practical value. As the authors mention in Section 5, classifier-free generation involves a more expensive function evaluation per step. However, it allows higher-order solvers for all components. Could the authors compare the FID/sampling time of the two methods, in order to demonstrate the utility of classifier-guided generation? (I understand it could be hard to complete the experiments in the short rebuttal time, but it would really boost the utility of classifier-guided generation if it has certain advantages.)
The "stiffness" reasoning of the condition function is vague and insufficient. The phenomenon is the major point in the paper, and it would be very helpful to dive deeper into this problem: why does the condition function perform better when paired with a simpler ODE solver in practice? It's also a bit counter-intuitive when the function is stiff for higher-order solvers but not for simpler ones. Could the authors give some illustrative examples? The review has one plausible hypothesis: the gradient of classifier (condition function) has large variance or changes rapidly across different
σ
. Hence, it's harmful to combine the evaluations along the ODE trajectory.
The evaluation protocol in Section 4.1 is a bit problematic. The generated samples are compared against DDIM w/ 1000 steps. It assumes that the samples generated by DDIM w/ 1000 steps are true samples. As we see in hindsight, there are many samplers better than DDIM, and they could generate better samples while incurring high LPIPS. Also, in this setup, the goal is to purely decrease the ODE simulation error (there is nothing to do with network estimation error). It's counter-intuitive that higher-order methods do not work out under such a metric.
Writing suggestions
Below are some points that could potentially make the paper more readable and consistent:
Section 3.1 & 3.2: the σ's are increasing over time. But normally, they should decrease over time during sampling.
Section 4, page 5, "with 1000 total steps" - do you mean "with 1000 total sampling steps"?
Section 4.1: It would be helpful to change PLMS1 to Euler's method.
Clarity, Quality, Novelty And Reproducibility
The paper is generally well-written. The demonstrated phenomenon of the paper remains mysterious. It would be helpful to elaborate on the behavior of the condition function. Intuitively, since we want to minimize the discretization error of an ODE, a higher order method could lead to better performance. |
ICLR | Title
Symmetric Machine Theory of Mind
Abstract
Theory of mind (ToM), the ability to understand others’ thoughts and desires, is a cornerstone of human intelligence. Because of this, a number of previous works have attempted to measure the ability of machines to develop a theory of mind, with one agent attempting to understand another’s internal “mental state”. However, ToM agents are often tested as passive observers or in tasks with specific predefined roles, such as speaker-listener scenarios. In this work, we propose to model machine theory of mind in a more flexible and symmetric scenario: a multi-agent environment, SymmToM, where all agents can speak, listen, see other agents, and move freely through a grid world. An effective strategy to solve SymmToM requires developing theory of mind to maximize each agent’s rewards. We show that multi-agent deep reinforcement learning models that model the mental states of other agents achieve significant performance improvements over agents with no such ToM model. At the same time, our best agents fail to achieve performance comparable to agents with access to the gold-standard mental state of other agents, demonstrating that the modeling of theory of mind in multi-agent scenarios is very much an open challenge.
1 INTRODUCTION
Human communication is shaped by the desire to efficiently cooperate and achieve communicative goals (Tomasello, 2009). Children learn from a young age that the others they interact with have independent mental states, and therefore communicating is necessary to obtain information from or shape the intentions of those they interact with. Remembering and reasoning over others’ mental states ensures efficient communication by avoiding having to repeat information, and in cases where cooperation is involved contributes to achieving a common goal with minimal effort.
Because of this, there is growing interest in developing agents that can exhibit this kind of behavior, referred to as Theory of Mind (ToM) by developmental psychologists (Premack & Woodruff, 1978).1 Previous work on agents imbued with some capability of ToM has focused mainly on two types of tasks. The former are tasks where the agent is a passive observer of a scene that has to predict the future by reasoning over others’ mental states. These tasks may involve natural language (Nematzadeh et al., 2018) or be purely spatial (Gandhi et al., 2021; Rabinowitz et al., 2018; Baker et al., 2011). The latter are tasks where the ToM agent has a specific role, such as “the speaker” in speaker-listener scenarios (Zhu et al., 2021).
In contrast, human cooperation and communication is very often multi-party, and rarely assumes that people have pre-fixed roles. Moreover, human interlocutors are seldom passive observers of a scene but instead proactively interact with their environment. Since previous domains limited us to research questions where most parties involved did not have an active role, we developed a more flexible environment where we can now study what happens when all participants must act as both speaker and listener. In this paper, we present SymmToM, a fully symmetric multi-agent environment where all agents can see, hear, speak, and move, and are active players of a simple information-gathering game. To solve SymmToM, agents need to exhibit different levels of ToM, as well as efficiently communicate through a simple channel with a fixed set of symbols.
1In the present work we focus solely on reasoning over mental states. Other aspects of ToM include understanding preferences, goals, and desires of others. Multi-agent scenarios for inferring agent’s goals have been studied (Ullman et al., 2009), and passive-observer benchmarks (Gandhi et al., 2021; Shu et al., 2021; Netanyahu* et al., 2021) have been proposed for evaluating understanding of agent’s goals and preferences.
SymmToM is a partially observable setting for all agents: even when agents have full vision, hearing may be limited. This also differentiates SymmToM from prior work, as modeling may require probabilistic theory of mind. In other words, agents need to not only remember and infer other agents’ knowledge based on what they saw, but also estimate the probability that certain events happened. This estimation may be performed by assuming other agents’ optimal behavior and processing the partial information available. Despite its simplicity, SymmToM fulfills the properties required for symmetric ToM to arise, which will be discussed in the following section.
We find that SymmToM cannot be completely solved either by using well-known multi-agent deep reinforcement learning (RL) models or by tailoring those models to our task. We show that even maintaining the simple rules of the environment, modifying its parameters results in much more difficult challenges, even for models where we artificially introduce perfect information. We discuss examples where different levels of theory of mind are required to solve the task, and possible metrics.
2 THEORY-OF-MIND AGENTS
A belief Theory-of-Mind agent can be defined as a modification of the standard multi-agent RL paradigm, where the agents’ policies are conditioned on their beliefs about others. Formally, we define a reinforcement learning problem M as a tuple of a state space S, action space A, state transition probability function T ∈ S × A → R, and reward function R ∈ S × A → R, i.e. M := 〈S, A, T, R〉. In this setting, an agent learns a (possibly probabilistic) policy π : S → A that maps from states to actions, with the goal of maximizing reward.
In a multi-agent RL setting each agent can potentially have its own state space, action space, transition probabilities, and reward function, so we can define an instance of M_i = 〈S_i, A_i, T_i, R_i〉 for each agent i. For convenience, we can also define a joint state space S = ⋃_i S_i that describes the entire world in which all agents are interacting. Importantly, in this setting each agent will have its own view of the entirety of the world, described by a conditional observation function ω_i : S → Ω_i that maps from the state of the entire environment to only the information observable by agent i.
As elaborated above, ToM is the ability to know (and act upon) the knowledge that an agent has. Agents with no ToM will follow a policy that depends only on their current (potentially partial or noisy) observation of their environment: π_i(a_{i,t} | ω_i(s_t)). Agents with zeroth order ToM can reason over their own knowledge. These agents will be stateful, π_i(· | ω_i(s_t), h^{(i)}_t), where h^{(i)}_t is i’s hidden state. Hidden states are always accessible to their owner, i.e. i has access to h^{(i)}_t.
Agents with capabilities of reasoning over other agents’ mental states will need to estimate h^{(j)}_t for j ≠ i. We will denote the estimation that i does of j’s mental state at time t as ĥ^{(i,j)}_t:
π_i(· | ω_i(s_t), h^{(i)}_t, ĥ^{(i,1)}_t, . . . , ĥ^{(i,i−1)}_t, ĥ^{(i,i+1)}_t, . . . , ĥ^{(i,n)}_t)
How do we estimate ĥ^{(i,j)}_t? As a function of i’s (the predicting agent) previous hidden state at t−1, i’s observation at t−1, and i’s prediction of the hidden states of every agent in the previous turn:
ĥ^{(i,j)}_t = f(h^{(i)}_{t−1}, ω_i(s_{t−1}), ĥ^{(i,1)}_{t−1}, . . . , ĥ^{(i,i−1)}_{t−1}, ĥ^{(i,i+1)}_{t−1}, . . . , ĥ^{(i,n)}_{t−1})
i’s prediction of other agents’ observations at t−1 is also crucial, but not explicitly mentioned since it can be computed using ω_i(s_{t−1}). For the initial turn, ĥ^{(i,j)}_0 may be initialized differently depending on the problem: if initial knowledge is public, ĥ^{(i,j)}_0 is trivial; if not, ĥ^{(i,j)}_0 may be estimated.
3 SYMMETRIC THEORY-OF-MIND
We define symmetric theory of mind environments as settings where theory of mind is required to perform a task successfully, and all agents have the same abilities. There are at least four defining characteristics of symmetric ToM to arise:
Symmetric action space. In symmetric ToM all agents are required to have the same action space (in contrast to, for example, ToM tasks in speaker-listener settings). Concretely, A_i = A_j ≠ ∅ for all i, j.
Imperfect information. In perfect information scenarios all knowledge is public, making it impossible to have agents with different mental states. In ToM tasks in general, there could be a subset of agents with perfect information: one example would be a passive observer that needs to predict future behavior. In symmetric ToM, since all agents have the same abilities and roles, all agents must have imperfect information. More precisely, ω_i must not be the identity for any agent i.
Observation of others. Agents must have at least partial information about another agent to estimate its mental state. In contrast to passive-observer settings, in symmetric ToM every agent must be able to partially observe all others. More precisely, ω_i must observe at least partial information about s^{(j)}_t (the subset of s_t that refers to agent j), although we do not require s^{(j)}_t ≠ ∅ in every single turn. Moreover, if communication is allowed, it is desirable to partially observe or infer interactions between two or more agents to develop second order ToM (i.e. predicting what an agent thinks about what another agent is thinking) or higher.
Information-seeking behavior. It should be relevant for successfully performing the task to gather as much information as possible, and this information-gathering should involve some level of reasoning over other agents’ knowledge. This is true for first-order ToM tasks in general, and could be formalized as π* ≠ π_i for any zeroth-order ToM policy π_i(· | ω_i(s_t), h^{(i)}_t). Furthermore, it would be desirable to design a task with perpetual information-seeking behavior, since it would ensure that all agents have an incentive to play efficiently even in long episodes. If one wants to design a task with perpetual information-seeking and finite knowledge, information must be forgotten eventually. A forgetting mechanism could be implemented as an explicit loss of knowledge under specific conditions, or by making remembrances less reliable or noisy. Moreover, this introduces the concept of information staleness. Since information is not cumulative and the environment is only partially observable, agents will need to estimate whether what they knew to be true still holds in the present.
4 THE SYMMTOM ENVIRONMENT
SymmToM is an environment where a agents are placed in a w×w grid world, and attempt to maximize their reward by gathering all the information available in the environment. There are c available information pieces, that each agent may or may not know initially. Information pieces known at the start of an episode are referred to as first-hand information. Each turn, agents may move through the grid to one of its four neighboring cells, and may speak exactly one of their currently known information pieces. More precisely, the action space of agent j is defined as follows:
Aj = {left, right, up, down, no movement} × {1, . . . , c} (1)
When an agent utters an information piece, it is heard by every agent in its hearing range (an h×h grid centered in each agent, with h < 2w−1). The agents who heard the utterance will be able
to share this newly-learned information with others in following turns. We refer to this as secondhand information, since it is learned –as opposed to first-hand information, given at the start of each episode. The state space is comprised of the position of the agents and their current knowledge:
S = {{(p_i, k_i) for i ∈ {1, . . . , a}} where p_i ∈ {1, . . . , w} × {1, . . . , w}, and k_i ∈ {0, 1}^c}
Each agent aims to maximize their individual reward R_i via information seeking and sharing. Rewards are earned by hearing a new piece of information, giving someone else a new piece of information, or correctly using recharge bases. Recharge bases are special cells where agents can reset their knowledge in exchange for a large reward (e.g. (n − 1)c times the reward for listening to or sharing new information). Each agent has its own stationary recharge base during an episode. To trigger a base, an agent must step into its designated base having acquired all the available pieces of information, causing the agent to lose all the second-hand information it learned. Recharge bases guarantee that there is always reward to seek in this environment. Concretely, if s = {(p_i, k_i) for i ∈ {1, . . . , a}} and a_i = (a^dir_i, a^comm_i), we can define the reward as the addition of the reward for hearing new information, giving new information, and using the recharge base:
R_i(s, a_i) = ∑_{j≠i} 1{||p_i − p_j||_∞ ≤ h and k_{i, a^comm_j} = 0} + ∑_{j≠i} 1{||p_i − p_j||_∞ ≤ h and k_{j, a^comm_i} = 0} + (n − 1) · c · 1{p_i = base_i and k_i = (1, 1, . . . , 1)}
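As an illustration of this reward, a minimal sketch is shown below. All names are assumptions made for illustration: `pos` holds agent positions, `knows[j, m]` indicates whether agent j currently knows piece m, `comm[j]` is the piece agent j utters this turn, and `bases` holds the recharge-base cells (here the number of agents plays the role of n in the formula).

```python
import numpy as np

def reward(i, pos, knows, comm, bases, h, c):
    # Reward for agent i in one turn: hearing new pieces, giving new pieces, and recharge-base use.
    a = len(pos)
    r = 0
    for j in range(a):
        if j == i:
            continue
        in_range = np.max(np.abs(pos[i] - pos[j])) <= h
        if in_range and not knows[i, comm[j]]:
            r += 1                                  # i hears a new piece from j
        if in_range and not knows[j, comm[i]]:
            r += 1                                  # i gives j a new piece
    if np.array_equal(pos[i], bases[i]) and knows[i].all():
        r += (a - 1) * c                            # successful recharge-base use
    return r
```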
A non-ToM agent can have only limited success in this environment. Without reasoning about its own knowledge (i.e. without zeroth order ToM), it does not know when to use a recharge base. Moreover, without knowledge about other agents’ knowledge (i.e. without first order ToM) it is not possible to know which agents possess the information pieces it is lacking. Even if it accidentally hears information, a non-first-order ToM agent cannot efficiently decide what to utter in response to maximize its reward. Higher order ToM is also often needed in SymmToM, as we will discuss further in Section 7.3.
Even though we only discussed a collaborative task for SymmToM, it can easily be extended for competitive tasks. Moreover, all our models are also designed to work under competitive settings. SymmToM satisfies the desiderata we laid out in the previous section, as we will detail below:
Symmetric action space. As defined in Eq. 1, A_i = A_j for all i, j. Only a subset may be available at a time since agents cannot step outside the grid, speak a piece they have not heard, or move if they would collide with another agent in the same cell, but they all share the same action space.
Imperfect information. Messages sent by agents outside of the hearing range will not be heard. For example, in Fig. 2a green sends a message but it is not heard by anyone, since it is outside of red’s and blue’s range. Hearing ranges are guaranteed not to cover the whole grid, since h < 2w−1.
Observation of others. Agents have perfect vision of the grid, even if they cannot hear what was said outside of their hearing range. Hence, an agent may see that two agents were in range of each
other, and thus probably interacted, but not hear what was communicated. An example of this can be seen in Fig. 2a, where green observes blue and red interacting without hearing what was uttered.
The uncertainty in the observation also differentiates SymmToM from prior work: to solve the task perfectly, an agent needs to assess the probability that other agents outside its hearing range shared a specific piece of information to avoid repetition. This estimation may be performed using the knowledge of what each agent knows (first order ToM), the perceived knowledge of each of the agents in the interaction (second order ToM), as well as higher order ToM.
Information-seeking behavior Rewards are explicitly given for hearing and sharing novel information, guaranteeing information-seeking is crucial in SymmToM. Recharge bases ensure that the optimal solution is not for all agents to accumulate in the same spot and quickly share all the information available; and that the information tracking required is more complex than accumulating past events. Conceptually, with recharge bases we introduce an explicit and observable forgetting mechanism. As discussed in Section 3, this allows for perpetual information seeking and requires information staleness estimation. An example of successful recharge base use is shown in Fig. 2b.
5 BASELINE LEARNING ALGORITHMS AND OTHER BOUNDS
To learn a policy for acting in the multi-agent SymmToM environment, it is a good strategy to use a multi-agent reinforcement learning algorithm. We use MADDPG (Lowe et al., 2017), a wellknown multi-agent actor-critic framework with centralized planning and decentralized execution, to counter the non-stationarity nature of multi-agent settings. In MADDPG, each actor policy receives its observation space as input, and outputs the probability of taking each action.
Notably, actors in MADDPG have no mechanism for remembering past turns. This is a critical issue in SymmToM, as agents cannot remember which pieces they currently know, which ones they shared and to whom, and other witnessed interactions. To mitigate this, it is necessary to add a recurrence mechanism to carry over information from past turns. One option would be to modify the agent policy using a recurrent network like an LSTM, as RMADDPG (Wang et al., 2020) does.
Perfect Information, Heuristic and Lower Bound Models Performance is difficult to interpret without simpler baselines. As a lower bound model we use the original MADDPG, that since it does not have recurrence embedded, should perform worse or equal to any of the modifications described above. We also include an oracle model (MADDPG-Oracle), that does not require theory of mind since it receives the current knowledgeK for all agents in its observation space. The performance of MADDPG-Oracle may not always be achieved, as there could be unobserved communication with multiple situations happening with equal probability. Moreover, as the number of agents and size of the grid increases, current reinforcement learning models may not be able to find an optimal spatial exploration policy; they may also not be capable of inferring the optimal piece of information to communicate in larger settings. In these cases, MADDPG-Oracle may not perform optimally, so we also include a baseline with heuristic agents to compare performance.
Heuristic agents will always move to the center of the board and communicate round-robin all the information pieces they know until they have all the available knowledge. Then, they will move efficiently to their recharge base and come back to the center of the grid, where the process restarts. We must mention that this heuristic is not necessarily the perfect policy, but it will serve as a baseline to note settings where current MARL models fail even with perfect information. Qualitatively, smaller settings have shown to approximately follow a policy like the heuristic just described.
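A rough sketch of this heuristic policy is given below; the coordinate convention for movement, the axis tie-breaking, and the naming (`knows`, `bases`) are all assumptions made for illustration, not the exact implementation.

```python
import numpy as np

def heuristic_action(i, pos, knows, bases, w, t):
    # Walk toward the centre (or toward the base once all pieces are known) and speak round-robin.
    target = bases[i] if knows[i].all() else np.array([w // 2, w // 2])
    delta = target - pos[i]
    if delta[0] != 0:
        move = "down" if delta[0] > 0 else "up"      # assumes the row index grows downward
    elif delta[1] != 0:
        move = "right" if delta[1] > 0 else "left"
    else:
        move = "no movement"
    known_pieces = np.flatnonzero(knows[i])
    speak = int(known_pieces[t % len(known_pieces)]) if len(known_pieces) else None
    return move, speak
```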
6 DIRECT MODELING OF SYMMETRIC THEORY OF MIND
In contrast to RMADDPG (Wang et al., 2020), we specifically design algorithms for our environment to maximize performance. Intuitively, our model computes a matrix, K ∈ {0, 1}c×a, that reflects the information pieces known by each agent from the perspective of the agent being modeled: Kij reflects if the agent being modeled believes that agent j knows i. K is updated every turn and used as input of the following turn of the agent, obtaining the desired recurrent behavior. K is also concatenated to the usual observation space, to be processed by a two-layer ReLU MLP and
obtain the probability distributions for speech and movement, as in the original MADDPG. There are several ways to approximate K. It is important to note that each agent can only partially observe communication, and therefore it is impossible to perfectly compute K deterministically.
The current knowledge is comprised of first-hand information (the initial knowledge of every agent, F, publicly available) and second-hand information. Second-hand information may have been heard this turn (S, whose computation will be discussed below) or in previous turns (captured in the K received from the previous turn, noted K^{(t−1)}). Additionally, knowledge may be forgotten when an agent steps on a base having all the information pieces. To express this, we precompute a vector B ∈ {0, 1}^a that reflects whether each agent is currently on its base; and a vector E ∈ {0, 1}^a that determines if an agent is entitled to use their recharge base: E_j = 1_{∑_i K_{ij} = c} for all j ∈ {1, . . . , a}.
We are then able to compute K as follows:
K^{(t)}_{ij} = (F_{ij} = 1 or S_{ij} = 1 or K^{(t−1)}_{ij} = 1) and not (B_j and E_j) (2)
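A sketch of this update in boolean array form is shown below. One detail is worth flagging: Equation 2 as written clears the whole column for an agent that triggers its base, whereas the environment description says only second-hand knowledge is lost, so the sketch resets the column back to the first-hand information F; that choice, and all variable names, are assumptions.

```python
import numpy as np

def update_K(K_prev, F, S, on_base):
    # K_prev, F, S: (c, a) boolean arrays; on_base: (a,) boolean vector (B in the text).
    K = F | S | K_prev                               # accumulate first- and second-hand knowledge
    entitled = K.sum(axis=0) == K.shape[0]           # E_j: agent j currently knows all c pieces
    reset = on_base & entitled                       # agents triggering their recharge base
    K[:, reset] = F[:, reset]                        # forget second-hand information only
    return K
```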
F , K(t−1), and B are given as input, but we have not yet discussed the computation of the secondhand information S. S often cannot be deterministically computed, since our setting is partially observable. We will identify three behaviors and then compute S as the sum of the three:
S = S[0] + S[1] + S[2]
For simplicity, we will assume from now on that we are modeling agent k. S[0] will symbolize the implications of the information spoken by agent k: if agent k speaks a piece of information, they thus know that every agent in its hearing range must have heard it (first order ToM). S[1] will symbolize the implications of information heard by k: this includes updating k’s known information (zeroth order ToM) and the information of every agent that is also in hearing range of the speaker heard by k. S[2] will symbolize the estimation of information pieces communicated between agents that are out of k’s hearing range. Since we assume perfect vision, k will be able to see if two agents are in range of each other, but not hear what they communicate (if they do at all).
S[0] and S[1] can be deterministically computed. To do so, it is key to note that every actor knows the set of actions A ∈ {0, 1}c×a performed by each agent last turn, given that those actions were performed in their hearing range. Moreover, each agent knows which agents are in its range, as they all have perfect vision. We precompute H ∈ {0, 1}a×a to denote if two given agents are in range.
Then, S^{[0]}_{ij} = 1 if and only if information piece i was said by k, and agents k and j are in hearing range of each other. More formally,
S^{[0]}_{ij} = A_{ik} · H_{kj}
S^{[1]}_{ij} = 1 if and only if agent k (the actor we are modeling) heard some agent ℓ speaking information piece i, and agent j is also in range of agent ℓ. Note that agent k does not need to be in hearing range of agent j. More precisely,
S^{[1]}_{ij} = A_{iℓ} · H_{kℓ} · H_{ℓj}, for any agent ℓ
S[2] –the interactions between agents not in hearing range of the agent we are modeling– can be estimated in different ways. A conservative approach would be to not estimate interactions we do not witness (S[2] = 0, which we will call MADDPG-ConservativeEncounter (MADDPG-CE)); and another approach would be to assume that every interaction we do not witness results in sharing a piece of information that will maximize the rewards in that immediate turn. We will call this last approach MADDPG-GreedyEncounter (MADDPG-GE). MADDPG-GE assumes agents play optimally, but does not necessarily know all the known information and that could lead to a wrong prediction. This is particularly true during training, as agents may not behave optimally. The computation of S[2] for MADDPG-GE is as follows.
First, we predict the information piece U_ℓ that agent ℓ uttered. MADDPG-GE predicts U_ℓ will be the piece that the least number of agents in range know, as it will maximize immediate reward:
U_ℓ = argmin_i ∑_j (K_{ij} and H_{jℓ}) ∈ {1, . . . , c}
With this prediction, agent j will know information i if at least one agent in its range said it:
S^{[2]}_{ij} = 1 if there exists ℓ ≠ k such that U_ℓ = i and H_{jℓ} = 1 and j ≠ k, else 0
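A compact sketch of these three updates for the agent being modeled (index `k`) is given below; `A` is the (c × a) one-hot matrix of utterances observed last turn, `H` the (a × a) boolean in-range matrix (assumed symmetric), and `K` the current (c × a) boolean knowledge estimate. The greedy guess does not check whether the unheard agent actually holds the chosen piece, mirroring the definition of U_ℓ above; whether that check should be added is left open.

```python
import numpy as np

def second_hand_updates(k, A, H, K):
    c, a = A.shape
    S0 = np.outer(A[:, k], H[k])                  # S[0]: pieces spoken by k, heard by agents in its range
    S1 = np.zeros((c, a), dtype=bool)
    S2 = np.zeros((c, a), dtype=bool)
    for l in range(a):
        if l == k:
            continue
        if H[k, l]:                               # k heard agent l speak ...
            S1 |= np.outer(A[:, l], H[l])         # ... so everyone in l's range also heard the same piece
        else:                                     # unobserved interaction: greedy guess (MADDPG-GE)
            counts = (K & H[l][None, :]).sum(axis=1)
            u = int(np.argmin(counts))            # piece known by the fewest agents in l's range
            S2[u] |= H[l]                         # assume everyone in l's range now knows u
    return S0, S1, S2
```

Setting the greedy branch to do nothing instead recovers the conservative variant (MADDPG-CE, S[2] = 0).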
MADDPG-EstimatedEncounter (MADDPG-EE) MADDPG-CE and MADDPG-GE are two paths to information sharing estimation, but in none of them do we estimate the probability of an agent knowing a specific piece of information. In MADDPG-EstimatedEncounter (MADDPG-EE), known information of other agents is not binary, i.e. Kij ∈ [0, 1]. This added flexibility can avoid making predictions of shared information based upon unreliable information.
MADDPG-EE estimates the probability that an agent j uttered each piece of information (U_j ∈ R^c) by providing the current information of all agents in its range to an MLP:
U_j = softmax(f(K_{1j}, . . . , K_{cj}, {K_{1ℓ}, . . . , K_{cℓ} for all ℓ where H_{jℓ} = 1})), with f an MLP
Then, the probability of having heard a specific piece of information will be the complement of not having heard it, which in turn means that none of the agents in range said it. More formally,
S^{[2]}_{ij} = 1 − ∏_{ℓ: H_{jℓ}=1} (1 − U_{ℓ,i})
Since MADDPG-EE requires functions to be differentiable, we use a differentiable approximation of Eq. 2. A pseudocode of MADDPG-EE’s implementation can be found in Section A.4. MADDPG-EE solely focuses on first order ToM, and we leave to future work modeling with second order ToM. The structure of the model would be similar but with an order of magnitude more parameters.
7 EXPERIMENTS
7.1 EXPERIMENTAL SETTINGS
In this section, we compare the different algorithms explained in the previous section. The observation space will be constituted of a processed version of the last turn in the episode, to keep the input size controlled. More precisely, the observation space is composed of: the position of all agents, all recharge bases, the direction each agent is currently moving towards and what they communicated in the last turn, the presence of a wall in each of the immediate surroundings, and every agent’s first-hand information. First-hand information is publicly available in our experiments to moderate the difficulty of the setup, but this constraint could also be removed. This simple setting is still partially observable, since the agents cannot hear interactions outside of their hearing range.
We use the reward as our main evaluation metric. This metric indirectly evaluates ToM capabilities, since information-seeking is at the core of SymmToM. We train through 60000 episodes, and with 7 random seeds to account for high variances in the rewards obtained. Our policies are parametrized by a two-layer ReLU MLP with 64 units per layer, as in the original MADDPG (Lowe et al., 2017). MADDPG-EE’s function f is also a two-layer ReLU MLP with 64 units per layer.
We test two board sizes (w = 6 and w = 12), two numbers of agents (a = 3 and a = 4), and three quantities of information pieces (c = a, c = 2a, c = 3a). The length of each episode is set to 5w. More detail about design decisions can be found in Section A.5.
7.2 MAIN RESULTS
As we can observe in Table 1, there is a significant difference in performance between MADDPG-Oracle and MADDPG (MADDPG-Oracle is 123% better on average): this confirms that developing ToM and recurrence is vital to perform successfully in SymmToM. MADDPG-Oracle is often not an upper bound: when c > a, the heuristic performs better (101% on average). This shows that even with perfect information, it can be difficult to learn the optimal policy using MADDPG. Moreover, models with a recurrence mechanism perform significantly better than MADDPG (61% better on average), also showing that remembering past information gives a notable advantage. As expected, having recurrent models tailored to our problem resulted in better performance than a general LSTM recurrence (RMADDPG). The performance of the best of the tailored models (MADDPG-CE, MADDPG-GE, MADDPG-EE) was 42% better on average than plain RMADDPG. The LSTM-based RMADDPG was able to surpass the best of the tailored models only for a = 3, w = 12, c = 3a.
Increasing c generally decreases global rewards for learned agents (on average, rewards for c = 2a are 75.54% of those for c = a, and rewards for c = 3a are 75.66% of those for c = a).
This suggests that probabilistic decisions are harder to learn, or impossible to successfully navigate when several events are equally likely. MADDPG-EE did not show improvements over the other learned agents, and in some cases decreased its performance more heavily than other learned agents (e.g. w = 6, c = 3a). MADDPG-EE uses an MLP in its definition of S[2], which gives more flexibility but also implies a more complex function to learn. We leave to future work to explore other probabilistic agents, but the significant difference in performance between all of the learned models and the highest performing ones (MADDPG-Oracle and the heuristic) shows there is ample space for improvement in this task, and hence proves SymmToM to be a simple yet unsolved benchmark.
Increasing a results in a 10% reduction of performance on average for learned models. Nonetheless, the heuristic improved its rewards by an average of 46%, given the larger opportunities for rewards when including an additional listener. Overall, this implies that increasing a also makes the setup significantly more difficult. Finally, increasing w did not have a conclusive result: for a = 4 it consistently decreased performance in ∼17%, but for a = 3 we saw an improvement of 16% and 61% for c = a and c = 2a respectively.
In sum, modifying c and a provides an easy way of making a setting more difficult without introducing additional rules.
7.3 DISCUSSION
In the past section we analyzed results using the metric of episode rewards. Although the rewards in SymmToM are designed to correlate with information seeking and knowledge state of the agent themselves and others, they do not show explicitly if agents are exhibiting theory of mind. To do so more directly, we develop two categories of possible analyses: scenarios specifically designed to test theory of mind, and post-hoc analyses of episodes.
A classic example of a scenario specifically designed to test ToM behavior is the Sally-Anne task (Wimmer & Perner, 1983). This false belief task, originally designed for children, aims to test if a passive observer can answer questions about the beliefs of another person, in situations where that belief may not match reality. If we were to use it for machine ToM, we could repeat the experiment and ask an agent to predict the position of an object varying the underlying conditions. This test is feasible because there is only one agent with freedom of action, which ensures that desired conditions are met every time. Testing becomes unfeasible when giving multiple agents freedom of action, as constraints planned in test design may be broken by a collective drift from the strategy thought by the designer. Testing becomes easier if we allow for controlling all agents but one, as shown in Fig. 3. Other tests besides the ones shown may be designed. In particular, in Fig. 3d we show an example of probabilistic ToM where two communicative events are equally likely, but one could modify this scenario to have different probabilities and test the expected value of the turns until red successfully shares an information piece. One could also design retroactive deduction tests: for example, in Fig. 3d if red communicates and receives no reward, it can deduce that green had
received that information from blue. If there had been another agent (let’s say, a yellow agent) in range of blue when it spoke to green, the red agent could also update its knowledge about yellow. Results for the tests proposed in Figure 3 are detailed in Appendix A.1.
Post-hoc analysis also has its challenges in multi-agent settings, even in the most direct cases. Thanks to our reward shaping, using recharge bases is always the optimal move when an agent has all the information available: an agent will have a reward of (n − 1)c for using the base, whereas it can only gain up to n − 1 + c − 1 per turn if it decides not to use it. Even in this case, small delays in using the base may occur, for example if the agent can gather additional rewards on its path to the base. More generally, having multiple agents makes specific behaviors attributable to any of the several events happening at once, or a combination of them.
Even though it may be difficult to establish causality when observing single episodes, we developed metrics that comparatively show which models are using specific features of the environment better than others. Examples of metrics are unsuccessful recharge base usage count; number of times an agent shares an information piece everyone in its range already knows when having better alternatives; number of times an agent moves away from every agent when not having all the information pieces available; among others. See Appendix A.2 for detailed description and results. Reward can also be understood as a post-hoc metric with a more indirect intepretation.
Post-hoc analyses of single episodes can also be blurred by emergent communication. Because agents were trained together, they may develop special meaning assignment to particular physical movements or messages. Even though qualitatively this does not seem to be the case for the models presented in the paper, tests should also account for future developments. This also implies that one should not overinterpret small differences in the metrics described in the paragraph above.
8 CONCLUSIONS AND FUTURE WORK
We defined a framework to analyze machine theory of mind in a multi-agent symmetric setting, a more realistic setup than the tasks currently used in the community. Based on the four properties needed for symmetric theory of mind to arise, we provided a simplified setup on which to test the problem, and we showed we can easily increase difficulty by growing the number of agents or communication pieces. Our main goal in this work was not to solve symmetric theory of mind, but rather to give a starting point to explore more complex models in this area. We showed that even with this minimal set of rules, SymmToM proves algorithmically difficult for current multi-agent deep reinforcement learning models, even when tailoring them to our specific task. We leave to
future work to develop models that handle second-order theory of mind and beyond, and models that periodically reevaluate past turns to make new deductions with information gained a posteriori (i.e., models that pass retroactive deduction tests). Another interesting direction would be to replace the information pieces with constrained natural language: communication sharing in our task is binary, whereas in language there is flexibility to communicate different subsets of a knowledge base using a single sentence. We will make our codebase public upon publication; it also includes additional observation space restrictions to increase difficulty.
ETHICS AND REPRODUCIBILITY STATEMENT
Theory of mind research at its core deals with understanding the mental states of other individuals. In the present work we focused on collaborative machine theory of mind, which entails interactions only between artificial agents, and only in scenarios where every party involved has the same incentive structure. This design decision is intentional. Other approaches to theory of mind research could include scenarios with human-agent interaction, which could potentially lead to agents learning to model human players’ mental states. This is not concerning per se, but in scenarios where players do not have the same incentive structure, it could lead to agents learning to deceive other players (potentially human players). The state of the art in machine theory of mind is still far away from these capabilities, but we believe that experiment design choices should always take this matter into account.
Regarding reproducibility, we will make all code public upon acceptance, including the environment and the models’ code. The exact set of parameters used for training will also be shared. Multi-agent reinforcement learning models do have high variance, so models should be run several times to see similar confidence intervals as the ones shown in Fig. 5 and Fig. 6.
A APPENDIX
A.1 AD-HOC THEORY OF MIND TESTS
We test on the four examples shown in Figure 3, adapting the examples to fit one of the grid sizes we already experimented on. For the tests described in Figure 3a and Figure 3b, we test two different grid sizes: w = 6 and w = 12. For the tests described in Figure 3c and Figure 3d we only test w = 12 and w = 6 respectively. Image depictions of the exact test configurations can be seen in Figure 4.
We measure three metrics: average success rate, average failure rate, and ratio of average turns to succeed vs. optimum (RATSO). Note that Average Success Rate and Average Failure Rate do not necessarily sum to 1, since these two metrics only include trials where the agent reached either of the two proposed outcomes. If, for example, the agent never moved from the starting point, the trial would not be counted positively towards Avg. Success Rate or Avg. Failure Rate. In addition, the ratio of average turns to succeed vs. optimum (RATSO) is the ratio between the average turns it took to succeed in successful trials, and the optimum number of turns to succeed in a specific trial.
For the tests in Figure 4a, 4b, 4d, and 4e, the trial ends when the red agent reaches the hearing range of one of the two possible target agents. The test depicted in Figure 4f is a pass/fail test: if red moves suboptimally at any point before meeting blue, the trial is declared failed. This makes it a particularly difficult test to pass at random. Because of the nature of this second order ToM test, we only report the average success rate. Finally, for the probabilistic ToM test we want to measure how fast red can communicate all the information it has to green. The optimal number of turns is 1.5 (as described in Figure 3), and because of the nature of this test we will only report RATSO.
All results can be found in Table 2. As expected, a larger average success rate correlates with higher reward models (MADDPG-CE and MADDPG-GE are the best models), suggesting that the
reward is a valuable overall metric. The low average success rates across all tests show there is significant room to improve in this benchmark. Success rate drops sharply when increasing the grid size, suggesting larger grids impose more difficult training settings. In this analysis, we used models that were trained specifically for each parameter combination.
Results for Oracle were omitted since some tests assumed no knowledge about communication (e.g., even though the agents in Figure 4b do not communicate during the test, the test was designed to check whether the red agent assumed them to be communicating).
As we emphasized in the main text, many more tests can be proposed. The code base we will release allows for easily adding new tests to the suite.
A.2 POST-HOC ANALYSES
We developed several post hoc metrics to analyze specific aspects of our models.
• Unsuccessful recharge base usage rate: Average times per episode an agent steps on its recharge base without having all the information available (i.e. wrong usage of the recharge base). Note that an agent may step on its base just because it is on the shortest path to another cell. Therefore, a perfect theory of mind agent will likely not have zero on this score; but generally, lower is better. See results in Table 3.
• Wrong communication piece selection count: Average times per episode an agent attempted to say an information piece it does not currently possess. In these cases, no communication happens. Lower is better. See results in Table 4.
• Useless communication piece selection count: Average times per episode an agent communicated an information piece that everyone in its hearing range already knew, while having a piece of information that at least one agent in its range did not know. Lower is better. See results in Table 5.
• Useless movement: Average times per episode an agent moves away from every agent that does not have the exact same information it has, given that the agent does not currently possess all the information available. This means that the agent is moving away from any possible valuable interaction. Lower is better. See results in Table 6.
All metrics are normalized by number of agents (i.e., they show the score for a single agent). This allows for better comparison between a = 3 and a = 4 settings.
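As one concrete example of the metrics above, the “useless communication piece selection” check can be computed per turn as sketched below; the variable names and array layout are our own assumptions.

```python
import numpy as np

def useless_communication(speaker, piece, knowledge, positions, h):
    """Return True if `speaker` uttered a piece every in-range agent already knew,
    while possessing some other piece that at least one in-range agent lacks.
    knowledge: (a, c) binary array; positions: (a, 2) int array."""
    a, c = knowledge.shape
    in_range = [j for j in range(a) if j != speaker
                and np.max(np.abs(positions[j] - positions[speaker])) <= h]
    if not in_range or not knowledge[speaker, piece]:
        return False
    redundant = all(knowledge[j, piece] for j in in_range)
    had_better = any(knowledge[speaker, p] and not knowledge[j, p]
                     for p in range(c) for j in in_range)
    return bool(redundant and had_better)
```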
RMADDPG had the worst scores for unsuccessful recharge base use rate and useless communication piece selection count. RMADDPG scored 43% more than Oracle for unsuccessful base usage on average, and 63% more than Oracle on average for usage of a useless communication piece. The best tailored models (MADDPG-CE and MADDPG-GE) performed similarly to Oracle on average for these two metrics. In contrast, MADDPG-CE and MADDPG-GE performed significantly worse than Oracle for the wrong communication piece selection count (48% and 56% more than Oracle on average). This suggests that all models may be making wrong decisions, but RMADDPG is biased towards communicating redundant information whereas MADDPG-CE and MADDPG-GE tend towards not communicating at all (the true effect of trying to communicate something they are not allowed to). Further analysis is needed to truly understand if these apparently wrong behaviors were done in turns where the agent had all the information available to make a better move, or if this is their default when they believe they have nothing of value to communicate. A priori, RMADDPG’s bias seems more principled, but it still showed worse performance overall.
No learned model performed particularly better in the useless movement metric (average differences in performance were less than 10%), suggesting that they perform pointless movements in similar frequencies. It is important not to overinterpret small differences in these metrics. For example, a useless movement may be a signal of emergent communication. Furthermore, an agent may communicate something suboptimal for its immediate reward but this move may not affect its expected reward for the trial.
A.3 TRAINING CURVES FOR THREE RANDOM SEEDS COMBINED
A.4 PSEUDOCODE OF MADDPG-EE
Algorithm 1: Actor implementation of MADDPG-EE, approximating K to make it differentiable.
Input: observation, $A \in \{0,1\}^{c \times a}$, agent_idx $\in \{0, \ldots, a-1\}$, $F \in \{0,1\}^{c \times a}$, $K \in \{0,1\}^{c \times a}$, $B \in \{0,1\}^{a}$, $H \in \{0,1\}^{a \times a}$
1. Remove agents from their own hearing range, so they do not hear themselves from the previous turn (which would be problematic when using recharge bases): $H = H - 1_{a \times a}$
2. Compute $S^{[0]}$, all the heard information spoken by agent_idx: $S^{[0]} = \mathrm{copy}_{a\ \text{times}}(A[:, \text{agent\_idx}]) \odot \mathrm{copy}_{c\ \text{times}}(H[\text{agent\_idx}, :])$
3. Compute $S^{[1]}$, all information heard by agent_idx, spoken by all agents: $S^{[1]} = (A \odot \mathrm{copy}_{c\ \text{times}}(H[\text{agent\_idx}, :])) \cdot H$
4. Compute $S^{[2]}$, an estimation of information pieces communicated between agents that were out of agent_idx’s hearing range: $U_j = \mathrm{softmax}(f_1(K_{1j}, \ldots, K_{cj}, \{K_{1\ell}, \ldots, K_{c\ell} \text{ for all } \ell \text{ where } H_{j\ell}\}))$, with $f_1$ an MLP; $S^{[2]}_{ij} = 1 - \prod_{\ell:\, H_{j\ell}=1} (1 - U_{\ell i})$ for all $i \in \{0, \ldots, c-1\}$ and $j \in \{0, \ldots, a-1\}$
5. $S = S^{[0]} + S^{[1]} + S^{[2]}$
6. $E_i = \mathbb{1}_{\mathrm{sum}(K[:, i]) = c}$ for all $i \in \{0, \ldots, a-1\}$, so that $E \in \{0,1\}^{a}$
7. $K = \mathrm{step}\big(F \cdot 100 + K + S - 2 \cdot \mathrm{copy}_{c\ \text{times}}(B \odot E)\big)$
8. return $\mathrm{softmax}(f_2([\text{observation}\ K]))$, $K$, where $f_2$ is an MLP
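For readability, the final knowledge update and output steps of Algorithm 1 can be transcribed as a (non-differentiable) NumPy sketch; `f2` is a placeholder callable for the policy MLP, S is assumed to have been computed as above, and the function name is ours.

```python
import numpy as np

def knowledge_and_action(observation, F, K, S, B, f2):
    """Final steps of Algorithm 1: hard-threshold the knowledge update and feed
    [observation; K] to the policy MLP f2. Shapes: F, K, S are (c, a); B is (a,)."""
    c, a = F.shape
    E = (K.sum(axis=0) == c).astype(float)           # agents that currently know every piece
    reset = np.tile((B * E)[np.newaxis, :], (c, 1))  # broadcast the per-agent reset over pieces
    # First-hand pieces get a large weight so a reset never erases them.
    K_new = (F * 100 + K + S - 2 * reset > 0).astype(float)
    logits = f2(np.concatenate([observation, K_new.ravel()]))
    action_probs = np.exp(logits - logits.max())
    action_probs /= action_probs.sum()
    return action_probs, K_new
```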
A.5 EXPERIMENT DESIGN DECISIONS AND CONSIDERATIONS ABOUT RESULT PRESENTATION
All our experiments are with h = 1: only the immediate neighbors of an agent will hear what they communicate.
We tested with a = 3 and a = 4 and not larger numbers of agents, as the training time increases quadratically with a; also, the intrinsic difficulty of a larger setup (even with perfect information) would possibly degrade performance to the point of making it impossible to compare models.
Running experiments with the same number of turns for every setting would imply that agents can move less in combinations with larger values of w, hence the need to make it proportional to the size of the grid. Since the duration of the experiment is directly proportional to the length of the episodes, we settled on a small multiplier. A length of 5w allows agents to move to each edge of the grid and back to the center. A similar decision is required when choosing c: having a constant number of information pieces when increasing the number of agents would make the problem easier, as each agent would have fewer options for first-hand information pieces.
We decided to evaluate by running additional episodes over the best checkpoints of each model because there was high variance for some runs, and because of drops in performance after achieving the highest rewards. Those results are the basis of the discussion and can be seen in Table 1. Still, we share the training curves so that the reader can observe these behaviors in Fig. 5 and Fig. 6.
We used the same hyperparameters as the ones used in MADDPG, except with a reduced learning rate and tau (lr = 0.001 and τ = 0.005). We used the same parameters for all our experiments. | 1. What is the focus and contribution of the paper on multi-agent environments and theory of mind capabilities?
2. What are the strengths of the proposed environment and architecture for achieving improvement over agents without theory of mind capabilities?
3. Do you have any concerns regarding the limitation of the proposed approach in capturing the full scope of theory of mind as originally defined in psychology?
4. How can the paper improve its reporting of results by addressing issues related to varied outcomes in multi-agent settings?
5. What are your thoughts on the discussion of potential tests for ToM in section 7.3, and how could the authors apply these tests to obtain better statistical data on the performance of trained agents?
6. Why did the authors choose MADDPG and RMADDPG for their approach? | Summary Of The Paper
Review | Summary Of The Paper
The paper proposes a multi-agent environment to develop and test theory of mind capabilities for agents. The environment is one where all agents speak, listen, and move around. Agents share information, and the goal for each agent is to gather all the information and then release it. The paper suggests some architectures that achieve some improvement over agents without theory of mind capabilities. Ultimately though, the task is unsolved and posed as a research question.
Review
The paper introduces the concept of theory of mind well and does a good job at motivating the paper's direction.
The proposed environment is well motivated with respect to the target of theory of mind. Although, I do think it's important to make sure to qualify the term theory of mind a bit more, as the original concept in psychology does include additional knowledge such as beliefs, desires, intentions and emotions, most of which are not captured in this paper at all. I'd also really suggest staying away from misleading terms like "awareness". E.g., I have trouble with the definition of the non-ToM model. Theory of mind is about other agents, not oneself. It's also not true that the agent is not "aware"; it's just that it doesn't have the knowledge in the first place.
Multi-agent settings typically suffer strongly from varied outcomes. It would be important to have more random seeds and also report measures of dispersion of run outcomes. Otherwise, I am not sure these results would hold up under rigorous statistical testing.
The discussion of potential tests for ToM in 7.3 is interesting. It's not clear though why you don't just apply some of these, in order, to get good statistical data about the performance of trained agents besides accumulated reward.
Why did you choose MADDPG and RMADDPG? |
ICLR | Title
Symmetric Machine Theory of Mind
Abstract
Theory of mind (ToM), the ability to understand others’ thoughts and desires, is a cornerstone of human intelligence. Because of this, a number of previous works have attempted to measure the ability of machines to develop a theory of mind, with one agent attempting to understand another’s internal “mental state”. However, ToM agents are often tested as passive observers or in tasks with specific predefined roles, such as speaker-listener scenarios. In this work, we propose to model machine theory of mind in a more flexible and symmetric scenario: a multi-agent environment SymmToM where all agents can speak, listen, see other agents, and move freely through a grid world. An effective strategy to solve SymmToM requires developing theory of mind to maximize each agent’s rewards. We show that multi-agent deep reinforcement learning models that model the mental states of other agents achieve significant performance improvements over agents with no such ToM model. At the same time, our best agents fail to achieve performance comparable to agents with access to the gold-standard mental state of other agents, demonstrating that the modeling of theory of mind in multi-agent scenarios is very much an open challenge.
1 INTRODUCTION
Human communication is shaped by the desire to efficiently cooperate and achieve communicative goals (Tomasello, 2009). Children learn from a young age that the others they interact with have independent mental states, and therefore communicating is necessary to obtain information from or shape the intentions of those they interact with. Remembering and reasoning over others’ mental states ensures efficient communication by avoiding having to repeat information, and in cases where cooperation is involved contributes to achieving a common goal with minimal effort.
Because of this, there is growing interest in developing agents that can exhibit this kind of behavior, referred to as Theory of Mind (ToM) by developmental psychologists (Premack & Woodruff, 1978).1 Previous work on agents imbued with some capability of ToM has focused mainly on two types of tasks. The former are tasks where the agent is a passive observer of a scene and has to predict the future by reasoning over others’ mental states. These tasks may involve natural language (Nematzadeh et al., 2018) or be purely spatial (Gandhi et al., 2021; Rabinowitz et al., 2018; Baker et al., 2011). The latter are tasks where the ToM agent has a specific role, such as “the speaker” in speaker-listener scenarios (Zhu et al., 2021).
In contrast, human cooperation and communication is very often multi-party, and rarely assumes that people have pre-fixed roles. Moreover, human interlocutors are seldom passive observers of a scene but instead proactively interact with their environment. Since previous domains limited us to research questions where most parties involved did not have an active role, we developed a more flexible environment where we can now study what happens when all participants must act as both speaker and listener. In this paper, we present SymmToM, a fully symmetric multi-agent environment where all agents can see, hear, speak, and move, and are active players of a simple information-gathering game. To solve SymmToM, agents need to exhibit different levels of ToM, as well as efficiently communicate through a simple channel with a fixed set of symbols.
1In the present work we focus solely on reasoning over mental states. Other aspects of ToM include understanding preferences, goals, and desires of others. Multi-agent scenarios for inferring agent’s goals have been studied (Ullman et al., 2009), and passive-observer benchmarks (Gandhi et al., 2021; Shu et al., 2021; Netanyahu* et al., 2021) have been proposed for evaluating understanding of agent’s goals and preferences.
SymmToM is a partially observable setting for all agents: even when agents have full vision, hearing may be limited. This also differentiates SymmToM from prior work, as modeling may require probabilistic theory of mind. In other words, agents need to not only remember and infer other agents’ knowledge based on what they saw, but also estimate the probability that certain events happened. This estimation may be performed by assuming other agents’ optimal behavior and processing the partial information available. Despite its simplicity, SymmToM fulfills the properties required for symmetric ToM to arise, which will be discussed in the following section.
We find that SymmToM cannot be completely solved either by using well-known multi-agent deep reinforcement learning (RL) models or by tailoring those models to our task. We show that even maintaining the simple rules of the environment, modifying its parameters results in much more difficult challenges, even for models where we artificially introduce perfect information. We discuss examples where different levels of theory of mind are required to solve the task, and possible metrics.
2 THEORY-OF-MIND AGENTS
A belief Theory-of-Mind agent can be defined as a modification of the standard multi-agent RL paradigm, where the agents’ policies are conditioned on their beliefs about others. Formally, we define a reinforcement learning problem M as a tuple of a state space S, action space A, state transition probability function T ∈ S × A → R, and reward function R ∈ S × A → R, i.e. M := 〈S, A, T, R〉. In this setting, an agent learns a (possibly probabilistic) policy π : S → A that maps from states to actions, with the goal of maximizing reward.
In a multi-agent RL setting each agent can potentially have its own state space, action space, transition probabilities, and reward function, so we can define an instance of Mi = 〈Si, Ai, Ti, Ri〉 for each agent i. For convenience, we can also define a joint state space S = ⋃i Si that describes the entire world in which all agents are interacting. Importantly, in this setting each agent will have its own view of the entirety of the world, described by a conditional observation function ωi : S → Ωi that maps from the state of the entire environment to only the information observable by agent i.
As elaborated above, ToM is the ability to know (and act upon) the knowledge that an agent has. Agents with no ToM will follow a policy that depends only on their current (potentially partial or noisy) observation of their environment: $\pi_i(a_{i,t} \mid \omega_i(s_t))$. Agents with zeroth order ToM can reason over their own knowledge. These agents will be stateful, $\pi_i(\cdot \mid \omega_i(s_t), h^{(i)}_t)$, where $h^{(i)}_t$ is $i$'s hidden state. Hidden states are always accessible to their owner, i.e. $i$ has access to $h^{(i)}_t$.
Agents with capabilities of reasoning over other agents’ mental states will need to estimate $h^{(j)}_t$ for $j \neq i$. We will denote the estimation that $i$ makes of $j$'s mental state at time $t$ as $\hat h^{(i,j)}_t$:
$\pi_i(\cdot \mid \omega_i(s_t), h^{(i)}_t, \hat h^{(i,1)}_t, \ldots, \hat h^{(i,i-1)}_t, \hat h^{(i,i+1)}_t, \ldots, \hat h^{(i,n)}_t)$
How do we estimate $\hat h^{(i,j)}_t$? As a function of $i$'s (the predicting agent's) previous hidden state at $t-1$, $i$'s observation at $t-1$, and $i$'s prediction of the hidden states of every agent in the previous turn:
$\hat h^{(i,j)}_t = f(h^{(i)}_{t-1}, \omega_i(s_{t-1}), \hat h^{(i,1)}_{t-1}, \ldots, \hat h^{(i,i-1)}_{t-1}, \hat h^{(i,i+1)}_{t-1}, \ldots, \hat h^{(i,n)}_{t-1})$
$i$'s prediction of other agents' observations at $t-1$ is also crucial, but not explicitly mentioned since it can be computed using $\omega_i(s_{t-1})$. For the initial turn, $\hat h^{(i,j)}_0$ may be initialized differently depending on the problem: if initial knowledge is public, $\hat h^{(i,j)}_0$ is trivial; if not, $\hat h^{(i,j)}_0$ may be estimated.
3 SYMMETRIC THEORY-OF-MIND
We define symmetric theory of mind environments as settings where theory of mind is required to perform a task successfully, and all agents have the same abilities. There are at least four defining characteristics required for symmetric ToM to arise:
Symmetric action space. In symmetric ToM all agents are required to have the same action space (in contrast to, for example, ToM tasks in speaker-listener settings). Concretely, $\mathcal{A}_i = \mathcal{A}_j \neq \emptyset$ for all $i, j$.
Imperfect information. In perfect information scenarios all knowledge is public, making it impossible to have agents with different mental states. In ToM tasks in general, there could be a subset of agents with perfect information: one example would be a passive observer that needs to predict future behavior. In symmetric ToM, since all agents have the same abilities and roles, all agents must have imperfect information. More precisely, $\omega_i$ must not be the identity for any agent $i$.
Observation of others. Agents must have at least partial information of another agent to estimate its mental state. In contrast to passive-observer settings, in symmetric ToM every agent must be able to partially observe all others. More precisely, $\omega_i$ must observe at least partial information about $s^{(j)}_t$ (the subset of $s_t$ that refers to agent $j$), although we do not require $s^{(j)}_t \neq \emptyset$ in every single turn. Moreover, if communication is allowed, it is desirable to partially observe or infer interactions between two or more agents to develop second order ToM (i.e. predicting what an agent thinks about what another agent is thinking) or higher.
Information-seeking behavior. It should be relevant for successfully performing the task to gather as much information as possible, and this information-gathering should involve some level of reasoning over other agents' knowledge. This is true for first-order ToM tasks in general, and could be formalized as $\pi^* \neq \pi_i$ for any zeroth-order ToM policy $\pi_i(\cdot \mid \omega_i(s_t), h^{(i)}_t)$. Furthermore, it would be desirable to design a task with perpetual information-seeking behavior, since it would ensure that all agents have an incentive to play efficiently even in long episodes. If one wants to design a task with perpetual information-seeking and finite knowledge, information must be forgotten eventually. A forgetting mechanism could be implemented as an explicit loss of knowledge under specific conditions, or by making remembrances less reliable or noisy. Moreover, this introduces the concept of information staleness. Since information is not cumulative and the environment is only partially observable, agents will need to estimate whether what they knew to be true still holds in the present.
4 THE SYMMTOM ENVIRONMENT
SymmToM is an environment where a agents are placed in a w×w grid world, and attempt to maximize their reward by gathering all the information available in the environment. There are c available information pieces, that each agent may or may not know initially. Information pieces known at the start of an episode are referred to as first-hand information. Each turn, agents may move through the grid to one of its four neighboring cells, and may speak exactly one of their currently known information pieces. More precisely, the action space of agent j is defined as follows:
$\mathcal{A}_j = \{\text{left, right, up, down, no movement}\} \times \{1, \ldots, c\}$   (1)
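As a concrete illustration, the joint movement-communication action space can be enumerated as below; this is a sketch assuming a flat discrete encoding, not necessarily how the released environment represents actions.

```python
from itertools import product

MOVES = ["left", "right", "up", "down", "no_movement"]

def build_action_space(c):
    """Enumerate A_j = MOVES x {1, ..., c} as a list of (move, piece) pairs."""
    return list(product(MOVES, range(1, c + 1)))

actions = build_action_space(c=3)
assert len(actions) == 5 * 3   # |A_j| = 5c
print(actions[:4])             # [('left', 1), ('left', 2), ('left', 3), ('right', 1)]
```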
When an agent utters an information piece, it is heard by every agent in its hearing range (an h×h grid centered in each agent, with h < 2w−1). The agents who heard the utterance will be able
to share this newly-learned information with others in following turns. We refer to this as second-hand information, since it is learned, as opposed to first-hand information, which is given at the start of each episode. The state space is comprised of the position of the agents and their current knowledge:
$\mathcal{S} = \{(p_i, k_i),\ \text{for } i \in \{1, \ldots, a\}\}$ where $p_i \in \{1, \ldots, w\} \times \{1, \ldots, w\}$ and $k_i \in \{0, 1\}^c$
Each agent aims to maximize its individual reward $R_i$ via information seeking and sharing. Rewards are earned by hearing a new piece of information, giving someone else a new piece of information, or correctly using recharge bases. Recharge bases are special cells where agents can reset their knowledge in exchange for a large reward (e.g. $(n-1) \cdot c$ times the reward for listening to or sharing new information). Each agent has its own stationary recharge base during an episode. To trigger a base, an agent must step into its designated base having acquired all the available pieces of information, causing the agent to lose all the second-hand information it learned. Recharge bases guarantee that there is always reward to seek in this environment. Concretely, if $s = \{(p_i, k_i) \text{ for } i \in \{1, \ldots, a\}\}$ and $a_i = (a^{\text{dir}}_i, a^{\text{comm}}_i)$, we can define the reward as the sum of the reward for hearing new information, giving new information, and using the recharge base:
$R_i(s, a_i) = \sum_{j \neq i} \mathbb{1}\{\|p_i - p_j\|_\infty \le h \text{ and } k_{i, a^{\text{comm}}_j} = 0\} + \sum_{j \neq i} \mathbb{1}\{\|p_i - p_j\|_\infty \le h \text{ and } k_{j, a^{\text{comm}}_i} = 0\} + (n-1) \cdot c \cdot \mathbb{1}\{p_i = \text{base}_i \text{ and } k_i = (1, 1, \ldots, 1)\}$
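To make the reward concrete, below is a minimal NumPy sketch of this computation for a single agent; the array layout (0-indexed pieces, a `bases` array) and function name are our own assumptions, and `a` plays the role of n in the equation.

```python
import numpy as np

def reward(i, positions, knowledge, comm_actions, bases, h, c):
    """positions: (a, 2) int array; knowledge: (a, c) binary array (the k_j vectors);
    comm_actions: length-a list of uttered piece indices; bases: (a, 2) int array."""
    a = len(positions)
    in_range = lambda x, y: np.max(np.abs(positions[x] - positions[y])) <= h
    r = 0
    for j in range(a):
        if j == i or not in_range(i, j):
            continue
        r += int(knowledge[i, comm_actions[j]] == 0)   # hear a new piece from j
        r += int(knowledge[j, comm_actions[i]] == 0)   # give a new piece to j
    # recharge-base bonus: standing on the own base while knowing every piece
    if np.array_equal(positions[i], bases[i]) and knowledge[i].all():
        r += (a - 1) * c
    return r
```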
A non-ToM agent can have only limited success in this environment. Without reasoning about its own knowledge (i.e. without zeroth order ToM), it does not know when to use a recharge base. Moreover, without knowledge about other agents' knowledge (i.e. without first order ToM) it is not possible to know which agents possess the information pieces it is lacking. Even if it accidentally hears information, a non-first-order ToM agent cannot efficiently decide what to utter in response to maximize its reward. Higher order ToM is also often needed in SymmToM, as we will discuss further in Section 7.3.
Even though we only discussed a collaborative task for SymmToM, it can easily be extended for competitive tasks. Moreover, all our models are also designed to work under competitive settings. SymmToM satisfies the desiderata we laid out in the previous section, as we will detail below:
Symmetric action space. As defined in Eq. 1, $\mathcal{A}_i = \mathcal{A}_j$ for all $i, j$. Only a subset may be available at a time since agents cannot step outside the grid, speak a piece they have not heard, or move if they would collide with another agent in the same cell, but they all share the same action space.
Imperfect information. Messages sent by agents outside of the hearing range will not be heard. For example, in Fig. 2a green sends a message but it is not heard by anyone, since it is outside of red’s and blue’s range. Hearing ranges are guaranteed not to cover the whole grid, since h < 2w−1.
Observation of others. Agents have perfect vision of the grid, even if they cannot hear what was said outside of their hearing range. Hence, an agent may see that two agents were in range of each
other, and thus probably interacted, but not hear what was communicated. An example of this can be seen in Fig. 2a, where green observes blue and red interacting without hearing what was uttered.
The uncertainty in the observation also differentiates SymmToM from prior work: to solve the task perfectly, an agent needs to assess the probability that other agents outside its hearing range shared a specific piece of information to avoid repetition. This estimation may be performed using the knowledge of what each agent knows (first order ToM), the perceived knowledge of each of the agents in the interaction (second order ToM), as well as higher order ToM.
Information-seeking behavior. Rewards are explicitly given for hearing and sharing novel information, guaranteeing information-seeking is crucial in SymmToM. Recharge bases ensure that the optimal solution is not for all agents to accumulate in the same spot and quickly share all the information available; and that the information tracking required is more complex than accumulating past events. Conceptually, with recharge bases we introduce an explicit and observable forgetting mechanism. As discussed in Section 3, this allows for perpetual information seeking and requires information staleness estimation. An example of successful recharge base use is shown in Fig. 2b.
5 BASELINE LEARNING ALGORITHMS AND OTHER BOUNDS
To learn a policy for acting in the multi-agent SymmToM environment, it is a good strategy to use a multi-agent reinforcement learning algorithm. We use MADDPG (Lowe et al., 2017), a well-known multi-agent actor-critic framework with centralized planning and decentralized execution, to counter the non-stationary nature of multi-agent settings. In MADDPG, each actor policy receives its observation space as input, and outputs the probability of taking each action.
Notably, actors in MADDPG have no mechanism for remembering past turns. This is a critical issue in SymmToM, as agents cannot remember which pieces they currently know, which ones they shared and to whom, and other witnessed interactions. To mitigate this, it is necessary to add a recurrence mechanism to carry over information from past turns. One option would be to modify the agent policy using a recurrent network like an LSTM, as RMADDPG (Wang et al., 2020) does.
Perfect Information, Heuristic, and Lower Bound Models. Performance is difficult to interpret without simpler baselines. As a lower bound model we use the original MADDPG, which, since it does not have recurrence embedded, should perform worse than or equal to any of the modifications described above. We also include an oracle model (MADDPG-Oracle) that does not require theory of mind since it receives the current knowledge K for all agents in its observation space. The performance of MADDPG-Oracle may not always be achieved, as there could be unobserved communication with multiple situations happening with equal probability. Moreover, as the number of agents and the size of the grid increase, current reinforcement learning models may not be able to find an optimal spatial exploration policy; they may also not be capable of inferring the optimal piece of information to communicate in larger settings. In these cases, MADDPG-Oracle may not perform optimally, so we also include a baseline with heuristic agents to compare performance.
Heuristic agents will always move to the center of the board and communicate round-robin all the information pieces they know until they have all the available knowledge. Then, they will move efficiently to their recharge base and come back to the center of the grid, where the process restarts. We must mention that this heuristic is not necessarily the optimal policy, but it will serve as a baseline to note settings where current MARL models fail even with perfect information. Qualitatively, agents in smaller settings have been shown to approximately follow a policy like the heuristic just described.
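A sketch of this heuristic as a simple rule-based policy is shown below; the step-toward helper, the round-robin counter, and the axis convention are our own simplifications, not the authors' exact implementation.

```python
import numpy as np

def step_toward(pos, target):
    """Move one cell along the axis with the largest remaining distance."""
    delta = np.array(target) - np.array(pos)
    if np.all(delta == 0):
        return "no_movement"
    axis = int(np.argmax(np.abs(delta)))
    if axis == 0:
        return "down" if delta[0] > 0 else "up"
    return "right" if delta[1] > 0 else "left"

def heuristic_action(pos, knowledge, base, center, rr_counter):
    """Move to the center and speak known pieces round-robin; once all
    pieces are known, head to the recharge base, then return to the center."""
    known = np.flatnonzero(knowledge)
    if knowledge.all():
        move = step_toward(pos, base)      # go reset at the base
    else:
        move = step_toward(pos, center)    # gather and share at the center
    piece = int(known[rr_counter % len(known)]) if len(known) else 0
    return move, piece
```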
6 DIRECT MODELING OF SYMMETRIC THEORY OF MIND
In contrast to RMADDPG (Wang et al., 2020), we specifically design algorithms for our environment to maximize performance. Intuitively, our model computes a matrix, K ∈ {0, 1}c×a, that reflects the information pieces known by each agent from the perspective of the agent being modeled: Kij reflects if the agent being modeled believes that agent j knows i. K is updated every turn and used as input of the following turn of the agent, obtaining the desired recurrent behavior. K is also concatenated to the usual observation space, to be processed by a two-layer ReLU MLP and
obtain the probability distributions for speech and movement, as in the original MADDPG. There are several ways to approximate K. It is important to note that each agent can only partially observe communication, and therefore it is impossible to perfectly compute K deterministically.
The current knowledge is comprised of first-hand information (the initial knowledge of every agent, $F$, publicly available) and second-hand information. Second-hand information may have been heard this turn ($S$, whose computation will be discussed below) or in previous turns (captured in the $K$ received from the previous turn, noted $K^{(t-1)}$). Additionally, knowledge may be forgotten when an agent steps on a base having all the information pieces. To express this, we precompute a vector $B \in \{0, 1\}^a$ that reflects whether each agent is currently on its base, and a vector $E \in \{0, 1\}^a$ that determines if an agent is entitled to use its recharge base: $E_j = \mathbb{1}_{\sum_i K_{ij} = c}$ for all $j \in \{1, \ldots, a\}$.
We are then able to compute K as follows:
$K^{(t)}_{ij} = (F_{ij} = 1 \text{ or } S_{ij} = 1 \text{ or } K^{(t-1)}_{ij} = 1) \text{ and not } (B_j \text{ and } E_j)$   (2)
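A direct boolean transcription of Eq. 2 is sketched below; it treats the second-hand matrix S as given (its computation is described next), and the function name is ours.

```python
import numpy as np

def update_knowledge(F, S, K_prev, B, E):
    """Eq. 2 as boolean array operations.
    F, S, K_prev: (c, a) binary arrays; B, E: (a,) binary arrays."""
    gained = (F == 1) | (S == 1) | (K_prev == 1)   # first-hand, freshly heard, or remembered
    reset = (B == 1) & (E == 1)                    # agent is on its base and entitled to use it
    K = gained & ~reset[np.newaxis, :]             # broadcast the per-agent reset over all pieces
    # First-hand pieces re-enter through F on the next call, so a reset only
    # removes second-hand knowledge in the long run.
    return K.astype(int)
```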
F, K(t−1), and B are given as input, but we have not yet discussed the computation of the second-hand information S. S often cannot be deterministically computed, since our setting is partially observable. We will identify three behaviors and then compute S as the sum of the three:
S = S[0] + S[1] + S[2]
For simplicity, we will assume from now on that we are modeling agent k. S[0] will symbolize the implications of the information spoken by agent k: if agent k speaks a piece of information, they thus know that every agent in its hearing range must have heard it (first order ToM). S[1] will symbolize the implications of information heard by k: this includes updating k’s known information (zeroth order ToM) and the information of every agent that is also in hearing range of the speaker heard by k. S[2] will symbolize the estimation of information pieces communicated between agents that are out of k’s hearing range. Since we assume perfect vision, k will be able to see if two agents are in range of each other, but not hear what they communicate (if they do at all).
S[0] and S[1] can be deterministically computed. To do so, it is key to note that every actor knows the set of actions A ∈ {0, 1}c×a performed by each agent last turn, given that those actions were performed in their hearing range. Moreover, each agent knows which agents are in its range, as they all have perfect vision. We precompute H ∈ {0, 1}a×a to denote if two given agents are in range.
Then, $S^{[0]}_{ij} = 1$ if and only if information piece i was said by k, and agents k and j are in hearing range of each other. More formally,
$S^{[0]}_{ij} = A_{ik} \cdot H_{kj}$
$S^{[1]}_{ij} = 1$ if and only if agent k (the actor we are modeling) heard some agent $\ell$ speaking information piece i, and agent j is also in range of agent $\ell$. Note that agent k does not need to be in hearing range of agent j. More precisely,
$S^{[1]}_{ij} = A_{i\ell} \cdot H_{k\ell} \cdot H_{\ell j}$, for any agent $\ell$
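Both deterministic components reduce to masked matrix products; the sketch below assumes A (actions heard last turn) and H (pairwise hearing-range indicator) as defined above, with k the index of the modeled agent, and the function name is ours.

```python
import numpy as np

def deterministic_second_hand(A, H, k):
    """A: (c, a) binary, A[i, l] = 1 if agent l was heard saying piece i;
    H: (a, a) binary hearing-range indicator; k: index of the modeled agent."""
    # S0[i, j] = A[i, k] * H[k, j]: what k itself said reaches everyone in k's range.
    S0 = np.outer(A[:, k], H[k, :]).astype(int)
    # S1[i, j] = 1 iff A[i, l] * H[k, l] * H[l, j] = 1 for some speaker l:
    # pieces k heard from some speaker l also reach every agent j in l's range.
    heard_by_k = A * H[k, :][np.newaxis, :]        # keep only speakers in k's range
    S1 = (heard_by_k @ H > 0).astype(int)          # propagate to each speaker's range
    return S0, S1
```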
$S^{[2]}$ (the interactions between agents not in hearing range of the agent we are modeling) can be estimated in different ways. A conservative approach would be to not estimate interactions we do not witness ($S^{[2]} = 0$, which we will call MADDPG-ConservativeEncounter (MADDPG-CE)); another approach would be to assume that every interaction we do not witness results in sharing a piece of information that will maximize the rewards in that immediate turn. We will call this last approach MADDPG-GreedyEncounter (MADDPG-GE). MADDPG-GE assumes agents play optimally, but it does not necessarily know everything the other agents know, which could lead to a wrong prediction. This is particularly true during training, as agents may not behave optimally. The computation of $S^{[2]}$ for MADDPG-GE is as follows.
First, we predict the information piece $U_\ell$ that agent $\ell$ uttered. MADDPG-GE predicts $U_\ell$ will be the piece that the fewest agents in range know, as it will maximize immediate reward:
$U_\ell = \arg\min_i \sum_j (K_{ij} \text{ and } H_{j\ell}) \in \{1, \ldots, c\}$
With this prediction, agent j will know information i if at least one agent in its range said it:
$S^{[2]}_{ij} = 1$ if there exists $\ell \neq k$ such that $U_\ell = i$ and $H_{j\ell} = 1$ and $j \neq k$, else $0$
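A sketch of the greedy-encounter estimate following the two expressions above; ties in the arg min are broken by the first index, and the function name and array layout are our own.

```python
import numpy as np

def greedy_encounter(K, H, k):
    """K: (c, a) believed knowledge; H: (a, a) hearing ranges; k: modeled agent.
    Returns S2: (c, a) binary estimate of unheard exchanges."""
    c, a = K.shape
    # Predicted utterance of each agent l: the piece known by the fewest agents in l's range.
    counts = K @ H                 # counts[i, l] = number of agents in l's range knowing piece i
    U = counts.argmin(axis=0)      # U[l] in {0, ..., c-1}
    S2 = np.zeros((c, a), dtype=int)
    for j in range(a):
        if j == k:
            continue
        for l in range(a):
            if l != k and H[j, l]:
                S2[U[l], j] = 1    # j hears the predicted piece from every in-range speaker l
    return S2
```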
MADDPG-EstimatedEncounter (MADDPG-EE). MADDPG-CE and MADDPG-GE are two paths to information-sharing estimation, but in neither of them do we estimate the probability of an agent knowing a specific piece of information. In MADDPG-EstimatedEncounter (MADDPG-EE), the known information of other agents is not binary, i.e. $K_{ij} \in [0, 1]$. This added flexibility can avoid making predictions of shared information based upon unreliable information.
MADDPG-EE estimates the probability that an agent j uttered each piece of information ($U_j \in \mathbb{R}^c$) by providing the current information of all agents in its range to an MLP:
$U_j = \mathrm{softmax}(f(K_{1j}, \ldots, K_{cj}, \{K_{1\ell}, \ldots, K_{c\ell} \text{ for all } \ell \text{ where } H_{j\ell}\}))$, with $f$ an MLP
Then, the probability of having heard a specific piece of information will be the complement of not having heard it, which in turn means that none of the agents in range said it. More formally,
$S^{[2]}_{ij} = 1 - \prod_{\ell:\, H_{j\ell} = 1} (1 - U_{\ell i})$
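The probabilistic estimate composes per-speaker utterance distributions; the sketch below takes the MLP f as an arbitrary callable returning a length-c logit vector (and able to handle the variable-length feature set), so only the composition into S^[2] is shown, with names of our own.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def estimated_encounter(K, H, k, f):
    """K: (c, a) soft knowledge estimates; H: (a, a) hearing ranges; k: modeled agent;
    f: callable mapping the features of agent j (its own column of K plus the columns
    of agents in its range) to a length-c logit vector."""
    c, a = K.shape
    U = np.zeros((a, c))
    for j in range(a):
        in_range = [l for l in range(a) if H[j, l]]
        features = np.concatenate([K[:, j]] + [K[:, l] for l in in_range])
        U[j] = softmax(f(features))              # probability that agent j uttered each piece
    S2 = np.zeros((c, a))
    for j in range(a):
        for i in range(c):
            # probability that at least one in-range speaker l uttered piece i
            S2[i, j] = 1.0 - np.prod([1.0 - U[l, i] for l in range(a) if H[j, l]])
    return S2
```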
Since MADDPG-EE requires functions to be differentiable, we use a differentiable approximation of Eq. 2. Pseudocode of MADDPG-EE’s implementation can be found in Section A.4. MADDPG-EE solely focuses on first-order ToM, and we leave modeling with second-order ToM to future work. The structure of the model would be similar but with an order of magnitude more parameters.
7 EXPERIMENTS
7.1 EXPERIMENTAL SETTINGS
In this section, we compare the different algorithms explained in the previous section. The observation space consists of a processed version of the last turn in the episode, to keep the input size controlled. More precisely, the observation space is composed of: the position of all agents, all recharge bases, the current direction each agent is moving towards and what it communicated in the last turn, the presence of a wall in each of the immediate surroundings, and every agent’s first-hand information. First-hand information is publicly available in our experiments to moderate the difficulty of the setup, but this constraint could also be removed. This simple setting is still partially observable, since the agents cannot hear interactions outside of their hearing range.
We use the reward as our main evaluation metric. This metric indirectly evaluates ToM capabilities, since information-seeking is at the core of SymmToM. We train through 60000 episodes, and with 7 random seeds to account for high variances in the rewards obtained. Our policies are parametrized by a two-layer ReLU MLP with 64 units per layer, as in the original MADDPG (Lowe et al., 2017). MADDPG-EE’s function f is also a two-layer ReLU MLP with 64 units per layer.
We test two board sizes (w = 6 and w = 12), two numbers of agents (a = 3 and a = 4), and three quantities of information pieces (c = a, c = 2a, and c = 3a). The length of each episode is set to 5w. More detail about design decisions can be found in Section A.5.
7.2 MAIN RESULTS
As we can observe in Table 1, there is a significant difference in performance between MADDPG-Oracle and MADDPG (MADDPG-Oracle is 123% better on average): this confirms that developing ToM and recurrence is vital to perform successfully in SymmToM. MADDPG-Oracle is often not an upper bound: when c > a, the heuristic performs better (101% on average). This shows that even with perfect information, it can be difficult to learn the optimal policy using MADDPG. Moreover, models with a recurrence mechanism perform significantly better than MADDPG (61% better on average), also showing that remembering past information gives a notable advantage. As expected, having recurrent models tailored to our problem resulted in better performance than a general LSTM recurrence (RMADDPG). The performance of the best of the tailored models (MADDPG-CE, MADDPG-GE, MADDPG-EE) was 42% better on average than plain RMADDPG. LSTM was able to surpass the best of the tailored models only for a = 3, w = 12, c = 3a.
Increasing c generally decreases global rewards for learned agents (on average, rewards for c = 2a are 75.54% of those for c = a, and rewards for c = 3a are 75.66% of those for c = a).
This suggests that probabilistic decisions are harder to learn, or impossible to successfully navigate when several events are equally likely. MADDPG-EE did not show improvements over the other learned agents, and in some cases decreased its performance more heavily than other learned agents (e.g. w = 6, c = 3a). MADDPG-EE uses an MLP in its definition of S[2], which gives more flexibility but also implies a more complex function to learn. We leave to future work to explore other probabilistic agents, but the significant difference in performance between all of the learned models and the highest performing ones (MADDPG-Oracle and the heuristic) shows there is ample space for improvement in this task, and hence proves SymmToM to be a simple yet unsolved benchmark.
Increasing a results in a 10% reduction in performance on average for learned models. Nonetheless, the heuristic improved its rewards by an average of 46%, given the larger opportunities for rewards when including an additional listener. Overall, this implies that increasing a also makes the setup significantly more difficult. Finally, increasing w did not have a conclusive result: for a = 4 it consistently decreased performance by ∼17%, but for a = 3 we saw an improvement of 16% and 61% for c = a and c = 2a respectively.
In sum, modifying c and a provides an easy way of making a setting more difficult without introducing additional rules.
7.3 DISCUSSION
In the previous section we analyzed results using the metric of episode rewards. Although the rewards in SymmToM are designed to correlate with information seeking and with the knowledge state of the agents themselves and others, they do not explicitly show whether agents are exhibiting theory of mind. To do so more directly, we develop two categories of possible analyses: scenarios specifically designed to test theory of mind, and post-hoc analyses of episodes.
A classic example of a scenario specifically designed to test ToM behavior is the Sally-Anne task (Wimmer & Perner, 1983). This false belief task, originally designed for children, aims to test if a passive observer can answer questions about the beliefs of another person, in situations where that belief may not match reality. If we were to use it for machine ToM, we could repeat the experiment and ask an agent to predict the position of an object while varying the underlying conditions. This test is feasible because there is only one agent with freedom of action, which ensures that desired conditions are met every time. Testing becomes infeasible when giving multiple agents freedom of action, as constraints planned in test design may be broken by a collective drift from the strategy envisioned by the designer. Testing becomes easier if we allow for controlling all agents but one, as shown in Fig. 3. Other tests besides the ones shown may be designed. In particular, in Fig. 3d we show an example of probabilistic ToM where two communicative events are equally likely, but one could modify this scenario to have different probabilities and test the expected value of the turns until red successfully shares an information piece. One could also design retroactive deduction tests: for example, in Fig. 3d if red communicates and receives no reward, it can deduce that green had
received that information from blue. If there had been another agent (let’s say, a yellow agent) in range of blue when it spoke to green, the red agent could also update its knowledge about yellow. Results for the tests proposed in Figure 3 are detailed in Appendix A.1.
Post-hoc analysis also has its challenges in multi-agent settings, even in the most direct cases. Thanks to our reward shaping, using recharge bases is always the optimal move when an agent has all the information available: an agent will have a reward of (n−1)·c for using the base, whereas it can only gain up to n−1+c−1 per turn if it decides not to use it. Even in this case, small delays in using the base may occur, for example if the agent can gather additional rewards on its path to the base. More generally, having multiple agents makes specific behaviors attributable to any of the several events happening at once, or a combination of them.
Even though it may be difficult to establish causality when observing single episodes, we developed metrics that comparatively show which models are using specific features of the environment better than others. Examples of metrics are unsuccessful recharge base usage count; number of times an agent shares an information piece everyone in its range already knows while having better alternatives; number of times an agent moves away from every agent when not having all the information pieces available; among others. See Appendix A.2 for a detailed description and results. Reward can also be understood as a post-hoc metric with a more indirect interpretation.
Post-hoc analyses of single episodes can also be blurred by emergent communication. Because agents were trained together, they may develop special meaning assignment to particular physical movements or messages. Even though qualitatively this does not seem to be the case for the models presented in the paper, tests should also account for future developments. This also implies that one should not overinterpret small differences in the metrics described in the paragraph above.
8 CONCLUSIONS AND FUTURE WORK
We defined a framework to analyze machine theory of mind in a multi-agent symmetric setting, a more realistic setup than the tasks currently used in the community. Based on the four properties needed for symmetric theory of mind to arise, we provided a simplified setup on which to test the problem, and we showed we can easily increase difficulty by growing the number of agents or communication pieces. Our main goal in this work was not to solve symmetric theory of mind, but rather to give a starting point to explore more complex models in this area. We showed that even with this minimal set of rules, SymmToM proves algorithmically difficult for current multi-agent deep reinforcement learning models, even when tailoring them to our specific task. We leave to
future work the development of models that handle second-order theory of mind and beyond, and of models that periodically reevaluate past turns to make new deductions with information gained a posteriori (i.e., models that pass retroactive deduction tests). Another interesting direction would be to replace the information pieces with constrained natural language: communication sharing in our task is binary, whereas in language there is flexibility to communicate different subsets of a knowledge base using a single sentence. We will make our codebase public upon publication; it also includes additional observation space restrictions to increase difficulty.
ETHICS AND REPRODUCIBILITY STATEMENT
Theory of mind research at its core deals with understanding the mental states of other individuals. In the present work we focused on collaborative machine theory of mind, which entails interactions only between artificial agents, and only in scenarios where every party involved has the same incentive structure. This design decision is intentional. Other approaches to theory of mind research could include scenarios with human-agent interaction, which could potentially lead to agents learning to model human players’ mental states. This is not concerning per se, but in scenarios where players do not have the same incentive structure, it could lead to agents learning to deceive other players (potentially human players). The state of the art in machine theory of mind is still far away from these capabilities, but we believe that experiment design choices should always take this matter into account.
Regarding reproducibility, we will make all code public upon acceptance, including the environment and the models’ code. The exact set of parameters used for training will also be shared. Multi-agent reinforcement learning models do have high variance, so models should be run several times to obtain confidence intervals similar to the ones shown in Fig. 5 and Fig. 6.
A APPENDIX
A.1 AD-HOC THEORY OF MIND TESTS
We test on the four examples shown in Figure 3, adapting the examples to fit one of the grid sizes we already experimented on. For the tests described in Figure 3a and Figure 3b, we test two different grid sizes: w = 6 and w = 12. For the tests described in Figure 3c and Figure 3d we only test w = 12 and w = 6 respectively. Image depictions of the exact test configurations can be seen in Figure 4.
We measure three metrics: average success rate, average failure rate, and ratio of average turns to succeed vs. optimum (RATSO). Note that Average Success Rate and Average Failure Rate do not necessarily sum to 1, since these two metrics only include trials where the agent reached one of the two proposed outcomes. If, for example, the agent never moved from the starting point, the trial would not be counted positively towards Avg. Success Rate or Avg. Failure Rate. In addition, the ratio of average turns to succeed vs. optimum (RATSO) is the ratio between the average number of turns it took to succeed in successful trials and the optimum number of turns to succeed in a specific trial.
For the tests in Figure 4a, 4b, 4d, and 4e, the trial ends when the red agent reaches the hearing range of one of the two possible target agents. The test depicted in Figure 4f is a pass/fail test: if red moves suboptimally at any point before meeting blue, the trial is declared failed. This makes it a particularly difficult test to pass at random. Because of the nature of this second order ToM test, we only report the average success rate. Finally, for the probabilistic ToM test we want to measure how fast red can communicate all the information it has to green. The optimal number of turns is 1.5 (as described in Figure 3), and because of the nature of this test we will only report RATSO.
All results can be found in Table 2. As expected, a larger average success rate correlates with higher reward models (MADDPG-CE and MADDPG-GE are the best models), suggesting that the
reward is a valuable overall metric. The low average success rates across all tests show there is significant room to improve in this benchmark. Success rate drops sharply when increasing the grid size, suggesting larger grids impose more difficult training settings. In this analysis, we used models that were trained specifically for each parameter combination.
Results for Oracle were omitted since some tests assumed no knowledge about communication (e.g., even though the agents in Figure 4b do not communicate during the test, the test was designed to check whether the red agent assumed them to be communicating).
As we emphasized in the main text, many more tests can be proposed. The code base we will release allows for easily adding new tests to the suite.
A.2 POST-HOC ANALYSES
We developed several post hoc metrics to analyze specific aspects of our models.
• Unsuccessful recharge base usage rate: Average times per episode an agent steps on its recharge base without having all the information available (i.e. wrong usage of the recharge base). Note that an agent may step on its base just because it is on the shortest path to another cell. Therefore, a perfect theory of mind agent will likely not have zero on this score; but generally, lower is better. See results in Table 3.
• Wrong communication piece selection count: Average times per episode an agent attempted to say an information piece it does not currently possess. In these cases, no communication happens. Lower is better. See results in Table 4.
• Useless communication piece selection count: Average times per episode an agent communicated an information piece that everyone in its hearing range already knew, while having a piece of information that at least one agent in its range did not know. Lower is better. See results in Table 5.
• Useless movement: Average times per episode an agent moves away from every agent that does not have the exact same information it has, given that the agent does not currently possess all the information available. This means that the agent is moving away from any possible valuable interaction. Lower is better. See results in Table 6.
All metrics are normalized by number of agents (i.e., they show the score for a single agent). This allows for better comparison between a = 3 and a = 4 settings.
RMADDPG had the worst scores for unsuccessful recharge base use rate and useless communication piece selection count. RMADDPG scored 43% more than Oracle for unsuccessful base usage on average, and 63% more than Oracle on average for usage of a useless communication piece. The best tailored models (MADDPG-CE and MADDPG-GE) performed similarly to Oracle on average for these two metrics. In contrast, MADDPG-CE and MADDPG-GE performed significantly worse than Oracle for the wrong communication piece selection count (48% and 56% more than Oracle on average). This suggests that all models may be making wrong decisions, but RMADDPG is biased towards communicating redundant information whereas MADDPG-CE and MADDPG-GE tend towards not communicating at all (the true effect of trying to communicate something they are not allowed to). Further analysis is needed to truly understand if these apparently wrong behaviors were done in turns where the agent had all the information available to make a better move, or if this is their default when they believe they have nothing of value to communicate. A priori, RMADDPG’s bias seems more principled, but it still showed worse performance overall.
No learned model performed particularly better in the useless movement metric (average differences in performance were less than 10%), suggesting that they perform pointless movements in similar frequencies. It is important not to overinterpret small differences in these metrics. For example, a useless movement may be a signal of emergent communication. Furthermore, an agent may communicate something suboptimal for its immediate reward but this move may not affect its expected reward for the trial.
A.3 TRAINING CURVES FOR THREE RANDOM SEEDS COMBINED
A.4 PSEUDOCODE OF MADDPG-EE
Algorithm 1: Actor implementation of MADDPG-EE, approximating K to make it differentiable.
Input: observation, $A \in \{0,1\}^{c \times a}$, agent_idx $\in \{0, \ldots, a-1\}$, $F \in \{0,1\}^{c \times a}$, $K \in \{0,1\}^{c \times a}$, $B \in \{0,1\}^{a}$, $H \in \{0,1\}^{a \times a}$
1. Remove agents from their own hearing range, so they do not hear themselves from the previous turn (which would be problematic when using recharge bases): $H = H - 1_{a \times a}$
2. Compute $S^{[0]}$, all the heard information spoken by agent_idx: $S^{[0]} = \mathrm{copy}_{a\ \text{times}}(A[:, \text{agent\_idx}]) \odot \mathrm{copy}_{c\ \text{times}}(H[\text{agent\_idx}, :])$
3. Compute $S^{[1]}$, all information heard by agent_idx, spoken by all agents: $S^{[1]} = (A \odot \mathrm{copy}_{c\ \text{times}}(H[\text{agent\_idx}, :])) \cdot H$
4. Compute $S^{[2]}$, an estimation of information pieces communicated between agents that were out of agent_idx’s hearing range: $U_j = \mathrm{softmax}(f_1(K_{1j}, \ldots, K_{cj}, \{K_{1\ell}, \ldots, K_{c\ell} \text{ for all } \ell \text{ where } H_{j\ell}\}))$, with $f_1$ an MLP; $S^{[2]}_{ij} = 1 - \prod_{\ell:\, H_{j\ell}=1} (1 - U_{\ell i})$ for all $i \in \{0, \ldots, c-1\}$ and $j \in \{0, \ldots, a-1\}$
5. $S = S^{[0]} + S^{[1]} + S^{[2]}$
6. $E_i = \mathbb{1}_{\mathrm{sum}(K[:, i]) = c}$ for all $i \in \{0, \ldots, a-1\}$, so that $E \in \{0,1\}^{a}$
7. $K = \mathrm{step}\big(F \cdot 100 + K + S - 2 \cdot \mathrm{copy}_{c\ \text{times}}(B \odot E)\big)$
8. return $\mathrm{softmax}(f_2([\text{observation}\ K]))$, $K$, where $f_2$ is an MLP
A.5 EXPERIMENT DESIGN DECISIONS AND CONSIDERATIONS ABOUT RESULT PRESENTATION
All our experiments are with h = 1: only the immediate neighbors of an agent will hear what they communicate.
We tested with a = 3 and a = 4 and not larger numbers of agents, as the training time increases quadratically with a; also, the intrinsic difficulty of a larger setup (even with perfect information) would possibly degrade performance to the point of making it impossible to compare models.
Running experiments with the same number of turns for every setting would imply that agents can move less in combinations with larger values of w, hence the need to make it proportional to the size of the grid. Since the duration of the experiment is directly proportional to the length of the episodes, we settled on a small multiplier. A length of 5w allows agents to move to each edge of the grid and back to the center. A similar decision is required when choosing c: having a constant number of information pieces when increasing the number of agents would make the problem easier, as each agent would have fewer options for first-hand information pieces.
We decided to evaluate by running additional episodes over the best checkpoints of each model because there was high variance for some runs, and because of drops in performance after achieving the highest rewards. Those results are the basis of the discussion and can be seen in Table 1. Still, we share the training curves so that the reader can observe these behaviors in Fig. 5 and Fig. 6.
We used the same hyperparameters as the ones used in MADDPG, except with a reduced learning rate and tau (lr = 0.001 and τ = 0.005). We used the same parameters for all our experiments. | 1. What is the main contribution of the paper regarding symmetric TOM?
2. What are the strengths and weaknesses of the proposed method, particularly in comparison to other baselines?
3. How does the paper's problem correspond to practical problems, and what is its significance in multi-agent decision-making settings?
4. Why does the paper focus on forgetting information in information-seeking behavior, and how does it relate to the task's infinite horizon and utility dependence on other agents' state and actions?
5. Is there a connection between human-agent interaction and centralized planning mechanisms, and if so, how would the authors base their framework on decentralized planning frameworks or game-theoretic formulations?
6. What is the relevance of related works in explanation in deterministic planning setting, epistemic reasoning, DEC-POMDPs, I-POMDPS, and multi-agent planning frameworks to the paper's topic? | Summary Of The Paper
Review | Summary Of The Paper
The paper focuses on establishing the framework of symmetric TOM. The paper identifies such decision-making settings as being characterized by four different properties, namely, symmetric action space, imperfect information, observation of others, and information-seeking behavior. The paper then introduces a sample task that instantiates the symmetric TOM setting and then formulates some solution approaches for the setting that build on an earlier multi-agent actor-critic framework, with an unmodified version being the baseline. The proposed method, particularly the GreedyEncounter variant, seems to perform better than some of the other baselines, but not as well as the heuristic version.
Review
The problem of imbuing systems with a theory of mind is an interesting one and one that is getting attention in multiple areas in AI. Many works argue for not only modeling the beliefs and knowledge of other agents but also argue that in many scenarios where the agent is interacting with humans, it would need to explicitly take into account what the human expects from the agent (the authors could check works done on explanation in the deterministic planning setting for some related work [1]). While the general direction is interesting, I don't think the paper is ready for publication as the current paper doesn't make a convincing argument for the novelty or significance of the current results.
Unfortunately, the current version of the paper doesn’t contain a related work section. This is a particularly glaring omission for a paper that is looking at as widely studied a problem as modeling of other agents for multi-agent decision-making settings. Apart from works that frame such modeling of other agents as building theories of mind, which generally happens when there is a human in the setting or if the work is inspired by the psychological phenomena, many (if not most) multi-agent planning itself allows some such form of modeling (even ignoring all the various game-theoretic formulations). Not to mention, there is a pretty mature and well-investigated sub-area of reasoning and planning called epistemic reasoning that has looked at the problem of modeling beliefs and knowledge of other agents for a long time (authors can check [2] for a useful starting point).
A related work section helps the reader compare the contributions of a specific paper against all the previous works that have been done in related areas. For example, in the paper, there was no discussion on how the current problem being proposed compares to multi-agent planning frameworks like DEC-POMDPs [3] or I-POMDPS [4]. Particularly since one of the baselines used in the paper is specifically a solution method for DEC-POMDPs. To the best of my understanding, the specific scenario discussed in the paper seems to be a special case of a DEC-MDP if you allow the facts that the agent collects to be modeled as part of the state. If that is not the case, then the paper should explain why it is not so.
This brings me to the specific problem being studied in the paper. Why is this problem of particular interest? Does it correspond to any particular practical problems? The methods introduced here leverage specifics of the problem and as such, the usefulness of the methods is also limited by the utility of the setting. Additionally, the paper doesn’t provide any complexity analysis of this particular problem type or an optimal solution. It was also not clear to me why forgetting information is particularly important for information-seeking behavior. If a given agent has partial observability, the task has an infinite horizon, and its utility depends on the other agent’s state and actions, you could always create a setting where the optimal strategy for the agent involves seeking out the information about other agents' state.
Also, the beginning of the paper makes some connections to human-agent interaction, if this is one of the problems the authors are interested in, then they are generally incompatible with any centralized planning mechanisms. Since in most cases you can’t plan for the human. It might make more sense for the authors to base their framework on decentralized planning frameworks like I-POMDPs or some game-theoretic formulation.
[1] Sreedharan, Sarath, Tathagata Chakraborti, and Subbarao Kambhampati. "Foundations of explanations as model reconciliation." Artificial Intelligence 301 (2021): 103558.
[2] Fagin, Ronald, et al. Reasoning about knowledge. MIT press, 2003.
[3] Oliehoek, Frans A., and Christopher Amato. A concise introduction to decentralized POMDPs. Springer, 2016.
[4] Gmytrasiewicz, Piotr J., and Prashant Doshi. "A framework for sequential planning in multi-agent settings." Journal of Artificial Intelligence Research 24 (2005): 49-79.
ICLR
Title
Symmetric Machine Theory of Mind
Abstract
Theory of mind (ToM), the ability to understand others’ thoughts and desires, is a cornerstone of human intelligence. Because of this, a number of previous works have attempted to measure the ability of machines to develop a theory of mind, with one agent attempting to understand another’s internal “mental state”. However, ToM agents are often tested as passive observers or in tasks with specific predefined roles, such as speaker-listener scenarios. In this work, we propose to model machine theory of mind in a more flexible and symmetric scenario; a multiagent environment SymmToM where all agents can speak, listen, see other agents, and move freely through a grid world. An effective strategy to solve SymmToM requires developing theory of mind to maximize each agent’s rewards. We show that multi-agent deep reinforcement learning models that model the mental states of other agents achieve significant performance improvements over agents with no such ToM model. At the same time, our best agents fail to achieve performance comparable to agents with access to the gold-standard mental state of other agents, demonstrating that the modeling of theory of mind in multi-agent scenarios is very much an open challenge.
1 INTRODUCTION
Human communication is shaped by the desire to efficiently cooperate and achieve communicative goals (Tomasello, 2009). Children learn from a young age that the others they interact with have independent mental states, and therefore communicating is necessary to obtain information from or shape the intentions of those they interact with. Remembering and reasoning over others’ mental states ensures efficient communication by avoiding having to repeat information, and in cases where cooperation is involved contributes to achieving a common goal with minimal effort.
Because of this, there is growing interest in developing agents that can exhibit this kind of behavior, referred to as Theory of Mind (ToM) by developmental psychologists (Premack & Woodruff, 1978).1 Previous work on agents imbued with some capability of ToM has focused mainly on two types of tasks. The former are tasks where the agent is a passive observer of a scene that has to predict the future by reasoning over others’ mental states. These tasks may involve natural language (Nematzadeh et al., 2018) or be purely spatial (Gandhi et al., 2021; Rabinowitz et al., 2018; Baker et al., 2011). The latter are tasks where the ToM agent has a specific role, such as “the speaker” in speaker-listener scenarios (Zhu et al., 2021).
In contrast, human cooperation and communication is very often multi-party, and rarely assumes that people have pre-fixed roles. Moreover, human interlocutors are seldom passive observers of a scene but instead proactively interact with their environment. Since previous domains limited us to research questions where most parties involved did not have an active role, we developed a more flexible environment where we can now study what happens when all participants must act as both speaker and listener. In this paper, we present SymmToM, a fully symmetric multi-agent environment where all agents can see, hear, speak, and move, and are active players of a simple information-gathering game. To solve SymmToM, agents need to exhibit different levels of ToM, as well as efficiently communicate through a simple channel with a fixed set of symbols.
1In the present work we focus solely on reasoning over mental states. Other aspects of ToM include understanding preferences, goals, and desires of others. Multi-agent scenarios for inferring agent’s goals have been studied (Ullman et al., 2009), and passive-observer benchmarks (Gandhi et al., 2021; Shu et al., 2021; Netanyahu* et al., 2021) have been proposed for evaluating understanding of agent’s goals and preferences.
SymmToM is a partially observable setting for all agents: even when agents have full vision, hearing may be limited. This also differentiates SymmToM from prior work, as modeling may require probabilistic theory of mind. In other words, agents need to not only remember and infer other agents’ knowledge based on what they saw, but also estimate the probability that certain events happened. This estimation may be performed by assuming other agents’ optimal behavior and processing the partial information available. Despite its simplicity, SymmToM fulfills the properties required for symmetric ToM to arise, which will be discussed in the following section.
We find that SymmToM cannot be completely solved either by using well-known multi-agent deep reinforcement learning (RL) models, or by tailoring those models to our task. We show that even maintaining the simple rules of the environment, modifying its parameters results in much more difficult challenges, even for models where we artificially introduce perfect information. We discuss examples where different levels of theory of mind are required to solve the task, and possible metrics.
2 THEORY-OF-MIND AGENTS
A belief Theory-of-Mind agent can be defined as a modification of the standard multi-agent RL paradigm, where the agents’ policies are conditioned on their beliefs about others. Formally, we define a reinforcement learning problem M as a tuple of a state space S, action space A, state transition probability function T ∈ S × A → R, and reward function R ∈ S × A → R, i.e.M := 〈S,A, T,R〉. In this setting, an agent learns a (possibly probabilistic) policy π : S → A that maps from states to actions, with the goal of maximizing reward.
In a multi-agent RL setting each agent can potentially have its own state space, action space, transition probabilities, and reward function, so we can define an instance of M_i = 〈S_i, A_i, T_i, R_i〉 for each agent i. For convenience, we can also define a joint state space S = ⋃_i S_i that describes the entire world in which all agents are interacting. Importantly, in this setting each agent will have its own view of the entirety of the world, described by a conditional observation function ω_i : S → Ω_i that maps from the state of the entire environment to only the information observable by agent i.
As elaborated above, ToM is the ability to know (and act upon) the knowledge that an agent has. Agents with no ToM will follow a policy that depends only on their current (potentially partial or noisy) observation of their environment: π_i(a_{i,t} | ω_i(s_t)). Agents with zeroth order ToM can reason over their own knowledge. These agents will be stateful, π_i(· | ω_i(s_t), h_t^{(i)}), where h_t^{(i)} is i's hidden state. Hidden states are always accessible to their owner, i.e. i has access to h_t^{(i)}.
Agents with capabilities of reasoning over other agents' mental states will need to estimate h_t^{(j)} for j ≠ i. We will denote the estimation that i does of j's mental state in time t as ĥ_t^{(i,j)}:
π_i(· | ω_i(s_t), h_t^{(i)}, ĥ_t^{(i,1)}, . . . , ĥ_t^{(i,i−1)}, ĥ_t^{(i,i+1)}, . . . , ĥ_t^{(i,n)})
How do we estimate ĥ_t^{(i,j)}? As a function of i's (the predicting agent's) previous hidden state at t−1, i's observation at t−1, and i's prediction of the hidden states of every agent in the previous turn:
ĥ_t^{(i,j)} = f(h_{t−1}^{(i)}, ω_i(s_{t−1}), ĥ_{t−1}^{(i,1)}, . . . , ĥ_{t−1}^{(i,i−1)}, ĥ_{t−1}^{(i,i+1)}, . . . , ĥ_{t−1}^{(i,n)})
i's prediction of other agents' observations at t−1 is also crucial, but not explicitly mentioned since it can be computed using ω_i(s_{t−1}). For the initial turn, ĥ_0^{(i,j)} may be initialized differently depending on the problem: if initial knowledge is public, ĥ_0^{(i,j)} is trivial; if not, ĥ_0^{(i,j)} may be estimated.
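As a concrete reading of these update equations, the Python sketch below (all names are ours, not the paper's) wires one step of a first-order ToM policy: the estimates ĥ_t^{(i,j)} are rolled forward with f from the previous hidden state, observation, and estimates, and then passed to the policy together with the current observation.

def first_order_tom_step(policy, f, omega_i, s_t, s_prev, h_i, h_hat_prev):
    # h_hat_prev: dict mapping every other agent j to i's previous estimate of j's hidden state.
    # Roll the estimates forward from i's own previous hidden state, previous observation,
    # and the previous estimates of all other agents (the update equation above).
    h_hat = {j: f(h_i, omega_i(s_prev), h_hat_prev) for j in h_hat_prev}
    # The policy conditions on the current observation, i's own hidden state, and the estimates.
    action, h_i_next = policy(omega_i(s_t), h_i, h_hat)
    return action, h_i_next, h_hat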
3 SYMMETRIC THEORY-OF-MIND
We define symmetric theory of mind environments as settings where theory of mind is required to perform a task successfully, and all agents have the same abilities. There are at least four defining characteristics of symmetric ToM to arise:
Symmetric action space. In symmetric ToM all agents are required to have the same action space (in contrast to, for example, ToM tasks in speaker-listener settings). Concretely, A_i = A_j ≠ ∅ for all i, j.
Imperfect information. In perfect information scenarios all knowledge is public, making it impossible to have agents with different mental states. In ToM tasks in general, there could be a subset of agents with perfect information: one example would be a passive observer that needs to predict future behavior. In symmetric ToM, since all agents have the same abilities and roles, all agents must have imperfect information. More precisely, ω_i must not be the identity for any agent i.
Observation of others. Agents must have at least partial information of another agent to estimate its mental state. In contrast to passive-observer settings, in symmetric ToM every agent must be able to partially observe all others. More precisely, ω_i must observe at least partial information about s_t^{(j)} (the subset of s_t that refers to agent j), although we do not require s_t^{(j)} ≠ ∅ in every single turn. Moreover, if communication is allowed, it is desirable to partially observe or infer interactions between two or more agents to develop second order ToM (i.e. predicting what an agent thinks about what another agent is thinking) or higher.
Information-seeking behavior. It should be relevant for successfully performing the task to gather as much information as possible, and this information-gathering should involve some level of reasoning over other agents' knowledge. This is true for first-order ToM tasks in general, and could be formalized as π* ≠ π_i for any zeroth-order ToM policy π_i(· | ω_i(s_t), h_t^{(i)}). Furthermore, it would be desirable to design a task with perpetual information seeking behavior, since it would ensure that all agents have an incentive to play efficiently even in long episodes. If one wants to design a task with perpetual information-seeking and finite knowledge, information must be forgotten eventually. A forgetting mechanism could be implemented as an explicit loss of knowledge under specific conditions, or by making remembrances less reliable or noisy. Moreover, this introduces the concept of information staleness. Since information is not cumulative and the environment is only partially observable, agents will need to estimate whether what they knew to be true still holds in the present.
4 THE SYMMTOM ENVIRONMENT
SymmToM is an environment where a agents are placed in a w×w grid world, and attempt to maximize their reward by gathering all the information available in the environment. There are c available information pieces, that each agent may or may not know initially. Information pieces known at the start of an episode are referred to as first-hand information. Each turn, agents may move through the grid to one of its four neighboring cells, and may speak exactly one of their currently known information pieces. More precisely, the action space of agent j is defined as follows:
Aj = {left, right, up, down, no movement} × {1, . . . , c} (1)
When an agent utters an information piece, it is heard by every agent in its hearing range (an h×h grid centered in each agent, with h < 2w−1). The agents who heard the utterance will be able
to share this newly-learned information with others in following turns. We refer to this as secondhand information, since it is learned –as opposed to first-hand information, given at the start of each episode. The state space is comprised of the position of the agents and their current knowledge:
S = {(p_i, k_i) : i ∈ {1, . . . , a}}, where p_i ∈ {1, . . . , w} × {1, . . . , w} and k_i ∈ {0, 1}^c
Each agent aims to maximize their individual reward R_i via information seeking and sharing. Rewards are earned by hearing a new piece of information, giving someone else a new piece of information, or correctly using recharge bases. Recharge bases are special cells where agents can reset their knowledge in exchange for a large reward (e.g. (n − 1)c times the reward for listening to or sharing new information). Each agent has its own stationary recharge base during an episode. To trigger a base, an agent must step into its designated base having acquired all the available pieces of information, causing the agent to lose all the second-hand information it learned. Recharge bases guarantee that there is always reward to seek in this environment. Concretely, if s = {(p_i, k_i) : i ∈ {1, . . . , a}} and a_i = (a^dir_i, a^comm_i), we can define the reward as the addition of the reward for hearing new information, giving new information, and using the recharge base:
R_i(s, a_i) = Σ_{j ≠ i} 1{||p_i − p_j||_∞ ≤ h and k_{i, a^comm_j} = 0} + Σ_{j ≠ i} 1{||p_i − p_j||_∞ ≤ h and k_{j, a^comm_i} = 0} + (n − 1) · c · 1{p_i = base_i and k_i = (1, 1, . . . , 1)}
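For concreteness, a minimal NumPy sketch of this per-turn reward is given below; all variable names are ours, we read n as the number of agents a, and the hearing range is implemented as a Chebyshev-distance check.

import numpy as np

def reward_i(i, pos, know, comm, bases, h):
    # pos: (a, 2) agent positions; know: (a, c) 0/1 knowledge; comm: (a,) piece uttered by each
    # agent this turn; bases: (a, 2) recharge-base cells; h: hearing range.
    a, c = know.shape
    r = 0.0
    for j in range(a):
        if j == i:
            continue
        in_range = np.max(np.abs(pos[i] - pos[j])) <= h
        if in_range and know[i, comm[j]] == 0:   # i hears a piece it did not know
            r += 1.0
        if in_range and know[j, comm[i]] == 0:   # i gives j a piece j did not know
            r += 1.0
    if np.array_equal(pos[i], bases[i]) and know[i].sum() == c:
        r += (a - 1) * c                         # correct use of i's recharge base
    return r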
A non-ToM agent can have only limited success in this environment. Without reasoning about its own knowledge (i.e. without zeroth order ToM), it does not know when to use a recharge base. Moreover, without knowledge about other agents' knowledge (i.e. without first order ToM) it is not possible to know which agents possess the information pieces it is lacking. Even if it accidentally hears information, a non-first-order ToM agent cannot efficiently decide what to utter in response to maximize its reward. Higher order ToM is also often needed in SymmToM, as we will discuss further in Section 7.3.
Even though we only discussed a collaborative task for SymmToM, it can easily be extended for competitive tasks. Moreover, all our models are also designed to work under competitive settings. SymmToM satisfies the desiderata we laid out in the previous section, as we will detail below:
Symmetric action space. As defined in Eq. 1,Ai = Aj for all i, j. Only a subset may be available at a time since agents cannot step outside the grid, speak a piece they have not heard, or move if they would collide with another agent in the same cell, but they all share the same action space.
Imperfect information. Messages sent by agents outside of the hearing range will not be heard. For example, in Fig. 2a green sends a message but it is not heard by anyone, since it is outside of red’s and blue’s range. Hearing ranges are guaranteed not to cover the whole grid, since h < 2w−1.
Observation of others. Agents have perfect vision of the grid, even if they cannot hear what was said outside of their hearing range. Hence, an agent may see that two agents were in range of each
other, and thus probably interacted, but not hear what was communicated. An example of this can be seen in Fig. 2a, where green observes blue and red interacting without hearing what was uttered.
The uncertainty in the observation also differentiates SymmToM from prior work: to solve the task perfectly, an agent needs to assess the probability that other agents outside its hearing range shared a specific piece of information to avoid repetition. This estimation may be performed using the knowledge of what each agent knows (first order ToM), the perceived knowledge of each of the agents in the interaction (second order ToM), as well as higher order ToM.
Information-seeking behavior Rewards are explicitly given for hearing and sharing novel information, guaranteeing information-seeking is crucial in SymmToM. Recharge bases ensure that the optimal solution is not for all agents to accumulate in the same spot and quickly share all the information available; and that the information tracking required is more complex than accumulating past events. Conceptually, with recharge bases we introduce an explicit and observable forgetting mechanism. As discussed in Section 3, this allows for perpetual information seeking and requires information staleness estimation. An example of successful recharge base use is shown in Fig. 2b.
5 BASELINE LEARNING ALGORITHMS AND OTHER BOUNDS
To learn a policy for acting in the multi-agent SymmToM environment, it is a good strategy to use a multi-agent reinforcement learning algorithm. We use MADDPG (Lowe et al., 2017), a well-known multi-agent actor-critic framework with centralized planning and decentralized execution, to counter the non-stationary nature of multi-agent settings. In MADDPG, each actor policy receives its observation space as input, and outputs the probability of taking each action.
Notably, actors in MADDPG have no mechanism for remembering past turns. This is a critical issue in SymmToM, as agents cannot remember which pieces they currently know, which ones they shared and to whom, and other witnessed interactions. To mitigate this, it is necessary to add a recurrence mechanism to carry over information from past turns. One option would be to modify the agent policy using a recurrent network like an LSTM, as RMADDPG (Wang et al., 2020) does.
Perfect Information, Heuristic and Lower Bound Models Performance is difficult to interpret without simpler baselines. As a lower bound model we use the original MADDPG, that since it does not have recurrence embedded, should perform worse or equal to any of the modifications described above. We also include an oracle model (MADDPG-Oracle), that does not require theory of mind since it receives the current knowledgeK for all agents in its observation space. The performance of MADDPG-Oracle may not always be achieved, as there could be unobserved communication with multiple situations happening with equal probability. Moreover, as the number of agents and size of the grid increases, current reinforcement learning models may not be able to find an optimal spatial exploration policy; they may also not be capable of inferring the optimal piece of information to communicate in larger settings. In these cases, MADDPG-Oracle may not perform optimally, so we also include a baseline with heuristic agents to compare performance.
Heuristic agents will always move to the center of the board and communicate round-robin all the information pieces they know until they have all the available knowledge. Then, they will move efficiently to their recharge base and come back to the center of the grid, where the process restarts. We must mention that this heuristic is not necessarily the perfect policy, but it will serve as a baseline to note settings where current MARL models fail even with perfect information. Qualitatively, smaller settings have shown to approximately follow a policy like the heuristic just described.
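A rough sketch of this heuristic in Python is shown below; the helper step_towards and all names are ours, and ties in movement are broken arbitrarily rather than optimally.

import numpy as np

def step_towards(src, dst):
    # Move one cell toward dst along the axis with the larger remaining gap.
    d = np.asarray(dst) - np.asarray(src)
    if not d.any():
        return np.array([0, 0])
    axis = int(np.argmax(np.abs(d)))
    step = np.zeros(2, dtype=int)
    step[axis] = int(np.sign(d[axis]))
    return step

def heuristic_action(i, t, pos, know, base, center):
    # If agent i knows all c pieces, walk to its recharge base; otherwise head to the
    # center of the grid and speak the known pieces round-robin.
    c = know.shape[1]
    if know[i].sum() == c:
        return step_towards(pos[i], base), None
    known = np.flatnonzero(know[i])
    piece = int(known[t % len(known)]) if len(known) else None
    return step_towards(pos[i], center), piece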
6 DIRECT MODELING OF SYMMETRIC THEORY OF MIND
In contrast to RMADDPG (Wang et al., 2020), we specifically design algorithms for our environment to maximize performance. Intuitively, our model computes a matrix, K ∈ {0, 1}c×a, that reflects the information pieces known by each agent from the perspective of the agent being modeled: Kij reflects if the agent being modeled believes that agent j knows i. K is updated every turn and used as input of the following turn of the agent, obtaining the desired recurrent behavior. K is also concatenated to the usual observation space, to be processed by a two-layer ReLU MLP and
obtain the probability distributions for speech and movement, as in the original MADDPG. There are several ways to approximate K. It is important to note that each agent can only partially observe communication, and therefore it is impossible to perfectly compute K deterministically.
The current knowledge is comprised of first-hand information (the initial knowledge of every agent, F, publicly available) and second-hand information. Second-hand information may have been heard this turn (S, whose computation will be discussed below) or in previous turns (captured in the K received from the previous turn, noted K^(t−1)). Additionally, knowledge may be forgotten when an agent steps on a base having all the information pieces. To express this, we precompute a vector B ∈ {0, 1}^a that reflects whether each agent is currently on its base; and a vector E ∈ {0, 1}^a that determines if an agent is entitled to use their recharge base: E_j = 1{Σ_i K_ij = c} for all j ∈ {1, . . . , a}.
We are then able to compute K as follows:
K_ij^(t) = (F_ij = 1 or S_ij = 1 or K_ij^(t−1) = 1) and not (B_j and E_j) (2)
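Read literally, Eq. 2 can be written with NumPy broadcasting as in the sketch below (our names); first-hand information wiped by a recharge reappears on the next turn because F is re-supplied as input.

import numpy as np

def update_K(F, S, K_prev, B, E):
    # F, S, K_prev: (c, a) matrices; B, E: (a,) 0/1 vectors (on base / entitled to recharge).
    known = (F > 0) | (S > 0) | (K_prev > 0)    # piece i currently known by agent j
    reset = ((B * E) > 0)[None, :]              # agents that just triggered their base
    return (known & ~reset).astype(np.float32)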
F , K(t−1), and B are given as input, but we have not yet discussed the computation of the secondhand information S. S often cannot be deterministically computed, since our setting is partially observable. We will identify three behaviors and then compute S as the sum of the three:
S = S[0] + S[1] + S[2]
For simplicity, we will assume from now on that we are modeling agent k. S[0] will symbolize the implications of the information spoken by agent k: if agent k speaks a piece of information, they thus know that every agent in its hearing range must have heard it (first order ToM). S[1] will symbolize the implications of information heard by k: this includes updating k’s known information (zeroth order ToM) and the information of every agent that is also in hearing range of the speaker heard by k. S[2] will symbolize the estimation of information pieces communicated between agents that are out of k’s hearing range. Since we assume perfect vision, k will be able to see if two agents are in range of each other, but not hear what they communicate (if they do at all).
S[0] and S[1] can be deterministically computed. To do so, it is key to note that every actor knows the set of actions A ∈ {0, 1}c×a performed by each agent last turn, given that those actions were performed in their hearing range. Moreover, each agent knows which agents are in its range, as they all have perfect vision. We precompute H ∈ {0, 1}a×a to denote if two given agents are in range.
Then, S^[0]_ij = 1 if and only if information piece i was said by k, and agents k and j are in hearing range of each other. More formally,
S^[0]_ij = A_ik · H_kj
S^[1]_ij = 1 if and only if agent k (the actor we are modeling) heard some agent ℓ speaking information piece i, and agent j is also in range of agent ℓ. Note that agent k does not need to be in hearing range of agent j. More precisely,
S^[1]_ij = A_iℓ · H_kℓ · H_ℓj, for some agent ℓ
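In matrix form, the two deterministic terms above can be computed as in this NumPy sketch (our names); the matrix product over ℓ followed by a threshold implements the "for some agent ℓ" condition.

import numpy as np

def deterministic_terms(A, H, k):
    # A: (c, a) pieces uttered last turn as observed by k; H: (a, a) 0/1 in-range matrix.
    S0 = np.outer(A[:, k], H[k, :])              # S[0]_ij = A_ik * H_kj
    heard_by_k = A * H[k, :][None, :]            # utterances of agents within k's range
    S1 = (heard_by_k @ H > 0).astype(float)      # S[1]_ij = 1 if some l with A_il H_kl H_lj = 1
    return S0, S1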
S[2] –the interactions between agents not in hearing range of the agent we are modeling– can be estimated in different ways. A conservative approach would be to not estimate interactions we do not witness (S[2] = 0, which we will call MADDPG-ConservativeEncounter (MADDPG-CE)); and another approach would be to assume that every interaction we do not witness results in sharing a piece of information that will maximize the rewards in that immediate turn. We will call this last approach MADDPG-GreedyEncounter (MADDPG-GE). MADDPG-GE assumes agents play optimally, but does not necessarily know all the known information and that could lead to a wrong prediction. This is particularly true during training, as agents may not behave optimally. The computation of S[2] for MADDPG-GE is as follows.
First, we predict the information piece U_ℓ that agent ℓ uttered. MADDPG-GE predicts U_ℓ will be the piece that the least number of agents in range know, as it will maximize immediate reward:
U_ℓ = argmin_i Σ_j (K_ij and H_jℓ) ∈ {1, . . . , c}
With this prediction, agent j will know information i if at least one agent in its range said it:
S^[2]_ij = 1 if there exists ℓ ≠ k such that U_ℓ = i and H_jℓ = 1 and j ≠ k, else 0
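A loop-based sketch of this greedy estimate is shown below (our names); K is the modeling agent k's current belief matrix and H the in-range matrix.

import numpy as np

def greedy_encounter_S2(K, H, k):
    c, a = K.shape
    # For every agent l, predict it utters the piece known by the fewest agents in its range.
    U = np.array([int(np.argmin((K * H[:, l][None, :]).sum(axis=1))) for l in range(a)])
    S2 = np.zeros((c, a))
    for j in range(a):
        if j == k:
            continue
        for l in range(a):
            if l != k and H[j, l] > 0:
                S2[U[l], j] = 1.0        # j is assumed to hear l's predicted piece
    return S2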
MADDPG-EstimatedEncounter (MADDPG-EE) MADDPG-CE and MADDPG-GE are two paths to information sharing estimation, but in none of them do we estimate the probability of an agent knowing a specific piece of information. In MADDPG-EstimatedEncounter (MADDPG-EE), known information of other agents is not binary, i.e. Kij ∈ [0, 1]. This added flexibility can avoid making predictions of shared information based upon unreliable information.
MADDPG-EE estimates the probability that an agent j uttered each piece of information (U_j ∈ R^c) by providing the current information of all agents in its range to an MLP:
U_j = softmax(f(K_1j, . . . , K_cj, {K_1ℓ, . . . , K_cℓ for all ℓ where H_jℓ = 1})), with f an MLP
Then, the probability of having heard a specific piece of information will be the complement of not having heard it, which in turn means that none of the agents in range said it. More formally,
S^[2]_ij = 1 − ∏_{ℓ : H_jℓ = 1} (1 − U_ℓ)
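Given per-agent utterance probabilities U (one distribution over the c pieces per agent), the complement-of-products rule above can be sketched as follows (our names).

import numpy as np

def estimated_encounter_S2(U, H, k):
    # U: (c, a), U[i, l] = estimated probability that agent l uttered piece i; H: (a, a) in-range.
    c, a = U.shape
    S2 = np.zeros((c, a))
    for j in range(a):
        if j == k:
            continue
        for i in range(c):
            p_unheard = 1.0
            for l in range(a):
                if l != k and H[j, l] > 0:
                    p_unheard *= 1.0 - U[i, l]    # agent l did not say piece i
            S2[i, j] = 1.0 - p_unheard            # probability some in-range agent said i
    return S2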
Since MADDPG-EE requires functions to be differentiable, we use a differentiable approximation of Eq. 2. A pseudocode of MADDPG-EE's implementation can be found in Section A.4. MADDPG-EE solely focuses on first order ToM, and we leave modeling with second order ToM to future work. The structure of the model would be similar but with an order of magnitude more parameters.
7 EXPERIMENTS
7.1 EXPERIMENTAL SETTINGS
In this section, we compare the different algorithms explained in the previous section. The observation space will be constituted of a processed version of the last turn in the episode, to keep the input size controlled. More precisely, the observation space is composed of: the position of all agents, all recharge bases, the current direction each agent is moving towards and what they communicated in the last turn, the presence of a wall in each of the immediate surroundings, and every agent's first-hand information. First-hand information is publicly available in our experiments to moderate the difficulty of the setup, but this constraint could also be removed. This simple setting is still partially observable, since the agents cannot hear interactions outside of their hearing range.
We use the reward as our main evaluation metric. This metric indirectly evaluates ToM capabilities, since information-seeking is at the core of SymmToM. We train through 60000 episodes, and with 7 random seeds to account for high variances in the rewards obtained. Our policies are parametrized by a two-layer ReLU MLP with 64 units per layer, as in the original MADDPG (Lowe et al., 2017). MADDPG-EE’s function f is also a two-layer ReLU MLP with 64 units per layer.
We test two board sizes (w = 6 and w = 12), two numbers of agents (a = 3 and a = 4), and three quantities of information pieces (c = a, c = 2a, c = 3a). The length of each episode is set to 5w. More detail about design decisions can be found in Section A.5.
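For reference, this sweep can be enumerated as in the short snippet below (names are ours; h = 1 as stated in Section A.5).

from itertools import product

# Settings used in the experiments: two grid sizes, two agent counts, three piece counts.
settings = [dict(w=w, a=a, c=m * a, episode_len=5 * w, h=1)
            for w, a, m in product([6, 12], [3, 4], [1, 2, 3])]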
7.2 MAIN RESULTS
As we can observe in Table 1, there is a significant difference in performance between MADDPG-Oracle and MADDPG (MADDPG-Oracle is 123% better on average): this confirms that developing ToM and recurrence is vital to perform successfully in SymmToM. MADDPG-Oracle is often not an upper bound: when c > a, the heuristic performs better (101% on average). This shows that even with perfect information, it can be difficult to learn the optimal policy using MADDPG. Moreover, models with a recurrence mechanism perform significantly better than MADDPG (61% better on average), also showing that remembering past information gives a notable advantage. As expected, having recurrent models tailored to our problem resulted in better performance than a general LSTM recurrence (RMADDPG). The performance of the best of the tailored models (MADDPG-CE, MADDPG-GE, MADDPG-EE) was 42% better on average than plain RMADDPG. LSTM was able to surpass the best of the tailored models only for a = 3, w = 12, c = 3a.
Increasing c generally decreases global rewards for learned agents (on average, rewards for c = 2a are 75.54% of those for c = a, and rewards for c = 3a are 75.66% of those for c = a).
This suggests that probabilistic decisions are harder to learn, or impossible to successfully navigate when several events are equally likely. MADDPG-EE did not show improvements over the other learned agents, and in some cases decreased its performance more heavily than other learned agents (e.g. w = 6, c = 3a). MADDPG-EE uses an MLP in its definition of S[2], which gives more flexibility but also implies a more complex function to learn. We leave to future work to explore other probabilistic agents, but the significant difference in performance between all of the learned models and the highest performing ones (MADDPG-Oracle and the heuristic) shows there is ample space for improvement in this task, and hence proves SymmToM to be a simple yet unsolved benchmark.
Increasing a results in a 10% reduction of performance on average for learned models. Nonetheless, the heuristic improved its rewards by an average of 46%, given the larger opportunities for rewards when including an additional listener. Overall, this implies that increasing a also makes the setup significantly more difficult. Finally, increasing w did not have a conclusive result: for a = 4 it consistently decreased performance by ∼17%, but for a = 3 we saw an improvement of 16% and 61% for c = a and c = 2a respectively.
In sum, modifying c and a provides an easy way of making a setting more difficult without introducing additional rules.
7.3 DISCUSSION
In the past section we analyzed results using the metric of episode rewards. Although the rewards in SymmToM are designed to correlate with information seeking and knowledge state of the agent themselves and others, they do not show explicitly if agents are exhibiting theory of mind. To do so more directly, we develop two categories of possible analyses: scenarios specifically designed to test theory of mind, and post-hoc analyses of episodes.
A classic example of a scenario specifically designed to test ToM behavior is the Sally-Anne task (Wimmer & Perner, 1983). This false belief task, originally designed for children, aims to test if a passive observer can answer questions about the beliefs of another person, in situations where that belief may not match reality. If we were to use it for machine ToM, we could repeat the experiment and ask an agent to predict the position of an object varying the underlying conditions. This test is feasible because there is only one agent with freedom of action, which ensures that desired conditions are met every time. Testing becomes unfeasible when giving multiple agents freedom of action, as constraints planned in test design may be broken by a collective drift from the strategy thought by the designer. Testing becomes easier if we allow for controlling all agents but one, as shown in Fig. 3. Other tests besides the ones shown may be designed. In particular, in Fig. 3d we show an example of probabilistic ToM where two communicative events are equally likely, but one could modify this scenario to have different probabilities and test the expected value of the turns until red successfully shares an information piece. One could also design retroactive deduction tests: for example, in Fig. 3d if red communicates and receives no reward, it can deduce that green had
received that information from blue. If there had been another agent (let’s say, a yellow agent) in range of blue when it spoke to green, the red agent could also update its knowledge about yellow. Results for the tests proposed in Figure 3 are detailed in Appendix A.1.
Post-hoc analysis also has its challenges in multi-agent settings, even in the most direct cases. Thanks to our reward shaping, using recharge bases is always the optimal move when an agent has all the information available: an agent will have a reward of (n−1)c for using the base, whereas it can only gain up to n − 1 + c − 1 per turn if it decides not to use it. Even in this case, small delays in using the base may occur, for example if the agent can gather additional rewards on its path to the base. More generally, having multiple agents makes specific behaviors attributable to any of the several events happening at once, or a combination of them.
Even though it may be difficult to establish causality when observing single episodes, we developed metrics that comparatively show which models are using specific features of the environment better than others. Examples of metrics are unsuccessful recharge base usage count; number of times an agent shares an information piece everyone in its range already knows when having better alternatives; number of times an agent moves away from every agent when not having all the information pieces available; among others. See Appendix A.2 for detailed description and results. Reward can also be understood as a post-hoc metric with a more indirect interpretation.
Post-hoc analyses of single episodes can also be blurred by emergent communication. Because agents were trained together, they may develop special meaning assignment to particular physical movements or messages. Even though qualitatively this does not seem to be the case for the models presented in the paper, tests should also account for future developments. This also implies that one should not overinterpret small differences in the metrics described in the paragraph above.
8 CONCLUSIONS AND FUTURE WORK
We defined a framework to analyze machine theory of mind in a multi-agent symmetric setting, a more realistic setup than the tasks currently used in the community. Based on the four properties needed for symmetric theory of mind to arise, we provided a simplified setup on which to test the problem, and we showed we can easily increase difficulty by growing the number of agents or communication pieces. Our main goal in this work was not to solve symmetric theory of mind, but rather to give a starting point to explore more complex models in this area. We showed that even with this minimal set of rules, SymmToM proves algorithmically difficult for current multi-agent deep reinforcement learning models, even when tailoring them to our specific task. We leave to
future work to develop models that handle second-order theory of mind and beyond, and models that periodically reevaluate past turns to make new deductions with information gained a posteriori (i.e., models that pass retroactive deduction tests). Another interesting direction would be to replace the information pieces with constrained natural language: communication sharing in our task is binary, whereas in language there is flexibility to communicate different subsets of a knowledge base using a single sentence. We will make our codebase public upon publication, which also includes additional observation space restrictions to increase difficulty.
ETHICS AND REPRODUCIBILITY STATEMENT
Theory of mind research at its core deals with understanding the mental states of other individuals. In the present work we focused on collaborative machine theory of mind, which entails interactions only between artificial agents, and only in scenarios where every party involved has the same incentive structure. This design decision is intentional. Other approaches to theory of mind research could include scenarios with human-agent interaction, which could potentially lead to agents learning to model human players’ mental states. This is not concerning per se, but in scenarios where players do not have the same incentive structure, it could lead to agents learning to deceive other players (potentially human players). The state of the art in machine theory of mind is still far away from these capabilities, but we believe that experiment design choices should always take this matter into account.
Regarding reproducibility, we will make all code public upon acceptance, including the environment and the models’ code. The exact set of parameters used for training will also be shared. Multi-agent reinforcement learning models do have high variance, so models should be run several times to see similar confidence intervals as the ones shown in Fig. 5 and Fig. 6.
A APPENDIX
A.1 AD-HOC THEORY OF MIND TESTS
We test on the four examples shown in Figure 3, adapting the examples to fit one of the grid sizes we already experimented on. For the tests described in Figure 3a and Figure 3b, we test two different grid sizes: w = 6 and w = 12. For the tests described in Figure 3c and Figure 3d we only test w = 12 and w = 6 respectively. Image depictions of the exact test configurations can be seen in Figure 4.
We measure three metrics: average success rate, average failure rate, and ratio of average turns to succeed vs. optimum (RATSO). Note that Average Success Rate and Average Failure Rate do not necessarily sum to 1 since these two metrics only include trials where the agent reached one of the two proposed outcomes. If, for example, the agent never moved from the starting point, the trial would not be counted positively towards Avg. Success Rate or Avg. Failure Rate. In addition, ratio of average turns to succeed vs. optimum (RATSO) is the ratio between the average turns it took to succeed in successful trials, and the optimum number of turns to succeed in a specific trial.
For the tests in Figure 4a, 4b, 4d, and 4e, the trial ends when the red agent reaches the hearing range of one of the two possible target agents. The test depicted in Figure 4f is a pass/fail test: if red moves suboptimally at any point before meeting blue, the trial is declared as failed. This makes it a particularly difficult test to pass at random. Because of the nature of this second order ToM test, we only report the average success rate. Finally, for the probabilistic ToM test we want to measure how fast red can communicate all the information it has to green. The optimal number of turns is 1.5 (as described in Figure 3), and because of the nature of this test we will only report RATSO.
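These three metrics can be computed as in the sketch below (helper names are ours); trials that reach neither outcome contribute to neither rate, which is why the two rates need not sum to 1.

import numpy as np

def test_metrics(outcomes, turns_to_succeed, optimal_turns):
    # outcomes: one of "success", "failure", "none" per trial; turns_to_succeed: turns taken
    # in each successful trial; optimal_turns: the optimum for this test configuration.
    n = len(outcomes)
    avg_success = outcomes.count("success") / n
    avg_failure = outcomes.count("failure") / n
    ratso = float(np.mean(turns_to_succeed)) / optimal_turns if turns_to_succeed else float("nan")
    return avg_success, avg_failure, ratso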
All results can be found in Table 2. As expected, a larger average success rate correlates with higher reward models (MADDPG-CE and MADDPG-GE are the best models), suggesting that the
reward is a valuable overall metric. The low average success rates across all tests show there is significant room to improve in this benchmark. Success rate drops sharply when increasing the grid size, suggesting larger grids impose more difficult training settings. In this analysis, we used models that were trained specifically for each parameter combination.
Results for Oracle were omitted since some tests assumed no knowledge about communication (e.g. even though agents in Figure 4b do not communicate during the test, the test was designed to test if the red agent assumed them to be).
As we emphasized in the main text, many more tests can be proposed. The code base we will release allows for easily adding new tests to the suite.
A.2 POST-HOC ANALYSES
We developed several post hoc metrics to analyze specific aspects of our models.
• Unsuccessful recharge base usage rate: Average times per episode an agent steps on its recharge base without having all the information available (i.e. wrong usage of the recharge base). Note that an agent may step on its base just because it is on the shortest path to another cell. Therefore, a perfect theory of mind agent will likely not have zero on this score; but generally, lower is better. See results in Table 3.
• Wrong communication piece selection count: Average times per episode an agent attempted to say an information they currently do not possess. In these cases, no communication happens. Lower is better. See results in Table 4.
• Useless communication piece selection count: Average times per episode an agent communicated an information piece that everyone in its hearing range already knew, when having a piece of information that at least one agent in its range did not know. Lower is better. See results in Table 5.
• Useless movement: Average times per episode an agent moves away from every agent that does not have the exact same information it has, given that the agent does not currently possess all the information available. This means that the agent is moving away from any possible valuable interaction. Lower is better. See results in Table 6.
All metrics are normalized by number of agents (i.e., they show the score for a single agent). This allows for better comparison between a = 3 and a = 4 settings.
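As one concrete example of the metrics above, the "useless communication piece selection" check for a single turn could look as follows (our names).

import numpy as np

def useless_utterance(i, pos, know, piece, h):
    # True if every agent in i's hearing range already knows `piece`, while i holds some
    # other piece that at least one in-range agent is still missing.
    a, c = know.shape
    in_range = [j for j in range(a) if j != i and np.max(np.abs(pos[i] - pos[j])) <= h]
    if not in_range:
        return False
    all_know_piece = all(know[j, piece] == 1 for j in in_range)
    has_better_option = any(know[i, p] == 1 and any(know[j, p] == 0 for j in in_range)
                            for p in range(c))
    return all_know_piece and has_better_option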
RMADDPG had the worst scores for unsuccessful recharge base use rate and useless communication piece selection count. RMADDPG scored 43% more than Oracle for unsuccessful base usage on average, and 63% more than Oracle on average for usage of a useless communication piece. The best tailored models (MADDPG-CE and MADDPG-GE) performed similarly to Oracle on average for these two metrics. In contrast, MADDPG-CE and MADDPG-GE performed significantly worse than Oracle for the wrong communication piece selection count (48% and 56% more than Oracle on average). This suggests that all models may be making wrong decisions, but RMADDPG is biased towards communicating redundant information whereas MADDPG-CE and MADDPG-EE tend towards not communicating at all (the true effect of trying to communicate something they are not allowed to). Further analysis is needed to truly understand if these apparently wrong behaviors were done in turns where the agent had all the information available to make a better move, or if this is their default when they believe they have nothing of value to communicate. A priori, RMADDPG's bias seems more principled, but it still showed worse performance overall.
No learned model performed particularly better in the useless movement metric (average differences in performance were less than 10%), suggesting that they perform pointless movements in similar frequencies. It is important not to overinterpret small differences in these metrics. For example, a useless movement may be a signal of emergent communication. Furthermore, an agent may communicate something suboptimal for its immediate reward but this move may not affect its expected reward for the trial.
A.3 TRAINING CURVES FOR THREE RANDOM SEEDS COMBINED
A.4 PSEUDOCODE OF MADDPG-EE
Algorithm 1 Actor implementation of MADDPG-EE, approximating K to make it differentiable. Input: observation, A ∈ {0, 1}c×a, agent idx ∈ 0, . . . a− 1, F ∈ {0, 1}c×a, K ∈ {0, 1}c×a, B ∈ {0, 1}a, H ∈ {0, 1}a×a
Make agents not be in their own hearing range, to avoid talking to themselves from the previous turn. This would be problematic when using recharge bases.
H = H − 1_{a×a}
Compute S[0], all the heard information spoken by agent idx: S[0] = copy_a_times(A[:, agent idx]) ⊙ copy_c_times(H[agent idx, :])
Compute S[1], all heard information by agent idx, spoken by all agents: S[1] = (A ⊙ copy_c_times(H[agent idx, :])) · H
Compute S[2], an estimation of information pieces communicated between agents that were out of agent idx’s hearing range:
U_j = softmax(f1(K_1j, . . . , K_cj, {K_1ℓ, . . . , K_cℓ for all ℓ where H_jℓ = 1})), with f1 an MLP
S^[2]_ij = 1 − ∏_{ℓ : H_jℓ = 1} (1 − U_ℓ), for all i ∈ {0, . . . , c − 1} and j ∈ {0, . . . , a − 1}
S = S[0] + S[1] + S[2]
E_i = 1{sum(K[:, i]) = c} for all i ∈ {0, . . . , a − 1}, so that E ∈ {0, 1}^a
K = step(F · 100 + K + S − 2 · copy_c_times(B ⊙ E))
return softmax(f2([observation, K])), K, where f2 is an MLP
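A runnable NumPy reading of Algorithm 1 is sketched below. We read 1_{a×a} as the identity (so agents are removed from their own hearing range), treat step(·) as a hard threshold, and let f1 and f2 stand in for the two MLPs with a fixed-size input encoding; all of these are our interpretations rather than the exact implementation.

import numpy as np

def softmax(x):
    z = np.exp(x - np.max(x))
    return z / z.sum()

def ee_actor_forward(observation, A, k, F, K, B, H, f1, f2):
    # A, F, K: (c, a); B: (a,) on-base indicator; H: (a, a) in-range matrix with 1s on the diagonal.
    c, a = K.shape
    H = H - np.eye(a)                                   # remove self from the hearing range
    S0 = np.outer(A[:, k], H[k, :])                     # what k said, heard by agents in k's range
    S1 = (A * H[k, :][None, :]) @ H                     # what k heard, propagated to agents in range
    # Probabilistic estimate of communication k could not hear (MADDPG-EE).
    U = np.stack([softmax(f1(np.concatenate([K[:, j], (K * H[:, j][None, :]).ravel()])))
                  for j in range(a)], axis=1)           # (c, a) utterance probabilities
    terms = np.where(H[None, :, :] > 0, 1.0 - U[:, None, :], 1.0)
    S2 = 1.0 - np.prod(terms, axis=2)                   # (c, a): probability each piece was heard
    S = S0 + S1 + S2
    E = (K.sum(axis=0) == c).astype(float)              # agents entitled to use their base
    # The -2 recharge reset erases learned information, while F * 100 preserves first-hand pieces.
    K_new = (F * 100 + K + S - 2 * np.tile(B * E, (c, 1)) > 0).astype(float)
    return softmax(f2(np.concatenate([observation, K_new.ravel()]))), K_new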
A.5 EXPERIMENT DESIGN DECISIONS AND CONSIDERATIONS ABOUT RESULT PRESENTATION
All our experiments are with h = 1: only the immediate neighbors of an agent will hear what they communicate.
We tested with a = 3 and a = 4 and not larger numbers of agents, as the training time increases quadratically with a; also, the intrinsic difficulty of larger setup –even with perfect information– would possibly degrade performance to the point of making it impossible to compare models.
Running experiments with the same number of turns for every setting would imply that agents can move less in combinations with larger values of w, hence the need to make it proportional to the size of the grid. Since the duration of the experiment is directly proportional to the length of the episodes, we settled on a small multiplier. 5w allows agents to move to each edge of the grid and back to the center. A similar decision is required when choosing c: having a constant number of information pieces when increasing the number of agents would make the problem easier, as each agent would have fewer options of first-hand information pieces.
We decided to evaluate by running additional episodes over the best checkpoints of each model because there was high variance for some runs, and drops in performance after achieving the highest rewards. Those results are the base of the discussion and can be seen in Table 1. Still, we share the training curves so that the reader can observe these behaviors in Fig. 5 and Fig. 6.
We used the same hyperparameters as the ones used in MADDPG, except with a reduced learning rate and tau (lr = 0.001 and τ = 0.005). We used the same parameters for all our experiments.
1. What is the main contribution of the paper in the field of multi-agent reinforcement learning?
2. What are the strengths and weaknesses of the proposed approach in extending MADDPG to model agents' knowledge explicitly?
3. How does the reviewer assess the novelty and significance of the SymmToM environment and task?
4. What are some concerns or suggestions regarding the literature review and comparisons with other works in the area of ToM reasoning and multi-agent cooperation?
5. Are there any questions or suggestions regarding the evaluation methodology and results presented in the paper?
Summary Of The Paper
This paper presents a multi-agent environment and task, termed SymmToM, for analyzing machine theory of mind emerged from multi-agent RL training. In SymmToM, agents can see and act in a 2D grid world as well as share information about the world state through communication. Crucially, all agents have the same physical characteristics, hence, "symmetric." In this task, agents gain rewards by hearing or sharing novel knowledge of the 2D grid world, so that the optimal policy would be cooperatively seeking and sharing information among the agents. The proposed approach extends MADDPG to explicitly model the knowledge of each agent and how it would be affected by the shared information. The experimental results show that this extension improves over the generic RNN based extension to MADDPG (i.e., RMADDPG), but is still not performing as well as the simple heuristics-based baseline.
Review
=====Strengths=====
Testing RL agents or any kinds of machine agents' ToM ability is an important and yet understudied problem. When we evaluate the success of multi-agent policies, it is important to test the true social intelligence that comes with the policies. Naturally, that includes ToM reasoning. So this is certainly a welcomed contribution in the area of multi-agent RL in my opinion.
The explicit modeling of agents' knowledge is an interesting extension to MADDPG. It makes sense to use domain knowledge in some cases to improve the RL training if there is a discussion on the limit or possible improvement.
The two kinds of tests proposed in the discussion are very interesting and are to some extent the most important aspect of this study -- evaluating and analyzing the true ToM ability of agents trained in SymmToM that goes beyond reporting a single reward value.
=====Weaknesses=====
The literature is insufficient. There has been a rich history of computational modeling of ToM (e.g., [1,2,3,4]). There have also been multiagent communication and cooperation tasks / environments proposed before (e.g., the particle environment proposed in the MADDPG work, and [5,6]). There should be a more thorough discussion and comparison.
It is also unclear to me why it is necessary to have a symmetric setting. This setting is emphasized in the title and in the main text, but I have not seen the motivation for it.
I do not see a systematic and quantitative evaluation of the two kinds of tests proposed in the discussion section. Am I missing something here? Is it more of a proposal than a completed evaluation?
There should be a discussion on how general and scalable the explicit modeling of knowledge is as an extension to MADDPG or similar multi-agent RL approaches.
References:
[1] T. D. Ullman, C. L. Baker, O. Macindoe, O. Evans, N. D. Goodman and J. B. Tenenbaum (2010), Help or hinder: Bayesian models of social goal inference. In NeurIPS.
[2] Netanyahu, A., Shu, T., Katz, B., Barbu, A., & Tenenbaum, J. B. (2021). PHASE: PHysically-grounded Abstract Social Events for Machine Social Perception. In AAAI.
[3] Zhu, H., Neubig, G., & Bisk, Y. (2021). Few-shot language coordination by modeling theory of mind. In ICML.
[4] Shu, T., Bhandwaldar, A., Gan, C., Smith, K. A., Liu, S., Gutfreund, D., ... & Ullman, T. D. (2021). AGENT: A Benchmark for Core Psychological Reasoning. In ICML.
[5] Das, A., Gervet, T., Romoff, J., Batra, D., Parikh, D., Rabbat, M., & Pineau, J. (2019). Tarmac: Targeted multi-agent communication. In ICML.
[6] Jain, U., Weihs, L., Kolve, E., Rastegari, M., Lazebnik, S., Farhadi, A., ... & Kembhavi, A. (2019). Two body problem: Collaborative visual task completion. In CVPR.
ICLR
Title
Symmetric Machine Theory of Mind
Abstract
Theory of mind (ToM), the ability to understand others’ thoughts and desires, is a cornerstone of human intelligence. Because of this, a number of previous works have attempted to measure the ability of machines to develop a theory of mind, with one agent attempting to understand another’s internal “mental state”. However, ToM agents are often tested as passive observers or in tasks with specific predefined roles, such as speaker-listener scenarios. In this work, we propose to model machine theory of mind in a more flexible and symmetric scenario; a multiagent environment SymmToM where all agents can speak, listen, see other agents, and move freely through a grid world. An effective strategy to solve SymmToM requires developing theory of mind to maximize each agent’s rewards. We show that multi-agent deep reinforcement learning models that model the mental states of other agents achieve significant performance improvements over agents with no such ToM model. At the same time, our best agents fail to achieve performance comparable to agents with access to the gold-standard mental state of other agents, demonstrating that the modeling of theory of mind in multi-agent scenarios is very much an open challenge.
1 INTRODUCTION
Human communication is shaped by the desire to efficiently cooperate and achieve communicative goals (Tomasello, 2009). Children learn from a young age that the others they interact with have independent mental states, and therefore communicating is necessary to obtain information from or shape the intentions of those they interact with. Remembering and reasoning over others’ mental states ensures efficient communication by avoiding having to repeat information, and in cases where cooperation is involved contributes to achieving a common goal with minimal effort.
Because of this, there is growing interest in developing agents that can exhibit this kind of behavior, referred to as Theory of Mind (ToM) by developmental psychologists (Premack & Woodruff, 1978).1 Previous work on agents imbued with some capability of ToM has focused mainly on two types of tasks. The former are tasks where the agent is a passive observer of a scene that has to predict the future by reasoning over others’ mental states. These tasks may involve natural language (Nematzadeh et al., 2018) or be purely spatial (Gandhi et al., 2021; Rabinowitz et al., 2018; Baker et al., 2011). The latter are tasks where the ToM agent has a specific role, such as “the speaker” in speaker-listener scenarios (Zhu et al., 2021).
In contrast, human cooperation and communication is very often multi-party, and rarely assumes that people have pre-fixed roles. Moreover, human interlocutors are seldom passive observers of a scene but instead proactively interact with their environment. Since previous domains limited us to research questions where most parties involved did not have an active role, we developed a more flexible environment where we can now study what happens when all participants must act as both speaker and listener. In this paper, we present SymmToM, a fully symmetric multi-agent environment where all agents can see, hear, speak, and move, and are active players of a simple information-gathering game. To solve SymmToM, agents need to exhibit different levels of ToM, as well as efficiently communicate through a simple channel with a fixed set of symbols.
1In the present work we focus solely on reasoning over mental states. Other aspects of ToM include understanding preferences, goals, and desires of others. Multi-agent scenarios for inferring agent’s goals have been studied (Ullman et al., 2009), and passive-observer benchmarks (Gandhi et al., 2021; Shu et al., 2021; Netanyahu* et al., 2021) have been proposed for evaluating understanding of agent’s goals and preferences.
SymmToM is a partially observable setting for all agents: even when agents have full vision, hearing may be limited. This also differentiates SymmToM from prior work, as modeling may require probabilistic theory of mind. In other words, agents need to not only remember and infer other agents’ knowledge based on what they saw, but also estimate the probability that certain events happened. This estimation may be performed by assuming other agents’ optimal behavior and processing the partial information available. Despite its simplicity, SymmToM fulfills the properties required for symmetric ToM to arise, which will be discussed in the following section.
We find that SymmToM cannot be completely solved either by using well-known multi-agent deep reinforcement learning (RL) models, or by tailoring those models to our task. We show that even maintaining the simple rules of the environment, modifying its parameters results in much more difficult challenges, even for models where we artificially introduce perfect information. We discuss examples where different levels of theory of mind are required to solve the task, and possible metrics.
2 THEORY-OF-MIND AGENTS
A belief Theory-of-Mind agent can be defined as a modification of the standard multi-agent RL paradigm, where the agents’ policies are conditioned on their beliefs about others. Formally, we define a reinforcement learning problem M as a tuple of a state space S, action space A, state transition probability function T ∈ S × A → R, and reward function R ∈ S × A → R, i.e.M := 〈S,A, T,R〉. In this setting, an agent learns a (possibly probabilistic) policy π : S → A that maps from states to actions, with the goal of maximizing reward.
In a multi-agent RL setting each agent can potentially have its own state space, action space, transition probabilities, and reward function, so we can define an instance of M_i = 〈S_i, A_i, T_i, R_i〉 for each agent i. For convenience, we can also define a joint state space S = ⋃_i S_i that describes the entire world in which all agents are interacting. Importantly, in this setting each agent will have its own view of the entirety of the world, described by a conditional observation function ω_i : S → Ω_i that maps from the state of the entire environment to only the information observable by agent i.
As elaborated above, ToM is the ability to know (and act upon) the knowledge that an agent has. Agents with no ToM will follow a policy that depends only on their current (potentially partial or noisy) observation of their environment: π_i(a_{i,t} | ω_i(s_t)). Agents with zeroth order ToM can reason over their own knowledge. These agents will be stateful, π_i(· | ω_i(s_t), h_t^{(i)}), where h_t^{(i)} is i’s hidden state. Hidden states are always accessible to their owner, i.e. i has access to h_t^{(i)}.
Agents with capabilities of reasoning over other agents’ mental states will need to estimate h_t^{(j)} for j ≠ i. We will denote the estimation that i does of j’s mental state at time t as ĥ_t^{(i,j)}:
π_i(· | ω_i(s_t), h_t^{(i)}, ĥ_t^{(i,1)}, . . . , ĥ_t^{(i,i−1)}, ĥ_t^{(i,i+1)}, . . . , ĥ_t^{(i,n)})
How do we estimate ĥ_t^{(i,j)}? As a function of i’s (the predicting agent’s) hidden state at t−1, i’s observation at t−1, and i’s predictions of the hidden states of every agent in the previous turn:
ĥ_t^{(i,j)} = f(h_{t−1}^{(i)}, ω_i(s_{t−1}), ĥ_{t−1}^{(i,1)}, . . . , ĥ_{t−1}^{(i,i−1)}, ĥ_{t−1}^{(i,i+1)}, . . . , ĥ_{t−1}^{(i,n)})
i’s prediction of other agents’ observations at t−1 is also crucial, but not explicitly mentioned since it can be computed using ω_i(s_{t−1}). For the initial turn, ĥ_0^{(i,j)} may be initialized differently depending on the problem: if initial knowledge is public, ĥ_0^{(i,j)} is trivial; if not, ĥ_0^{(i,j)} may be estimated.
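To make the formalism concrete, here is a minimal Python sketch of such a first-order ToM agent; the class name, the belief-update function f, and the policy callable are illustrative stand-ins rather than the implementation used in this paper.

import numpy as np

class FirstOrderToMAgent:
    """Sketch of agent i: keeps its own hidden state h_t^(i) and an estimate
    h_hat_t^(i,j) of every other agent j's hidden state."""
    def __init__(self, i, n_agents, hidden_dim, f, policy):
        self.i = i
        self.h = np.zeros(hidden_dim)                     # own hidden state
        self.h_hat = {j: np.zeros(hidden_dim)             # belief about agent j
                      for j in range(n_agents) if j != i}
        self.f = f                                        # belief-update function (stand-in)
        self.policy = policy                              # (obs, h, beliefs) -> (action, new h)

    def step(self, obs_prev, obs_now):
        prev_beliefs = dict(self.h_hat)
        # Update each estimate from last turn's own state, observation and beliefs.
        for j in self.h_hat:
            self.h_hat[j] = self.f(self.h, obs_prev, prev_beliefs, j)
        # Act conditioned on the current observation, own state and all belief estimates.
        action, self.h = self.policy(obs_now, self.h, self.h_hat)
        return action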
3 SYMMETRIC THEORY-OF-MIND
We define symmetric theory of mind environments as settings where theory of mind is required to perform a task successfully, and all agents have the same abilities. There are at least four defining characteristics required for symmetric ToM to arise:
Symmetric action space. In symmetric ToM all agents are required to have the same action space (in contrast to, for example, ToM tasks in speaker-listener settings). Concretely, A_i = A_j ≠ ∅ for all i, j.
Imperfect information. In perfect information scenarios all knowledge is public, making it impossible to have agents with different mental states. In ToM tasks in general, there could be a subset of agents with perfect information: one example would be a passive observer that needs to predict future behavior. In symmetric ToM, since all agents have the same abilities and roles, all agents must have imperfect information. More precisely, ω_i must not be the identity for any agent i.
Observation of others. Agents must have at least partial information of another agent to estimate its mental state. In contrast to passive-observer settings, in symmetric ToM every agent must be able to partially observe all others. More precisely, ω_i must observe at least partial information about s_t^{(j)} (the subset of s_t that refers to agent j), although we do not require s_t^{(j)} ≠ ∅ in every single turn. Moreover, if communication is allowed, it is desirable to partially observe or infer interactions between two or more agents to develop second order ToM (i.e. predicting what an agent thinks about what another agent is thinking) or higher.
Information-seeking behavior. It should be relevant for successfully performing the task to gather as much information as possible, and this information-gathering should involve some level of reasoning over other agents’ knowledge. This is true for first-order ToM tasks in general, and could be formalized as π* ≠ π_i for any zeroth-order ToM policy π_i(· | ω_i(s_t), h_t^{(i)}). Furthermore, it would be desirable to design a task with perpetual information-seeking behavior, since it would ensure that all agents have an incentive to play efficiently even in long episodes. If one wants to design a task with perpetual information-seeking and finite knowledge, information must be forgotten eventually. A forgetting mechanism could be implemented as an explicit loss of knowledge under specific conditions, or by making remembrances less reliable or noisy. Moreover, this introduces the concept of information staleness. Since information is not cumulative and the environment is only partially observable, agents will need to estimate whether what they knew to be true still holds in the present.
4 THE SYMMTOM ENVIRONMENT
SymmToM is an environment where a agents are placed in a w×w grid world, and attempt to maximize their reward by gathering all the information available in the environment. There are c available information pieces, which each agent may or may not know initially. Information pieces known at the start of an episode are referred to as first-hand information. Each turn, agents may move through the grid to one of the four neighboring cells, and may speak exactly one of their currently known information pieces. More precisely, the action space of agent j is defined as follows:
Aj = {left, right, up, down, no movement} × {1, . . . , c} (1)
When an agent utters an information piece, it is heard by every agent in its hearing range (an h×h grid centered on each agent, with h < 2w−1). The agents who heard the utterance will be able to share this newly-learned information with others in following turns. We refer to this as second-hand information, since it is learned, as opposed to first-hand information, which is given at the start of each episode. The state space is comprised of the position of the agents and their current knowledge:
S = {(p_i, k_i) for i ∈ {1, . . . , a}}, where p_i ∈ {1, . . . , w} × {1, . . . , w} and k_i ∈ {0, 1}^c
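For concreteness, one possible encoding of this state and action space in Python/NumPy follows; the array layouts, example sizes, and helper names are assumptions of this sketch, not the paper’s implementation.

import numpy as np

a, w, c = 4, 6, 8                               # agents, grid width, information pieces (examples)
MOVES = ["left", "right", "up", "down", "no movement"]

rng = np.random.default_rng(0)
positions = rng.integers(0, w, size=(a, 2))     # p_i in {0,...,w-1}^2
knowledge = rng.integers(0, 2, size=(a, c))     # k_i in {0,1}^c

def sample_action(j):
    # An action is a (direction, piece) pair as in Eq. 1; the spoken piece must be known to j.
    known = np.flatnonzero(knowledge[j])
    piece = int(rng.choice(known)) if known.size else None
    return rng.choice(MOVES), piece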
Each agent aims to maximize their individual reward R_i via information seeking and sharing. Rewards are earned by hearing a new piece of information, giving someone else a new piece of information, or correctly using recharge bases. Recharge bases are special cells where agents can reset their knowledge in exchange for a large reward (e.g. (a − 1)c times the reward for listening to or sharing new information). Each agent has its own stationary recharge base during an episode. To trigger a base, an agent must step into its designated base having acquired all the available pieces of information, causing the agent to lose all the second-hand information it learned. Recharge bases guarantee that there is always reward to seek in this environment. Concretely, if s = {(p_i, k_i), for i ∈ {1, . . . , a}} and a_i = (a_i^dir, a_i^comm), we can define the reward as the sum of the reward for hearing new information, giving new information, and using the recharge base:
R_i(s, a_i) = Σ_{j ≠ i} 1{||p_i − p_j||_∞ ≤ h and k_{i, a_j^comm} = 0} + Σ_{j ≠ i} 1{||p_i − p_j||_∞ ≤ h and k_{j, a_i^comm} = 0}
+ (a − 1) · c · 1{p_i = base_i and k_i = (1, 1, . . . , 1)}
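A NumPy sketch of this per-agent reward, assuming a unit reward per hearing/sharing event and the same ℓ∞ range test as above; variable names are illustrative.

import numpy as np

def reward(i, positions, knowledge, comm, bases, a, c, h):
    """comm[j]: piece spoken by agent j this turn; bases[i]: agent i's recharge-base cell."""
    in_range = lambda j: np.max(np.abs(positions[i] - positions[j])) <= h
    r = 0
    for j in range(a):
        if j == i or not in_range(j):
            continue
        r += int(knowledge[i, comm[j]] == 0)    # i hears a piece it did not know
        r += int(knowledge[j, comm[i]] == 0)    # i tells j a piece j did not know
    if np.array_equal(positions[i], bases[i]) and knowledge[i].all():
        r += (a - 1) * c                         # successful recharge-base use
    return r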
A non-ToM agent can have only limited success in this environment. Without reasoning about its own knowledge (i.e. without zeroth order ToM), it does not know when to use a recharge base. Moreover, without knowledge about other agents’ knowledge (i.e. without first order ToM) it is not possible to know which agents possess the information pieces it is lacking. Even if it accidentally hears information, a non-first-order ToM agent cannot efficiently decide what to utter in response to maximize its reward. Higher order ToM is also often needed in SymmToM, as we will discuss further in Section 7.3.
Even though we only discussed a collaborative task for SymmToM, it can easily be extended for competitive tasks. Moreover, all our models are also designed to work under competitive settings. SymmToM satisfies the desiderata we laid out in the previous section, as we will detail below:
Symmetric action space. As defined in Eq. 1, A_i = A_j for all i, j. Only a subset may be available at a time since agents cannot step outside the grid, speak a piece they have not heard, or move if they would collide with another agent in the same cell, but they all share the same action space.
Imperfect information. Messages sent by agents outside of the hearing range will not be heard. For example, in Fig. 2a green sends a message but it is not heard by anyone, since it is outside of red’s and blue’s range. Hearing ranges are guaranteed not to cover the whole grid, since h < 2w−1.
Observation of others. Agents have perfect vision of the grid, even if they cannot hear what was said outside of their hearing range. Hence, an agent may see that two agents were in range of each
other, and thus probably interacted, but not hear what was communicated. An example of this can be seen in Fig. 2a, where green observes blue and red interacting without hearing what was uttered.
The uncertainty in the observation also differentiates SymmToM from prior work: to solve the task perfectly, an agent needs to assess the probability that other agents outside its hearing range shared a specific piece of information to avoid repetition. This estimation may be performed using the knowledge of what each agent knows (first order ToM), the perceived knowledge of each of the agents in the interaction (second order ToM), as well as higher order ToM.
Information-seeking behavior. Rewards are explicitly given for hearing and sharing novel information, guaranteeing that information-seeking is crucial in SymmToM. Recharge bases ensure that the optimal solution is not for all agents to accumulate in the same spot and quickly share all the information available; and that the information tracking required is more complex than accumulating past events. Conceptually, with recharge bases we introduce an explicit and observable forgetting mechanism. As discussed in Section 3, this allows for perpetual information seeking and requires information staleness estimation. An example of successful recharge base use is shown in Fig. 2b.
5 BASELINE LEARNING ALGORITHMS AND OTHER BOUNDS
To learn a policy for acting in the multi-agent SymmToM environment, it is a good strategy to use a multi-agent reinforcement learning algorithm. We use MADDPG (Lowe et al., 2017), a well-known multi-agent actor-critic framework with centralized training and decentralized execution, to counter the non-stationary nature of multi-agent settings. In MADDPG, each actor policy receives its observation space as input, and outputs the probability of taking each action.
Notably, actors in MADDPG have no mechanism for remembering past turns. This is a critical issue in SymmToM, as agents cannot remember which pieces they currently know, which ones they shared and to whom, and other witnessed interactions. To mitigate this, it is necessary to add a recurrence mechanism to carry over information from past turns. One option would be to modify the agent policy using a recurrent network like an LSTM, as RMADDPG (Wang et al., 2020) does.
Perfect Information, Heuristic and Lower Bound Models Performance is difficult to interpret without simpler baselines. As a lower bound model we use the original MADDPG, which, having no recurrence mechanism, should perform worse than or on par with any of the modifications described above. We also include an oracle model (MADDPG-Oracle), that does not require theory of mind since it receives the current knowledge K for all agents in its observation space. The performance of MADDPG-Oracle may not always be achieved, as there could be unobserved communication with multiple situations happening with equal probability. Moreover, as the number of agents and size of the grid increases, current reinforcement learning models may not be able to find an optimal spatial exploration policy; they may also not be capable of inferring the optimal piece of information to communicate in larger settings. In these cases, MADDPG-Oracle may not perform optimally, so we also include a baseline with heuristic agents to compare performance.
Heuristic agents will always move to the center of the board and communicate round-robin all the information pieces they know until they have all the available knowledge. Then, they will move efficiently to their recharge base and come back to the center of the grid, where the process restarts. We must mention that this heuristic is not necessarily the perfect policy, but it will serve as a baseline to note settings where current MARL models fail even with perfect information. Qualitatively, smaller settings have been shown to approximately follow a policy like the heuristic just described.
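A rough Python sketch of this heuristic policy; the axis-to-direction mapping and helper names are assumptions of the sketch.

import numpy as np

def heuristic_action(pos, knowledge, base, center, t):
    """Head to the center until all pieces are known, then head to the base; speak round-robin."""
    target = base if knowledge.all() else center
    dx, dy = np.sign(np.asarray(target) - np.asarray(pos))
    # The mapping of grid axes to direction names is a convention assumed by this sketch.
    move = ("right" if dx > 0 else "left" if dx < 0 else
            "down" if dy > 0 else "up" if dy < 0 else "no movement")
    known = np.flatnonzero(knowledge)
    piece = int(known[t % known.size]) if known.size else None
    return move, piece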
6 DIRECT MODELING OF SYMMETRIC THEORY OF MIND
In contrast to RMADDPG (Wang et al., 2020), we specifically design algorithms for our environment to maximize performance. Intuitively, our model computes a matrix, K ∈ {0, 1}^{c×a}, that reflects the information pieces known by each agent from the perspective of the agent being modeled: K_ij reflects if the agent being modeled believes that agent j knows piece i. K is updated every turn and used as input of the following turn of the agent, obtaining the desired recurrent behavior. K is also concatenated to the usual observation space, to be processed by a two-layer ReLU MLP and
obtain the probability distributions for speech and movement, as in the original MADDPG. There are several ways to approximate K. It is important to note that each agent can only partially observe communication, and therefore it is impossible to perfectly compute K deterministically.
The current knowledge is comprised of first-hand information (the initial knowledge of every agent, F, publicly available) and second-hand information. Second-hand information may have been heard this turn (S, whose computation will be discussed below) or in previous turns (captured in the K received from the previous turn, noted K^(t−1)). Additionally, knowledge may be forgotten when an agent steps on a base having all the information pieces. To express this, we precompute a vector B ∈ {0, 1}^a that reflects whether each agent is currently on its base; and a vector E ∈ {0, 1}^a that determines if an agent is entitled to use its recharge base: E_j = 1{Σ_i K_ij = c} for all j ∈ {1, . . . , a}.
We are then able to compute K as follows:
K^(t)_ij = (F_ij = 1 or S_ij = 1 or K^(t−1)_ij = 1) and not (B_j and E_j) (2)
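In matrix form, this update can be sketched with boolean NumPy arrays; following Algorithm 1 in the appendix, the sketch keeps first-hand pieces F through a base reset, since only second-hand knowledge is forgotten (names are illustrative).

import numpy as np

def update_K(F, S, K_prev, B, E):
    """F, S, K_prev: c x a arrays; B, E: length-a 0/1 vectors (on-base and entitled-to-reset)."""
    known = (F == 1) | (S == 1) | (K_prev == 1)           # left part of Eq. 2
    reset = (B.astype(bool) & E.astype(bool))[None, :]     # agents that just triggered their base
    # First-hand pieces survive a reset (only second-hand knowledge is forgotten).
    return ((known & ~reset) | (F == 1)).astype(int)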
F, K^(t−1), and B are given as input, but we have not yet discussed the computation of the second-hand information S. S often cannot be deterministically computed, since our setting is partially observable. We will identify three behaviors and then compute S as the sum of the three:
S = S[0] + S[1] + S[2]
For simplicity, we will assume from now on that we are modeling agent k. S[0] will symbolize the implications of the information spoken by agent k: if agent k speaks a piece of information, it thus knows that every agent in its hearing range must have heard it (first order ToM). S[1] will symbolize the implications of information heard by k: this includes updating k’s known information (zeroth order ToM) and the information of every agent that is also in hearing range of the speaker heard by k. S[2] will symbolize the estimation of information pieces communicated between agents that are out of k’s hearing range. Since we assume perfect vision, k will be able to see if two agents are in range of each other, but not hear what they communicate (if they do at all).
S[0] and S[1] can be deterministically computed. To do so, it is key to note that every actor knows the set of actions A ∈ {0, 1}^{c×a} performed by each agent last turn, given that those actions were performed in their hearing range. Moreover, each agent knows which agents are in its range, as they all have perfect vision. We precompute H ∈ {0, 1}^{a×a} to denote if two given agents are in range.
Then, S[0]_ij = 1 if and only if information piece i was said by k, and agents k and j are in hearing range of each other. More formally,
S[0]_ij = A_ik · H_kj
S[1]_ij = 1 if and only if agent k (the actor we are modeling) heard some agent ℓ speaking information piece i, and agent j is also in range of agent ℓ. Note that agent k does not need to be in hearing range of agent j. More precisely,
S[1]_ij = A_iℓ · H_kℓ · H_ℓj , for some agent ℓ
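Both deterministic terms reduce to products of the action matrix A and the range matrix H; a NumPy sketch (k is the index of the modeled agent):

import numpy as np

def deterministic_sharing(A, H, k):
    """A: c x a actions of the last turn, H: a x a in-range matrix, k: modeled agent."""
    S0 = np.outer(A[:, k], H[k, :])                        # S[0]_ij = A_ik * H_kj
    S1 = ((A * H[k, :][None, :]) @ H > 0).astype(int)      # some l in k's range spoke i, and j is in l's range
    return S0, S1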
S[2], the interactions between agents not in hearing range of the agent we are modeling, can be estimated in different ways. A conservative approach is to not estimate interactions we do not witness (S[2] = 0, which we will call MADDPG-ConservativeEncounter (MADDPG-CE)); another approach is to assume that every interaction we do not witness results in sharing a piece of information that will maximize the rewards in that immediate turn. We will call this last approach MADDPG-GreedyEncounter (MADDPG-GE). MADDPG-GE assumes agents play optimally, but it does not necessarily know everything the other agents know, which could lead to wrong predictions. This is particularly true during training, as agents may not behave optimally. The computation of S[2] for MADDPG-GE is as follows.
First, we predict the information piece U_ℓ that agent ℓ uttered. MADDPG-GE predicts U_ℓ will be the piece that the least number of agents in range know, as it will maximize immediate reward:
U_ℓ = argmin_i Σ_j (K_ij and H_jℓ) ∈ {1, . . . , c}
With this prediction, agent j will know information i if at least one agent in its range said it:
S[2]_ij = 1 if there exists ℓ ≠ k such that U_ℓ = i and H_jℓ = 1 and j ≠ k, else 0
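A Python sketch of this greedy estimate; the `heard` mask marking which speakers agent k actually heard this turn is an illustrative simplification.

import numpy as np

def greedy_encounter(K, H, heard, k):
    """K: c x a believed knowledge; H: a x a in-range matrix; heard[l]: True if k heard agent l
    this turn (so no estimate is needed for l); k: index of the modeled agent."""
    c, a = K.shape
    S2 = np.zeros_like(K)
    for l in range(a):
        if l == k or heard[l]:
            continue
        # Predicted utterance U_l: the piece known by the fewest agents in l's range.
        U_l = int(np.argmin((K * H[l, :][None, :]).sum(axis=1)))
        for j in range(a):
            if j != k and H[j, l]:
                S2[U_l, j] = 1      # everyone in l's range is assumed to have heard U_l
    return S2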
MADDPG-EstimatedEncounter (MADDPG-EE) MADDPG-CE and MADDPG-GE are two paths to information sharing estimation, but in none of them do we estimate the probability of an agent knowing a specific piece of information. In MADDPG-EstimatedEncounter (MADDPG-EE), known information of other agents is not binary, i.e. Kij ∈ [0, 1]. This added flexibility can avoid making predictions of shared information based upon unreliable information.
MADDPG-EE estimates the probability that an agent j uttered each piece of information (U_j ∈ R^c) by providing the current information of all agents in its range to an MLP:
U_j = softmax(f(K_1j, . . . , K_cj, {K_1ℓ, . . . , K_cℓ for all ℓ where H_jℓ = 1})), with f an MLP
Then, the probability of having heard a specific piece of information will be the complement of not having heard it, which in turn means that none of the agents in range said it. More formally,
S[2]_ij = 1 − ∏_{ℓ : H_jℓ = 1} (1 − U_{ℓ,i})
Since MADDPG-EE requires functions to be differentiable, we use a differentiable approximation of Eq. 2. A pseudocode of MADDPG-EE’s implementation can be found in Section A.4. MADDPG-EE solely focuses on first order ToM, and we leave modeling with second order ToM to future work. The structure of the model would be similar but with an order of magnitude more parameters.
7 EXPERIMENTS
7.1 EXPERIMENTAL SETTINGS
In this section, we compare the different algorithms explained in the previous section. The observation space will be constituted of a processed version of the last turn in the episode, to keep the input size controlled. More precisely, the observation space is composed of: the position of all agents, all recharge bases, the current direction each agent is moving towards and what it communicated in the last turn, the presence of a wall in each of the immediate surroundings, and every agent’s first-hand information. First-hand information is publicly available in our experiments to moderate the difficulty of the setup, but this constraint could also be removed. This simple setting is still partially observable, since the agents cannot hear interactions outside of their hearing range.
We use the reward as our main evaluation metric. This metric indirectly evaluates ToM capabilities, since information-seeking is at the core of SymmToM. We train for 60,000 episodes with 7 random seeds to account for the high variance in the rewards obtained. Our policies are parametrized by a two-layer ReLU MLP with 64 units per layer, as in the original MADDPG (Lowe et al., 2017). MADDPG-EE’s function f is also a two-layer ReLU MLP with 64 units per layer.
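As a point of reference, a minimal PyTorch sketch of such a policy network follows; the input and output dimensions are illustrative, and splitting the output into movement and speech heads is one possible reading of the architecture, not necessarily the exact one used here.

import torch.nn as nn

obs_dim, n_moves, n_pieces = 128, 5, 8     # illustrative sizes; the paper does not fix obs_dim here

# A two-hidden-layer ReLU MLP with 64 units per layer, feeding two output heads.
trunk = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, 64), nn.ReLU())
move_head = nn.Linear(64, n_moves)         # distribution over movement actions
speak_head = nn.Linear(64, n_pieces)       # distribution over information pieces to speak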
We test two board sizes (w = 6 and w = 12), two numbers of agents (a = 3 and a = 4), and three quantities of information pieces (c = a, c = 2a, and c = 3a). The length of each episode is set to 5w. More detail about design decisions can be found in Section A.5.
7.2 MAIN RESULTS
As we can observe in Table 1, there is a significant difference in performance between MADDPG-Oracle and MADDPG (MADDPG-Oracle is 123% better on average): this confirms that developing ToM and recurrence is vital to perform successfully in SymmToM. MADDPG-Oracle is often not an upper bound: when c > a, the heuristic performs better (101% on average). This shows that even with perfect information, it can be difficult to learn the optimal policy using MADDPG. Moreover, models with a recurrence mechanism perform significantly better than MADDPG (61% better on average), also showing that remembering past information gives a notable advantage. As expected, having recurrent models tailored to our problem resulted in better performance than a general LSTM recurrence (RMADDPG). The performance of the best of the tailored models (MADDPG-CE, MADDPG-GE, MADDPG-EE) was 42% better on average than plain RMADDPG. RMADDPG was able to surpass the best of the tailored models only for a = 3, w = 12, c = 3a.
Increasing c generally decreases global rewards for learned agents (on average, rewards for c = 2a are 75.54% of those for c = a, and rewards for c = 3a are 75.66% of those for c = a).
This suggests that probabilistic decisions are harder to learn, or impossible to successfully navigate when several events are equally likely. MADDPG-EE did not show improvements over the other learned agents, and in some cases decreased its performance more heavily than other learned agents (e.g. w = 6, c = 3a). MADDPG-EE uses an MLP in its definition of S[2], which gives more flexibility but also implies a more complex function to learn. We leave to future work to explore other probabilistic agents, but the significant difference in performance between all of the learned models and the highest performing ones (MADDPG-Oracle and the heuristic) shows there is ample space for improvement in this task, and hence proves SymmToM to be a simple yet unsolved benchmark.
Increasing a results in a 10% reduction of performance on average for learned models. Nonetheless, the heuristic improved its rewards by an average of 46%, given the larger opportunities for rewards when including an additional listener. Overall, this implies that increasing a also makes the setup significantly more difficult. Finally, increasing w did not have a conclusive result: for a = 4 it consistently decreased performance by ∼17%, but for a = 3 we saw an improvement of 16% and 61% for c = a and c = 2a respectively.
In sum, modifying c and a provides an easy way of making a setting more difficult without introducing additional rules.
7.3 DISCUSSION
In the previous section we analyzed results using episode reward as the metric. Although the rewards in SymmToM are designed to correlate with information seeking and the knowledge state of the agents themselves and others, they do not show explicitly whether agents are exhibiting theory of mind. To do so more directly, we develop two categories of possible analyses: scenarios specifically designed to test theory of mind, and post-hoc analyses of episodes.
A classic example of a scenario specifically designed to test ToM behavior is the Sally-Anne task (Wimmer & Perner, 1983). This false belief task, originally designed for children, aims to test if a passive observer can answer questions about the beliefs of another person, in situations where that belief may not match reality. If we were to use it for machine ToM, we could repeat the experiment and ask an agent to predict the position of an object while varying the underlying conditions. This test is feasible because there is only one agent with freedom of action, which ensures that desired conditions are met every time. Testing becomes infeasible when giving multiple agents freedom of action, as constraints planned in test design may be broken by a collective drift from the strategy envisioned by the designer. Testing becomes easier if we allow for controlling all agents but one, as shown in Fig. 3. Other tests besides the ones shown may be designed. In particular, in Fig. 3d we show an example of probabilistic ToM where two communicative events are equally likely, but one could modify this scenario to have different probabilities and test the expected value of the turns until red successfully shares an information piece. One could also design retroactive deduction tests: for example, in Fig. 3d if red communicates and receives no reward, it can deduce that green had
received that information from blue. If there had been another agent (let’s say, a yellow agent) in range of blue when it spoke to green, the red agent could also update its knowledge about yellow. Results for the tests proposed in Figure 3 are detailed in Appendix A.1.
Post-hoc analysis also has its challenges in multi-agent settings, even in the most direct cases. Thanks to our reward shaping, using recharge bases is always the optimal move when an agent has all the information available: an agent will have a reward of (a−1)c for using the base, whereas it can only gain up to a − 1 + c − 1 per turn if it decides not to use it. Even in this case, small delays in using the base may occur, for example if the agent can gather additional rewards on its path to the base. More generally, having multiple agents makes a specific behavior attributable to any of several events happening at once, or a combination of them.
Even though it may be difficult to establish causality when observing single episodes, we developed metrics that comparatively show which models are using specific features of the environment better than others. Examples of metrics are the unsuccessful recharge base usage count; the number of times an agent shares an information piece everyone in its range already knows when having better alternatives; the number of times an agent moves away from every agent when not having all the information pieces available; among others. See Appendix A.2 for a detailed description and results. Reward can also be understood as a post-hoc metric with a more indirect interpretation.
Post-hoc analyses of single episodes can also be blurred by emergent communication. Because agents were trained together, they may assign special meanings to particular physical movements or messages. Even though qualitatively this does not seem to be the case for the models presented in the paper, tests should also account for future developments. This also implies that one should not overinterpret small differences in the metrics described in the paragraph above.
8 CONCLUSIONS AND FUTURE WORK
We defined a framework to analyze machine theory of mind in a multi-agent symmetric setting, a more realistic setup than the tasks currently used in the community. Based on the four properties needed for symmetric theory of mind to arise, we provided a simplified setup on which to test the problem, and we showed we can easily increase difficulty by growing the number of agents or communication pieces. Our main goal in this work was not to solve symmetric theory of mind, but rather to give a starting point to explore more complex models in this area. We showed that even with this minimal set of rules, SymmToM proves algorithmically difficult for current multi-agent deep reinforcement learning models, even when tailoring them to our specific task. We leave to
future work to develop models that handle second-order theory of mind and beyond, and models that periodically reevaluate past turns to make new deductions with information gained a posteriori (i.e., models that pass retroactive deduction tests). Another interesting direction would be to replace the information pieces with constrained natural language: communication sharing in our task is binary, whereas in language there is flexibility to communicate different subsets of a knowledge base using a single sentence. We will make our codebase public upon publication; it also includes additional observation space restrictions to increase difficulty.
ETHICS AND REPRODUCIBILITY STATEMENT
Theory of mind research at its core deals with understanding the mental states of other individuals. In the present work we focused on collaborative machine theory of mind, which entails interactions only between artificial agents, and only in scenarios where every party involved has the same incentive structure. This design decision is intentional. Other approaches to theory of mind research could include scenarios with human-agent interaction, which could potentially lead to agents learning to model human players’ mental states. This is not concerning per se, but in scenarios where players do not have the same incentive structure, it could lead to agents learning to deceive other players (potentially human players). The state of the art in machine theory of mind is still far away from these capabilities, but we believe that experiment design choices should always take this matter into account.
Regarding reproducibility, we will make all code public upon acceptance, including the environment and the models’ code. The exact set of parameters used for training will also be shared. Multi-agent reinforcement learning models do have high variance, so models should be run several times to see similar confidence intervals as the ones shown in Fig. 5 and Fig. 6.
A APPENDIX
A.1 AD-HOC THEORY OF MIND TESTS
We test on the four examples shown in Figure 3, adapting the examples to fit one of the grid sizes we already experimented on. For the tests described in Figure 3a and Figure 3b, we test two different grid sizes: w = 6 and w = 12. For the tests described in Figure 3c and Figure 3d we only test w = 12 and w = 6 respectively. Image depictions of the exact test configurations can be seen in Figure 4.
We measure three metrics: average success rate, average failure rate, and ratio of average turns to succeed vs. optimum (RATSO). Note that Average Success Rate and Average Failure Rate do not necessarily sum to 1 since these two metrics only include trials where the agent reached any of the two proposed outcomes. If, for example, the agent never moved from the starting point, the trial would not be counted positively towards Avg. Success Rate or Avg. Failure Rate. In addition, the ratio of average turns to succeed vs. optimum (RATSO) is the ratio between the average number of turns it took to succeed in successful trials and the optimal number of turns to succeed in a specific trial.
For the tests in Figure 4a, 4b, 4d, and 4e, the trial ends when the red agent reaches the hearing range of one of the two possible target agents. The test depicted in Figure 4f is a pass/fail test: if red moves suboptimally at any point before meeting blue, the trial is declared as failed. This makes it a particularly difficult test to pass at random. Because of the nature of this second order ToM test, we only report the average success rate. Finally, for the probabilistic ToM test we want to measure how fast red can communicate all the information it has to green. The optimal number of turns is 1.5 (as described in Figure 3), and because of the nature of this test we will only report RATSO.
All results can be found in Table 2. As expected, a larger average success rate correlates with higher reward models (MADDPG-CE and MADDPG-GE are the best models), suggesting that the
reward is a valuable overall metric. The low average success rates across all tests show there is significant room to improve in this benchmark. Success rate drops sharply when increasing the grid size, suggesting larger grids impose more difficult training settings. In this analysis, we used models that were trained specifically for each parameter combination.
Results for Oracle were omitted since some tests assumed no knowledge about communication (e.g. even though agents in Figure 4b do not communicate during the test, the test was designed to check whether the red agent assumed them to be communicating).
As we emphasized in the main text, many more tests can be proposed. The code base we will release allows for easily adding new tests to the suite.
A.2 POST-HOC ANALYSES
We developed several post hoc metrics to analyze specific aspects of our models.
• Unsuccessful recharge base usage rate: Average times per episode an agent steps on its recharge base without having all the information available (i.e. wrong usage of the recharge base). Note that an agent may step on its base just because it is on the shortest path to another cell. Therefore, a perfect theory of mind agent will likely not have zero on this score; but generally, lower is better. See results in Table 3.
• Wrong communication piece selection count: Average times per episode an agent attempted to say a piece of information it does not currently possess. In these cases, no communication happens. Lower is better. See results in Table 4.
• Useless communication piece selection count: Average times per episode an agent communicated an information piece that everyone in its hearing range already knew, when having a piece of information that at least one agent in its range did not know. Lower is better. See results in Table 5.
• Useless movement: Average times per episode an agent moves away from every agent that does not have the exact same information it has, given that the agent does not currently possess all the information available. This means that the agent is moving away from any possible valuable interaction. Lower is better. See results in Table 6.
All metrics are normalized by number of agents (i.e., they show the score for a single agent). This allows for better comparison between a = 3 and a = 4 settings.
RMADDPG had the worst scores for unsuccessful recharge base use rate and useless communication piece selection count. RMADDPG scored 43% more than Oracle for unsuccessful base usage on average, and 63% more than Oracle on average for usage of a useless communication piece. The best tailored models (MADDPG-CE and MADDPG-GE) performed similarly to Oracle on average for these two metrics. In contrast, MADDPG-CE and MADDPG-GE performed significantly worse than Oracle for the wrong communication piece selection count (48% and 56% more than Oracle on average). This suggests that all models may be making wrong decisions, but RMADDPG is biased towards communicating redundant information whereas MADDPG-CE and MADDPG-EE tend towards not communicating at all (the true effect of trying to communicate something they are not allowed to). Further analysis is needed to truly understand if these apparently wrong behaviors were done in turns where the agent had all the information available to make a better move, or if this is their default when they believe they have nothing of value to communicate. A priori, RMADDPG’s bias seems more principled, but it still showed worse performance overall.
No learned model performed particularly better in the useless movement metric (average differences in performance were less than 10%), suggesting that they perform pointless movements in similar frequencies. It is important not to overinterpret small differences in these metrics. For example, a useless movement may be a signal of emergent communication. Furthermore, an agent may communicate something suboptimal for its immediate reward but this move may not affect its expected reward for the trial.
A.3 TRAINING CURVES FOR THREE RANDOM SEEDS COMBINED
A.4 PSEUDOCODE OF MADDPG-EE
Algorithm 1 Actor implementation of MADDPG-EE, approximating K to make it differentiable.
Input: observation, A ∈ {0, 1}^{c×a}, agent_idx ∈ {0, . . . , a−1}, F ∈ {0, 1}^{c×a}, K ∈ {0, 1}^{c×a}, B ∈ {0, 1}^a, H ∈ {0, 1}^{a×a}
Remove each agent from its own hearing range, so that it does not "hear itself" from the previous turn; this would be problematic when using recharge bases:
H = H − I_{a×a}
Compute S[0], all the information spoken by agent_idx and heard by agents in its range:
S[0] = copy_a_times(A[:, agent_idx]) ⊙ copy_c_times(H[agent_idx, :])
Compute S[1], all information heard by agent_idx, spoken by any agent:
S[1] = (A ⊙ copy_c_times(H[agent_idx, :])) · H
Compute S[2], an estimation of information pieces communicated between agents that were out of agent_idx’s hearing range:
U_j = softmax(f1(K_1j, . . . , K_cj, {K_1ℓ, . . . , K_cℓ for all ℓ where H_jℓ = 1})), with f1 an MLP
S[2]_ij = 1 − ∏_{ℓ : H_jℓ = 1} (1 − U_{ℓ,i}) for all i ∈ {0, . . . , c−1} and j ∈ {0, . . . , a−1}
S = S[0] + S[1] + S[2]
E_j = 1{sum(K[:, j]) = c} for all j ∈ {0, . . . , a−1}
K = step(F · 100 + K + S − 2 · copy_c_times(B ⊙ E))
return softmax(f2([observation, K])), K, where f2 is an MLP
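For readers who prefer code over pseudocode, a minimal runnable NumPy version of this knowledge-matrix update is sketched below; f1 is a stand-in for the MLP, the fixed-size feature vector fed to it is a simplification of the variable-length neighbour set, and the action head f2 is omitted.

import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def maddpg_ee_K_update(A, F, K, B, H, agent_idx, f1):
    """A, F, K: c x a arrays; B: length-a on-base vector; H: a x a range matrix assumed to have
    1s on the diagonal; f1: stand-in MLP returning a length-c vector of utterance scores."""
    c, a = K.shape
    H = H - np.eye(a)                                      # drop each agent from its own range
    S0 = np.outer(A[:, agent_idx], H[agent_idx, :])        # spoken by agent_idx and heard
    S1 = (A * H[agent_idx, :][None, :]) @ H                # heard by agent_idx via some speaker
    U = np.zeros((a, c))                                   # estimated utterance distributions
    for j in range(a):
        feats = np.concatenate([K[:, j], (K * H[j, :][None, :]).ravel()])
        U[j] = softmax(f1(feats))
    S2 = np.zeros((c, a))
    for i in range(c):
        for j in range(a):
            S2[i, j] = 1.0 - np.prod([1.0 - U[l, i] for l in range(a) if H[j, l] > 0])
    S = S0 + S1 + S2
    E = (K.sum(axis=0) == c).astype(float)                 # entitled to trigger the base
    K_new = ((F * 100 + K + S - 2 * (B * E)[None, :]) > 0).astype(float)   # hard "step"
    return K_new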
A.5 EXPERIMENT DESIGN DECISIONS AND CONSIDERATIONS ABOUT RESULT PRESENTATION
All our experiments are with h = 1: only the immediate neighbors of an agent will hear what they communicate.
We tested with a = 3 and a = 4 and not larger numbers of agents, as the training time increases quadratically with a; also, the intrinsic difficulty of a larger setup (even with perfect information) would possibly degrade performance to the point of making it impossible to compare models.
Running experiments with the same number of turns for every setting would imply that agents can move less in combinations with larger values of w, hence the need to make it proportional to the size of the grid. Since the duration of the experiment is directly proportional to the length of the episodes, we settled on a small multiplier. 5w allows agents to move to each edge of the grid and back to the center. A similar decision is required when choosing c: having a constant number of information pieces when increasing the number of agents would make the problem easier, as each agent would have fewer options of first-hand information pieces.
We decided to evaluate by running additional episodes over the best checkpoints of each model because there was high variance for some runs, and drops in performance after achieving the highest rewards. Those results are the basis of the discussion and can be seen in Table 1. Still, we share the training curves so that the reader can observe these behaviors in Fig. 5 and Fig. 6.
We used the same hyperparameters as the ones used in MADDPG, except with a reduced learning rate and tau (lr = 0.001 and τ = 0.005). We used the same parameters for all our experiments. | 1. What is the focus and contribution of the paper regarding multi-agent environments?
2. What are the strengths of the proposed SymmToM approach, particularly in its symmetry and adaptability?
3. What are the weaknesses of the paper, especially regarding its concerns about observability, reward functions, experimentation, and organization?
4. Do you have any questions or suggestions regarding the partial observable Markov decision process, recharge bases, and hyperparameter settings?
5. How would you assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Review | Summary Of The Paper
This paper suggests a fully symmetric multi-agent environment based on theory of mind, SymmToM. In SymmToM, agents can remember their own information, infer the behaviors of other agents, and estimate probabilities about others. Interactions between agents are modeled by a partially observable Markov decision process. Each agent decides an action (move, speak) based on its state, which consists of partial observations. The authors admit that the proposed SymmToM cannot be completely solved. They adopt MADDPG for performance evaluations.
Review
This work considers a broader range of tasks by removing a pre-defined role of the agent. From the symmetric setting, every agent can play all roles without being biased towards a specific role.
Even though the paper has merit, the reviewer has the following concerns.
The paper adopts the partially observable Markov decision process. But in Section 7.1, the observation space is defined by a set that consists of the position of all agents, all recharge bases, every agent’s first-hand information, etc. It does not seem to be a “partially” observable setting. In addition, the assumption that the agent has full vision does not seem realistic in a large grid world. How can every agent know the initial knowledge of the other agents?
Recharge bases seem to be important in the proposed environment. However, the process that happens at the recharge bases is not fully explained. Why is all the gathered information removed? It would be better if the information pieces were removed depending on their importance, or on time.
The paper must include an exact definition of reward. The reward function is not provided in the paper.
The experimental results in Table 1 were obtained with only 3 runs, which is an insufficient number of trials to support the claim. In addition, it is not a good method to judge the performance of a specific algorithm by averaging results obtained under different experimental conditions.
The values in Table 1 do not have enough explanation. Do they represent a ‘per agent’ reward?
The paper should be reorganized. For example, Fig. 2 does not have specific captions (a/b/c), so it is hard to follow. In addition, Fig. 3 has 4 images with different sizes and #1 (line 3 in the caption) is not explained.
No parameter tuning or hyperparameter setting methods are described for the experiments. |
ICLR | Title
Symmetric Machine Theory of Mind
Abstract
Theory of mind (ToM), the ability to understand others’ thoughts and desires, is a cornerstone of human intelligence. Because of this, a number of previous works have attempted to measure the ability of machines to develop a theory of mind, with one agent attempting to understand anothers’ internal “mental state”. However, ToM agents are often tested as passive observers or in tasks with specific predefined roles, such as speaker-listener scenarios. In this work, we propose to model machine theory of mind in a more flexible and symmetric scenario; a multiagent environment SymmToM where all agents can speak, listen, see other agents, and move freely through a grid world. An effective strategy to solve SymmToM requires developing theory of mind to maximize each agent’s rewards. We show that multi-agent deep reinforcement learning models that model the mental states of other agents achieve significant performance improvements over agents with no such ToM model. At the same time, our best agents fail to achieve performance comparable to agents with access to the gold-standard mental state of other agents, demonstrating that the modeling of theory of mind in multi-agent scenarios is very much an open challenge.
1 INTRODUCTION
Human communication is shaped by the desire to efficiently cooperate and achieve communicative goals (Tomasello, 2009). Children learn from a young age that the others they interact with have independent mental states, and therefore communicating is necessary to obtain information from or shape the intentions of those they interact with. Remembering and reasoning over others’ mental states ensures efficient communication by avoiding having to repeat information, and in cases where cooperation is involved contributes to achieving a common goal with minimal effort.
Because of this, there is growing interest in developing agents that can exhibit this kind of behavior, referred to as Theory of Mind (ToM) by developmental psychologists (Premack & Woodruff, 1978).1 Previous work on agents imbued with some capability of ToM has focused mainly on two types of tasks. The former are tasks where the agent is a passive observer of a scene that has to predict the future by reasoning over others’ mental states. These tasks may involve natural language (Nematzadeh et al., 2018) or be purely spatial (Gandhi et al., 2021; Rabinowitz et al., 2018; Baker et al., 2011). The latter are tasks where the ToM agent has a specific role, such as “the speaker” in speaker-listener scenarios (Zhu et al., 2021).
In contrast, human cooperation and communication is very often multi-party, and rarely assumes that people have pre-fixed roles. Moreover, human interlocutors are seldom passive observers of a scene but instead proactively interact with their environment. Since previous domains limited us to research questions where most parties involved did not have an active role, we developed a more flexible environment where we can now study what happens when all participants must act as both speaker and listener. In this paper, we present SymmToM, a fully symmetric multi-agent environment where all agents can see, hear, speak, and move, and are active players of a simple information-gathering game. To solve SymmToM, agents need to exhibit different levels of ToM, as well as efficiently communicate through a simple channel with a fixed set of symbols.
1In the present work we focus solely on reasoning over mental states. Other aspects of ToM include understanding preferences, goals, and desires of others. Multi-agent scenarios for inferring agent’s goals have been studied (Ullman et al., 2009), and passive-observer benchmarks (Gandhi et al., 2021; Shu et al., 2021; Netanyahu* et al., 2021) have been proposed for evaluating understanding of agent’s goals and preferences.
SymmToM is a partially observable setting for all agents: even when agents have full vision, hearing may be limited. This also differentiates SymmToM from prior work, as modeling may require probabilistic theory of mind. In other words, agents need to not only remember and infer other agents’ knowledge based on what they saw, but also estimate the probability that certain events happened. This estimation may be performed by assuming other agents’ optimal behavior and processing the partial information available. Despite its simplicity, SymmToM fulfills the properties required for symmetric ToM to arise, which will be discussed in the following section.
We find that SymmToM cannot be completely solved neither by using well-known multi-agent deep reinforcement learning (RL) models, nor by tailoring those models to our task. We show that even maintaining the simple rules of the environment, modifying its parameters results in much more difficult challenges, even for models where we artificially introduce perfect information. We discuss examples where different levels of theory of mind are required to solve the task, and possible metrics.
2 THEORY-OF-MIND AGENTS
A belief Theory-of-Mind agent can be defined as a modification of the standard multi-agent RL paradigm, where the agents’ policies are conditioned on their beliefs about others. Formally, we define a reinforcement learning problem M as a tuple of a state space S, action space A, state transition probability function T ∈ S × A → R, and reward function R ∈ S × A → R, i.e.M := 〈S,A, T,R〉. In this setting, an agent learns a (possibly probabilistic) policy π : S → A that maps from states to actions, with the goal of maximizing reward.
In a multi-agent RL setting each agent can potentially have its own state space, action space, transition probabilities, and reward function, so we can define an instance ofMi = 〈Si,Ai, Ti, Ri〉 for each agent i. For convenience, we can also define a joint state space S = ⋃ i Si that describes the entire world in which all agents are interacting. Importantly, in this setting each agent will have its own view of the entirety of the world, described by a conditional observation function ωi : S → Ωi that maps from the state of the entire environment to only the information observable by agent i.
As elaborated above, ToM is the ability to know (and act upon) the knowledge that an agent has. Agents with no ToM will follow a policy that depends only on their current (potentially partial or noisy) observation of their environment: πi(ai,t | ωi(st)). Agents with zeroth order ToM can reason over their own knowledge. These agents will be stateful, πi(· | ωi(st), h(i)t ), where h (i) t is i’s hidden state. Hidden states are always accessible to their owner, i.e. i has access to h(i)t .
Agents with capabilities of reasoning over other agents’ mental states will need to estimate h(j)t for j 6= i. We will denote the estimation that i does of j’s mental state in time t as ĥ(i,j)t :
πi(· | ωi(st), h(i)t , ĥ (i,1) t , . . . ĥ (i,i−1) t , ĥ (i,i+1) t . . . ĥ (n) t )
How do we estimate ĥ(i,j)t ? As a function of i’s (the predicting agent) previous hidden state t−1, i’s observation in t−1, and i’s prediction of the hidden states of every agent in the previous turn:
ĥ (i+1) t = f(h (i) t−1, ωi(st−1), ĥ (i,1) t−1 , . . . ĥ (i,i−1) t−1 , ĥ (i,i+1) t−1 . . . ĥ (i,n) t−1 )
i’s prediction of other agents’ observation in t−1 is also crucial, but not explicitly mentioned since it can be computed using ωi(st−1). For the initial turn, ĥ (i,j) 0 may be initialized differently depending on the problem: if initial knowledge is public, ĥ(i,j)0 is trivial; if not, ĥ (i,j) 0 may be estimated.
3 SYMMETRIC THEORY-OF-MIND
We define symmetric theory of mind environments as settings where theory of mind is required to perform a task successfully, and all agents have the same abilities. There are at least four defining characteristics of symmetric ToM to arise:
Symmetric action space. In symmetric ToM all agents are required to have the same action space (in contrast to, for example, ToM tasks in speaker-listener settings). Concretely,Ai = Aj 6= ∅ ∀i, j.
Imperfect information. In perfect information scenarios all knowledge is public, making it impossible to have agents with different mental states. In ToM tasks in general, there could be a subset of agents with perfect information: one example would a passive observer that needs to predict future behavior. In symmetric ToM, since all agents have the same abilities and roles, all agents must have imperfect information. More precisely, ωi must not be the identity for any agent i.
Observation of others. Agents must have at least partial information of another agent to estimate its mental state. In contrast to passive-observer settings, in symmetric ToM every agent must be able to partially observe all others. More precisely, ωi must observe at least partial information about s (j) t (the subset of st that refers to agent j), although we do not require s (j) t 6= ∅ in every single turn. Moreover, if communication is allowed, it is desirable to partially observe or infer interactions between two or more agents to develop second order ToM (i.e. predicting what an agent thinks about what another agent is thinking) or higher.
Information-seeking behavior. It should be relevant for successfully performing the task to gather as much information as possible, and this information-gathering should involve some level of reasoning over other agent’s knowledge. This is true for first-order ToM tasks in general, and could be formalized as π∗ 6= π for any zeroth-order ToM policy πi(· | ωi(st), h(i)t ). Furthermore, it would be desirable to design a task with perpetual information seeking behavior, since it would ensure that all agents have an incentive to play efficiently even in long episodes. If one wants to design a task with perpetual information-seeking and finite knowledge, information must be forgotten eventually. A forgetting mechanism could be implemented as an explicit loss of knowledge under specific conditions, or make remembrances less reliable or noisy. Moreover, this introduces the concept of information staleness. Since information is not cumulative and the environment is only partially observable, agents will need to estimate whether what they knew to be true still holds in the present.
4 THE SYMMTOM ENVIRONMENT
SymmToM is an environment where a agents are placed in a w×w grid world, and attempt to maximize their reward by gathering all the information available in the environment. There are c available information pieces, that each agent may or may not know initially. Information pieces known at the start of an episode are referred to as first-hand information. Each turn, agents may move through the grid to one of its four neighboring cells, and may speak exactly one of their currently known information pieces. More precisely, the action space of agent j is defined as follows:
Aj = {left, right, up, down, no movement} × {1, . . . , c} (1)
When an agent utters an information piece, it is heard by every agent in its hearing range (an h×h grid centered in each agent, with h < 2w−1). The agents who heard the utterance will be able
to share this newly-learned information with others in following turns. We refer to this as secondhand information, since it is learned –as opposed to first-hand information, given at the start of each episode. The state space is comprised of the position of the agents and their current knowledge:
S = {{(pi, ki), for i ∈ {1, . . . , a}} where pi ∈ {1, . . . , w} × {1, . . . , w}, and ki ∈ {0, 1}c}
Each agent aims to maximize their individual reward Ri via information seeking and sharing. Rewards are earned by hearing a new piece of information, giving someone else a new piece of information, or correctly using recharge bases. Recharge bases are special cells where agents can reset their knowledge in exchange for a large reward (e.g. (n− 1)c times the reward for listening to or sharing new information). Each agent has its own stationary recharge base during an episode. To trigger a base, an agent must step into its designated base having acquired all the available pieces of information, causing the agent to lose all the second-hand information it learned. Recharge bases guarantee that there is always reward to seek in this environment. Concretely, if s = {(pi, ki), for i ∈ {1, . . . , a}} and ai = (adiri , acommi ), we can define the reward as the addition of the reward for hearing new information, giving new information, and using the recharge base:
Ri(s, ai) = ∑ i6=j 1{||pi − pj ||∞ ≤ h and ki,acommj = 0}+ ∑ i 6=j 1{||pi − pj ||∞ ≤ h and kj,acommi = 0}
+ (n− 1) · c · 1{pi = basei and kj = {1, 1, . . . , 1}}
A non-ToM agent can have only limited success in this environment. Without reasoning about its own knowledge (i.e. without zeroth order ToM), it does not know when to use a recharge base. Moreover, without knowledge about other agent’s knowledge (i.e. without first order ToM) it is not possibly to know which agents possess the information pieces it is lacking. Even if it accidentally hears information, a non-first-order ToM agent cannot efficiently decide what to utter in response to maximize its reward. Higher order ToM is also often needed in SymmToM, as we will discuss further in Section 7.3.
Even though we only discussed a collaborative task for SymmToM, it can easily be extended for competitive tasks. Moreover, all our models are also designed to work under competitive settings. SymmToM satisfies the desiderata we laid out in the previous section, as we will detail below:
Symmetric action space. As defined in Eq. 1,Ai = Aj for all i, j. Only a subset may be available at a time since agents cannot step outside the grid, speak a piece they have not heard, or move if they would collide with another agent in the same cell, but they all share the same action space.
Imperfect information. Messages sent by agents outside of the hearing range will not be heard. For example, in Fig. 2a green sends a message but it is not heard by anyone, since it is outside of red’s and blue’s range. Hearing ranges are guaranteed not to cover the whole grid, since h < 2w−1.
Observation of others. Agents have perfect vision of the grid, even if they cannot hear what was said outside of their hearing range. Hence, an agent may see that two agents were in range of each
other, and thus probably interacted, but not hear what was communicated. An example of this can be seen in Fig. 2a, where green observes blue and red interacting without hearing what was uttered.
The uncertainty in the observation also differentiates SymmToM from prior work: to solve the task perfectly, an agent needs to assess the probability that other agents outside its hearing range shared a specific piece of information to avoid repetition. This estimation may be performed using the knowledge of what each agent knows (first order ToM), the perceived knowledge of each of the agents in the interaction (second order ToM), as well as higher order ToM.
Information-seeking behavior Rewards are explicitly given for hearing and sharing novel information, guaranteeing information-seeking is crucial in SymmToM. Recharge bases ensure that the optimal solution is not for all agents to accumulate in the same spot and quickly share all the information available; and that the information tracking required is more complex than accumulating past events. Conceptually, with recharge bases we introduce an explicit and observable forgetting mechanism. As discussed in Section 3, this allows for perpetual information seeking and requires information staleness estimation. An example of successful recharge base use is shown in Fig. 2b.
5 BASELINE LEARNING ALGORITHMS AND OTHER BOUNDS
To learn a policy for acting in the multi-agent SymmToM environment, a natural strategy is to use a multi-agent reinforcement learning algorithm. We use MADDPG (Lowe et al., 2017), a well-known multi-agent actor-critic framework with centralized planning and decentralized execution, to counter the non-stationary nature of multi-agent settings. In MADDPG, each actor policy receives its observation space as input, and outputs the probability of taking each action.
Notably, actors in MADDPG have no mechanism for remembering past turns. This is a critical issue in SymmToM, as agents cannot remember which pieces they currently know, which ones they shared and to whom, and other witnessed interactions. To mitigate this, it is necessary to add a recurrence mechanism to carry over information from past turns. One option would be to modify the agent policy using a recurrent network like an LSTM, as RMADDPG (Wang et al., 2020) does.
Perfect Information, Heuristic and Lower Bound Models. Performance is difficult to interpret without simpler baselines. As a lower bound model we use the original MADDPG, which, since it has no recurrence mechanism, should perform no better than any of the modifications described above. We also include an oracle model (MADDPG-Oracle), which does not require theory of mind since it receives the current knowledge K for all agents in its observation space. The performance of MADDPG-Oracle may not always be achieved, as there could be unobserved communication with multiple situations happening with equal probability. Moreover, as the number of agents and size of the grid increases, current reinforcement learning models may not be able to find an optimal spatial exploration policy; they may also not be capable of inferring the optimal piece of information to communicate in larger settings. In these cases, MADDPG-Oracle may not perform optimally, so we also include a baseline with heuristic agents to compare performance.
Heuristic agents will always move to the center of the board and communicate round-robin all the information pieces they know until they have all the available knowledge. Then, they will move efficiently to their recharge base and come back to the center of the grid, where the process restarts. This heuristic is not necessarily the optimal policy, but it serves as a baseline to identify settings where current MARL models fail even with perfect information. Qualitatively, agents in smaller settings have been observed to approximately follow a policy like the heuristic just described.
6 DIRECT MODELING OF SYMMETRIC THEORY OF MIND
In contrast to RMADDPG (Wang et al., 2020), we specifically design algorithms for our environment to maximize performance. Intuitively, our model computes a matrix, K ∈ {0, 1}c×a, that reflects the information pieces known by each agent from the perspective of the agent being modeled: Kij reflects if the agent being modeled believes that agent j knows i. K is updated every turn and used as input of the following turn of the agent, obtaining the desired recurrent behavior. K is also concatenated to the usual observation space, to be processed by a two-layer ReLU MLP and
obtain the probability distributions for speech and movement, as in the original MADDPG. There are several ways to approximate K. It is important to note that each agent can only partially observe communication, and therefore it is impossible to perfectly compute K deterministically.
The current knowledge is comprised of first-hand information (the initial knowledge of every agent, F , publicly available) and second-hand information. Second-hand information may have been heard this turn (S, whose computation will be discussed below) or in previous turns (captured in the K received from the previous turn, noted K(t−1)). Additionally, knowledge may be forgotten when an agent steps on a base having all the information pieces. To express this, we precompute a vector B ∈ {0, 1}^a that reflects whether each agent is currently on its base; and a vector E ∈ {0, 1}^a that determines if an agent is entitled to use their recharge base: Ej = 1{∑_i Kij = c} for all j ∈ {1, . . . , a}.
We are then able to compute K as follows:
K(t)ij = (Fij = 1 or Sij = 1 or K(t−1)ij = 1) and not (Bj and Ej) (2)
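A minimal sketch of this update, assuming 0/1 numpy arrays with the shapes given in the text (F, S, and K are c × a, B is length a); the function name and the use of the previous-turn K to compute entitlement are illustrative choices, not the paper's implementation.

```python
import numpy as np

def update_knowledge(F, S, K_prev, B):
    """Sketch of Eq. 2. F, S, K_prev are c x a 0/1 arrays; B is a length-a
    0/1 vector marking agents standing on their own recharge base."""
    E = (K_prev.sum(axis=0) == K_prev.shape[0])    # entitled: agent knows all c pieces
    known = (F == 1) | (S == 1) | (K_prev == 1)
    forget = (B == 1) & E                           # base triggered -> knowledge reset
    return (known & ~forget[None, :]).astype(int)
```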
F , K(t−1), and B are given as input, but we have not yet discussed the computation of the second-hand information S. S often cannot be deterministically computed, since our setting is partially observable. We will identify three behaviors and then compute S as the sum of the three:
S = S[0] + S[1] + S[2]
For simplicity, we will assume from now on that we are modeling agent k. S[0] will symbolize the implications of the information spoken by agent k: if agent k speaks a piece of information, they thus know that every agent in its hearing range must have heard it (first order ToM). S[1] will symbolize the implications of information heard by k: this includes updating k’s known information (zeroth order ToM) and the information of every agent that is also in hearing range of the speaker heard by k. S[2] will symbolize the estimation of information pieces communicated between agents that are out of k’s hearing range. Since we assume perfect vision, k will be able to see if two agents are in range of each other, but not hear what they communicate (if they do at all).
S[0] and S[1] can be deterministically computed. To do so, it is key to note that every actor knows the set of actions A ∈ {0, 1}c×a performed by each agent last turn, given that those actions were performed in their hearing range. Moreover, each agent knows which agents are in its range, as they all have perfect vision. We precompute H ∈ {0, 1}a×a to denote if two given agents are in range.
Then, S[0]ij = 1 if and only if information piece i was said by k, and agents k and j are in hearing range of each other. More formally,
S[0]ij = Aik · Hkj
S[1]ij = 1 if and only if agent k (the actor we are modeling) heard some agent ℓ speaking information piece i, and agent j is also in range of agent ℓ. Note that agent k does not need to be in hearing range of agent j. More precisely,
S[1]ij = Aiℓ · Hkℓ · Hℓj , for some agent ℓ
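Both deterministic terms reduce to products of the action matrix A and the range matrix H. The sketch below is illustrative only; k denotes the index of the modeled agent and all names are assumptions.

```python
import numpy as np

def deterministic_second_hand(A, H, k):
    """S0[i, j]: piece i spoken by agent k and heard by agent j.
    S1[i, j]: piece i spoken by some agent l heard by k, with j also in l's range.
    A is c x a (piece spoken per agent), H is a x a (hearing-range indicator)."""
    S0 = np.outer(A[:, k], H[k, :])          # S0_ij = A_ik * H_kj
    S1 = (A * H[k, :][None, :]) @ H          # S1_ij = sum_l A_il * H_kl * H_lj
    return (S0 > 0).astype(int), (S1 > 0).astype(int)
```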
S[2], the interactions between agents not in hearing range of the agent we are modeling, can be estimated in different ways. A conservative approach is to not estimate interactions we do not witness (S[2] = 0), which we will call MADDPG-ConservativeEncounter (MADDPG-CE); another approach is to assume that every interaction we do not witness results in sharing the piece of information that maximizes the rewards in that immediate turn. We call this last approach MADDPG-GreedyEncounter (MADDPG-GE). MADDPG-GE assumes agents play optimally, but it does not have full knowledge of what other agents know, which can lead to wrong predictions. This is particularly true during training, as agents may not behave optimally. The computation of S[2] for MADDPG-GE is as follows.
First, we predict the information piece Uℓ that agent ℓ uttered. MADDPG-GE predicts Uℓ will be the piece that the least number of agents in range know, as it will maximize immediate reward:
Uℓ = argmin_{i ∈ {1, . . . , c}} ∑_j (Kij and Hjℓ)
With this prediction, agent j will know information i if at least one agent in its range said it:
S[2]ij = 1 if there exists ℓ ≠ k such that Uℓ = i and Hjℓ = 1 and j ≠ k, else 0
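A sketch of the greedy estimate used by MADDPG-GE follows. It assumes each unobserved speaker utters the piece known by the fewest agents in its range; restricting the loop to speakers outside k's hearing range is an assumption made here for clarity, since in-range interactions are already covered by S[1].

```python
import numpy as np

def greedy_second_hand(K, H, k, c, a):
    """S2[i, j] = 1 if some agent l (not k, not j) outside k's hearing range is
    assumed to have uttered piece i while j was in l's range."""
    S2 = np.zeros((c, a), dtype=int)
    for l in range(a):
        if l == k or H[k, l]:        # only estimate speakers k could not hear
            continue
        # greedy guess: the piece known by the fewest agents in l's range
        counts = (K * H[l, :][None, :]).sum(axis=1)
        u = int(np.argmin(counts))
        for j in range(a):
            if j != k and j != l and H[j, l]:
                S2[u, j] = 1
    return S2
```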
MADDPG-EstimatedEncounter (MADDPG-EE) MADDPG-CE and MADDPG-GE are two paths to information sharing estimation, but in none of them do we estimate the probability of an agent knowing a specific piece of information. In MADDPG-EstimatedEncounter (MADDPG-EE), known information of other agents is not binary, i.e. Kij ∈ [0, 1]. This added flexibility can avoid making predictions of shared information based upon unreliable information.
MADDPG-EE estimates the probability that an agent j uttered each piece of information (Uj ∈ Rc) by providing the current information of all agents in its range to an MLP:
Uj = softmax(f(K1j , . . . , Kcj , {K1ℓ, . . . , Kcℓ for all ℓ where Hjℓ = 1})), with f an MLP
Then, the probability of having heard a specific piece of information will be the complement of not having heard it, which in turn means that none of the agents in range said it. More formally,
S[2]ij = 1 − ∏_{ℓ : Hjℓ = 1} (1 − Uℓi)
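A sketch of the resulting soft estimate, assuming utter_probs[ℓ] holds the vector Uℓ ∈ R^c produced by the MLP f; all names are illustrative.

```python
import numpy as np

def soft_second_hand(utter_probs, H, k, c, a):
    """S2[i, j]: probability that agent j heard piece i from at least one
    agent l in its range, with l outside the modeled agent k's range."""
    S2 = np.zeros((c, a))
    for j in range(a):
        p_not_heard = np.ones(c)
        for l in range(a):
            if l == k or not H[j, l]:
                continue
            p_not_heard *= 1.0 - utter_probs[l]   # l did not utter each piece
        S2[:, j] = 1.0 - p_not_heard
    return S2
```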
Since MADDPG-EE requires functions to be differentiable, we use a differentiable approximation of Eq. 2. Pseudocode for MADDPG-EE’s implementation can be found in Section A.4. MADDPG-EE solely focuses on first order ToM, and we leave modeling with second order ToM to future work. The structure of the model would be similar but with an order of magnitude more parameters.
7 EXPERIMENTS
7.1 EXPERIMENTAL SETTINGS
In this section, we compare the different algorithms explained in the previous section. The observation space will be constituted of a processed version of the last turn in the episode, to keep the input size controlled. More precisely, the observation space is composed of: the position of all agents, all recharge bases, the direction each agent is currently moving towards and what they communicated in the last turn, the presence of a wall in each of the immediate surroundings, and every agent’s first-hand information. First-hand information is publicly available in our experiments to moderate the difficulty of the setup, but this constraint could also be removed. This simple setting is still partially observable, since the agents cannot hear interactions outside of their hearing range.
We use the reward as our main evaluation metric. This metric indirectly evaluates ToM capabilities, since information-seeking is at the core of SymmToM. We train through 60000 episodes, and with 7 random seeds to account for high variances in the rewards obtained. Our policies are parametrized by a two-layer ReLU MLP with 64 units per layer, as in the original MADDPG (Lowe et al., 2017). MADDPG-EE’s function f is also a two-layer ReLU MLP with 64 units per layer.
We test two board sizes (w = 6 and w = 12), two numbers of agents (a = 3 and a = 4), and three quantities of information pieces (c = a, c = 2a, c = 3a). The length of each episode is set to 5w. More details about design decisions can be found in Section A.5.
7.2 MAIN RESULTS
As we can observe in Table 1, there is a significant difference in performance between MADDPG-Oracle and MADDPG (MADDPG-Oracle is 123% better on average): this confirms that developing ToM and recurrence is vital to perform successfully in SymmToM. MADDPG-Oracle is often not an upper bound: when c > a, the heuristic performs better (101% on average). This shows that even with perfect information, it can be difficult to learn the optimal policy using MADDPG. Moreover, models with a recurrence mechanism perform significantly better than MADDPG (61% better on average), also showing that remembering past information gives a notable advantage. As expected, having recurrent models tailored to our problem resulted in better performance than a general LSTM recurrence (RMADDPG). The performance of the best of the tailored models (MADDPG-CE, MADDPG-GE, MADDPG-EE) was 42% better on average than plain RMADDPG. LSTM was able to surpass the best of the tailored models only for a = 3, w = 12, c = 3a.
Increasing c generally decreases global rewards for learned agents (on average, rewards for c = 2a are 75.54% of those for c = a, and rewards for c = 3a are 75.66% of those for c = a).
This suggests that probabilistic decisions are harder to learn, or impossible to successfully navigate when several events are equally likely. MADDPG-EE did not show improvements over the other learned agents, and in some cases decreased its performance more heavily than other learned agents (e.g. w = 6, c = 3a). MADDPG-EE uses an MLP in its definition of S[2], which gives more flexibility but also implies a more complex function to learn. We leave to future work to explore other probabilistic agents, but the significant difference in performance between all of the learned models and the highest performing ones (MADDPG-Oracle and the heuristic) shows there is ample space for improvement in this task, and hence proves SymmToM to be a simple yet unsolved benchmark.
Increasing a results in a 10% reduction of performance on average for learned models. Nonetheless, the heuristic improved its rewards by an average of 46%, given the larger opportunities for rewards when including an additional listener. Overall, this implies that increasing a also makes the setup significantly more difficult. Finally, increasing w did not have a conclusive result: for a = 4 it consistently decreased performance by ∼17%, but for a = 3 we saw an improvement of 16% and 61% for c = a and c = 2a respectively.
In sum, modifying c and a provides an easy way of making a setting more difficult without introducing additional rules.
7.3 DISCUSSION
In the past section we analyzed results using the metric of episode rewards. Although the rewards in SymmToM are designed to correlate with information seeking and knowledge state of the agent themselves and others, they do not show explicitly if agents are exhibiting theory of mind. To do so more directly, we develop two categories of possible analyses: scenarios specifically designed to test theory of mind, and post-hoc analyses of episodes.
A classic example of a scenario specifically designed to test ToM behavior is the Sally-Anne task (Wimmer & Perner, 1983). This false belief task, originally designed for children, aims to test if a passive observer can answer questions about the beliefs of another person, in situations where that belief may not match reality. If we were to use it for machine ToM, we could repeat the experiment and ask an agent to predict the position of an object varying the underlying conditions. This test is feasible because there is only one agent with freedom of action, which ensures that desired conditions are met every time. Testing becomes unfeasible when giving multiple agents freedom of action, as constraints planned in test design may be broken by a collective drift from the strategy thought by the designer. Testing becomes easier if we allow for controlling all agents but one, as shown in Fig. 3. Other tests besides the ones shown may be designed. In particular, in Fig. 3d we show an example of probabilistic ToM where two communicative events are equally likely, but one could modify this scenario to have different probabilities and test the expected value of the turns until red successfully shares an information piece. One could also design retroactive deduction tests: for example, in Fig. 3d if red communicates and receives no reward, it can deduce that green had
received that information from blue. If there had been another agent (let’s say, a yellow agent) in range of blue when it spoke to green, the red agent could also update its knowledge about yellow. Results for the tests proposed in Figure 3 are detailed in Appendix A.1.
Post-hoc analysis also has its challenges in multi-agent settings, even in the most direct cases. Thanks to our reward shaping, using recharge bases is always the optimal move when an agent has all the information available: an agent will have a reward of (n − 1)c for using the base, whereas it can only gain up to n − 1 + c − 1 per turn if it decides not to use it. Even in this case, small delays in using the base may occur, for example if the agent can gather additional rewards on its path to the base. More generally, having multiple agents makes a specific behavior attributable to any of the several events happening at once, or a combination of them.
Even though it may be difficult to establish causality when observing single episodes, we developed metrics that comparatively show which models are using specific features of the environment better than others. Examples of metrics are unsuccessful recharge base usage count; number of times an agent shares an information piece everyone in its range already knows when having better alternatives; number of times an agent moves away from every agent when not having all the information pieces available; among others. See Appendix A.2 for detailed descriptions and results. Reward can also be understood as a post-hoc metric with a more indirect interpretation.
Post-hoc analyses of single episodes can also be blurred by emergent communication. Because agents were trained together, they may develop special meaning assignment to particular physical movements or messages. Even though qualitatively this does not seem to be the case for the models presented in the paper, tests should also account for future developments. This also implies that one should not overinterpret small differences in the metrics described in the paragraph above.
8 CONCLUSIONS AND FUTURE WORK
We defined a framework to analyze machine theory of mind in a multi-agent symmetric setting, a more realistic setup than the tasks currently used in the community. Based on the four properties needed for symmetric theory of mind to arise, we provided a simplified setup on which to test the problem, and we showed we can easily increase difficulty by growing the number of agents or communication pieces. Our main goal in this work was not to solve symmetric theory of mind, but rather to give a starting point to explore more complex models in this area. We showed that even with this minimal set of rules, SymmToM proves algorithmically difficult for current multi-agent deep reinforcement learning models, even when tailoring them to our specific task. We leave to
future work to develop models that handle second-order theory of mind and beyond, and models that periodically reevaluate past turns to make new deductions with information gained a posteriori (i.e., models that pass retroactive deduction tests). Another interesting direction would be to replace the information pieces with constrained natural language: communication sharing in our task is binary, whereas in language there is flexibility to communicate different subsets of a knowledge base using a single sentence. We will make our codebase public upon publication; it also includes additional observation space restrictions to increase difficulty.
ETHICS AND REPRODUCIBILITY STATEMENT
Theory of mind research at its core deals with understanding the mental states of other individuals. In the present work we focused on collaborative machine theory of mind, which entails interactions only between artificial agents, and only in scenarios where every party involved has the same incentive structure. This design decision is intentional. Other approaches to theory of mind research could include scenarios with human-agent interaction, which could potentially lead to agents learning to model human players’ mental states. This is not concerning per se, but in scenarios where players do not have the same incentive structure, it could lead to agents learning to deceive other players (potentially human players). The state of the art in machine theory of mind is still far away from these capabilities, but we believe that experiment design choices should always take this matter into account.
Regarding reproducibility, we will make all code public upon acceptance, including the environment and the models’ code. The exact set of parameters used for training will also be shared. Multi-agent reinforcement learning models do have high variance, so models should be run several times to see similar confidence intervals as the ones shown in Fig. 5 and Fig. 6.
A APPENDIX
A.1 AD-HOC THEORY OF MIND TESTS
We test on the four examples shown in Figure 3, adapting the examples to fit one of the grid sizes we already experimented on. For the tests described in Figure 3a and Figure 3b, we test two different grid sizes: w = 6 and w = 12. For the tests described in Figure 3c and Figure 3d we only test w = 12 and w = 6 respectively. Image depictions of the exact test configurations can be seen in Figure 4.
We measure three metrics: average success rate, average failure rate, and ratio of average turns to succeed vs. optimum (RATSO). Note that Average Success Rate and Average Failure Rate do not necessarily sum to 1 since these two metrics only include trials where the agent reached either of the two proposed outcomes. If, for example, the agent never moved from the starting point, the trial would not be counted positively towards Avg. Success Rate or Avg. Failure Rate. In addition, ratio of average turns to succeed vs. optimum (RATSO) is the ratio between the average turns it took to succeed in successful trials, and the optimum number of turns to succeed in a specific trial.
For the tests in Figure 4a, 4b, 4d, and 4e, the trial ends when the red agent reaches the hearing range of one of the two possible target agents. The test depicted in Figure 4f is a pass/fail test: if red moves suboptimally at any point before meeting blue, the trial is declared as failed. This makes it a particularly difficult test to pass at random. Because of the nature of this second order ToM test, we only report the average success rate. Finally, for the probabilistic ToM test we want to measure how fast red can communicate all the information it has to green. The optimal number of turns is 1.5 (as described in Figure 3), and because of the nature of this test we will only report RATSO.
All results can be found in Table 2. As expected, a larger average success rate correlates with higher reward models (MADDPG-CE and MADDPG-GE are the best models), suggesting that the
reward is a valuable overall metric. The low average success rates across all tests show there is significant room to improve in this benchmark. Success rate drops sharply when increasing the grid size, suggesting larger grids impose more difficult training settings. In this analysis, we used models that were trained specifically for each parameter combination.
Results for Oracle were omitted since some tests assume no knowledge about communication (e.g. even though the agents in Figure 4b do not communicate during the test, the test was designed to check whether the red agent assumed they did).
As we emphasized in the main text, many more tests can be proposed. The code base we will release allows for easily adding new tests to the suite.
A.2 POST-HOC ANALYSES
We developed several post hoc metrics to analyze specific aspects of our models.
• Unsuccessful recharge base usage rate: Average times per episode an agent steps on its recharge base without having all the information available (i.e. wrong usage of the recharge base). Note that an agent may step on its base just because it is on the shortest path to another cell. Therefore, a perfect theory of mind agent will likely not have zero on this score; but generally, lower is better. See results in Table 3.
• Wrong communication piece selection count: Average times per episode an agent attempted to say a piece of information it does not currently possess. In these cases, no communication happens. Lower is better. See results in Table 4.
• Useless communication piece selection count: Average times per episode an agent communicated an information piece that everyone in its hearing range already knew, when having a piece of information that at least one agent in its range did not know. Lower is better. See results in Table 5.
• Useless movement: Average times per episode an agent moves away from every agent that does not have the exact same information it has, given that the agent does not currently possess all the information available. This means that the agent is moving away from any possible valuable interaction. Lower is better. See results in Table 6.
All metrics are normalized by number of agents (i.e., they show the score for a single agent). This allows for better comparison between a = 3 and a = 4 settings.
RMADDPG had the worst scores for unsuccessful recharge base use rate and useless communication piece selection count. RMADDPG scored 43% more than Oracle for unsuccessful base usage on average, and 63% more than Oracle on average for usage of a useless communication piece. The best tailored models (MADDPG-CE and MADDPG-GE) performed similarly to Oracle on average for these two metrics. In contrast, MADDPG-CE and MADDPG-GE performed significantly worse than Oracle for the wrong communication piece selection count (48% and 56% more than Oracle on average). This suggests that all models may be making wrong decisions, but RMADDPG is biased towards communicating redundant information whereas MADDPG-CE and MADDPG-EE tend towards not communicating at all (the true effect of trying to communicate something they are not allowed to). Further analysis is needed to truly understand if these apparently wrong behaviors were done in turns where the agent had all the information available to make a better move, or if this is their default when they believe they have nothing of value to communicate. A priori, RMADDPG’s bias seems more principled, but it still showed worse performance overall.
No learned model performed particularly better in the useless movement metric (average differences in performance were less than 10%), suggesting that they perform pointless movements in similar frequencies. It is important not to overinterpret small differences in these metrics. For example, a useless movement may be a signal of emergent communication. Furthermore, an agent may communicate something suboptimal for its immediate reward but this move may not affect its expected reward for the trial.
A.3 TRAINING CURVES FOR THREE RANDOM SEEDS COMBINED
A.4 PSEUDOCODE OF MADDPG-EE
Algorithm 1 Actor implementation of MADDPG-EE, approximating K to make it differentiable.
Input: observation, A ∈ {0, 1}^{c×a}, agent_idx ∈ {0, . . . , a − 1}, F ∈ {0, 1}^{c×a}, K ∈ {0, 1}^{c×a}, B ∈ {0, 1}^a, H ∈ {0, 1}^{a×a}
Make agents not be in their own hearing range, to avoid talking to themselves from the previous turn. This would be problematic when using recharge bases.
H = H − I_{a×a}
Compute S[0], all the heard information spoken by agent_idx:
S[0] = copy_a_times(A[:, agent_idx]) ⊙ copy_c_times(H[agent_idx, :])
Compute S[1], all heard information by agent_idx, spoken by all agents:
S[1] = (A ⊙ copy_c_times(H[agent_idx, :])) · H
Compute S[2], an estimation of information pieces communicated between agents that were out of agent_idx’s hearing range:
Uj = softmax(f1(K1j , . . . , Kcj , {K1ℓ, . . . , Kcℓ for all ℓ where Hjℓ = 1})), with f1 an MLP
S[2]ij = 1 − ∏_{ℓ : Hjℓ = 1} (1 − Uℓi) for all i ∈ {0, . . . , c − 1} and j ∈ {0, . . . , a − 1}
S = S[0] + S[1] + S[2]
Ej = 1{sum(K[:, j]) = c} for all j ∈ {0, . . . , a − 1}, giving E ∈ {0, 1}^a
K = step(F · 100 + K + S − 2 · copy_c_times(B ⊙ E))
return softmax(f2([observation, K])), K, where f2 is an MLP
A.5 EXPERIMENT DESIGN DECISIONS AND CONSIDERATIONS ABOUT RESULT PRESENTATION
All our experiments are with h = 1: only the immediate neighbors of an agent will hear what they communicate.
We tested with a = 3 and a = 4 and not larger numbers of agents, as the training time increases quadratically with a; also, the intrinsic difficulty of a larger setup (even with perfect information) would possibly degrade performance to the point of making it impossible to compare models.
Running experiments with the same number of turns for every setting would imply that agents can move less in combinations with larger values of w, hence the need of making it proportional to the size of the grid. Since the duration of the experiment is directly proportional to the length of the episodes, we settled on a small multiplier. 5w allows agents to move to each edge of the grid and back to the center. A similar decision is required when choosing c: having a constant number of information pieces when increasing the number of agents would make the problem easier, as each agent would have fewer options of first-hand information pieces.
We decided to evaluate running additional episodes over the best checkpoints of each model because there was high variance for some runs, and drops in performance after achieving the highest rewards. Those results are the base of the discussion and can be seen in Table 1. Still, we share the training curves so that the reader can observe these behaviors in Fig. 5 and Fig. 6.
We used the same hyperparameters as the ones used in MADDPG, except with a reduced learning rate and tau (lr = 0.001 and τ = 0.005). We used the same parameters for all our experiments. | 1. What is the main contribution of the paper regarding the Theory of Mind (ToM) capabilities?
2. What are the strengths of the proposed SymmToM environment, especially in its simplicity and ability to demonstrate room for improvement?
3. What are some potential weaknesses or limitations of the SymmToM environment, such as the simplified parametrization of belief/knowledge states and the challenge of coordination?
4. How might direct analysis of baseline performance and human behavior on the task help clarify the effectiveness of SymmToM as a benchmark for ToM? | Summary Of The Paper
Review | Summary Of The Paper
This paper introduces SymmToM, an environment aimed at benchmarking agents' Theory of Mind (ToM) capabilities. In motivating the construction of this environment, it outlines criteria.
It first defines ToM agents in terms of knowledge/prediction of each agent's own hidden state,
It then defines "Symmetric Theory of Mind" environments as having certain properties: symmetric action space, imperfect information, observation of others, and information-seeking behavior (really, the task-relevance of it). This is motivated by attempts to move away from more traditional ToM situations, e.g. where one agent is a passive observer or agents have one of several different designated roles.
With this, it defines the environment, a simple grid environment with several modifiable parameters.
It then defines several reasonable baselines: an oracle (that knows info states of all agents but must learn what to do with this via MADDPG -- so, not necessarily an upper bound), an heuristic (consisting of a simple coordinated strategy between agents that is also not an upper bound, but represents decent coordinative behavior), and several hand-crafted agents that explicitly track knowledge estimates (these have a great deal of domain-specific architecture).
It then tests each of these baselines on several versions of the environment (varying the aforementioned parameters). In all but the simplest versions, the heuristic method dwarfs the performance of everything else (including the oracle), thereby demonstrating room to improve.
Review
Strengths:
The topic covered in the paper is extremely timely: ToM, and MARL in general (specifically centralized training with decentralized execution (CTDE) MARL such as MADDPG), are topics receiving a great deal of interest, with many workshops and conference publications related to this work.
The paper makes a compelling case for SymmToM with both the motivating discussion and the results. The definitions of "Theory-of-Mind Agents" and "Symmetric Theory-of-Mind environments," while perhaps having some debatable components (e.g. is it important that each agent always get some information about every other agent, as seems to be implied?), are reasonable and in my mind cover many interesting cases. The definition of SymmToM follows cleanly from this. It is a very simple environment that seems to contain minimal aspects critical for it to count as a Symmetric Theory-of-Mind environment. Finally, the results make a compelling case that this is a useful benchmark. In particular, by tuning the number of agents and info pieces, we easily reach a regime in which Heuristic substantially dominates several reasonable baselines.
These baselines are extremely hand-crafted for the environment, which would detract if the point of the paper were to demonstrate their utility. But that is not the point: the point is that (1) there is a substantial delta between these and Heuristic, and (2) that they are so hand-crafted that I cannot think of a way in which more generic baselines with more sophisticated methods could reasonably be expected to do better. This means that there is demonstrably considerable room to improve.
The paper is well-written and easy to read, with useful figures.
Weaknesses:
It is important to think critically about whether SymmToM provides a robust test for ToM. Intuitively, several things are challenging about this environment: the need to estimate the belief/knowledge states of others, coordination (generating mutually beneficial behaviors between agents -- as Heuristic does in a very nontrivial way), and long-range planning (collecting all these info pieces before going to the recharge bases.
With regard to the need to estimate the belief/knowledge states of others, one critique is that this is a drastically simplified parametrization of belief/knowledge. That is, belief/knowledge is not about embodied/spacial/actionable knowledge about the environment, but rather it is abstracted away into discrete info pieces and thus lacks real-world nuance (e.g. one might try to estimate the location of something and know roughly where it is, and roughly where others think it is, but not have an exact sense of either; or one might be attempting to estimate others' goals). It is thus a bit unclear just how much this is capturing in our intuitive, psychological notion of theory of mind.
Coordination is a massive challenge in CTDE MARL. In a sense, the Heuristic baseline largely solves a coordination problem: with this sort of simple behavior, ToM estimation and long-term planning is seems relatively easy. What might be difficult is not necessarily ToM estimation but rather simply achieving this level of coordination between agents. Failing massive amounts of coordination, planning in this environment seems to be very challenging as well, easily being difficult for long-range planning algorithms simply because of the number of subgoals one needs to achieve in order to get the recharge reward.
A few things (not necessarily in the scope of this paper) could help clarify this.
(1) Perhaps easiest, a direct analysis of the baselines' capacity to estimate K, as well as a ceiling for K estimation, would be useful. If it turns out that the models are already estimating K well, then this is perhaps more a benchmark for other challenges, not ToM. (2) It would be helpful to see what humans do on this task. One would be to see how often humans, playing together for a while, converge to simple, effective cooperative behaviors such as Heuristic. (3) The other would be to see how well a human does when the other agents are not particularly coordinated. One possibility is that the amount of things needed to be kept track of in the higher a, c environments simply make for not particularly human-doable tasks. |
ICLR | Title
Learning Semantic Similarities for Prototypical Classifiers
Abstract
Recent metric learning approaches parametrize semantic similarity measures through the use of an encoder trained along with a similarity model, which operates over pairs of representations. We extend such a setting and enable its use in tasks including multi-class classification in order to tackle known issues observed in standard classifiers such as their lack of robustness to out-of-distribution data. We do so by further learning a set of class prototypes, each one representing a particular class. Training is carried out so that each encoded example is pushed towards the prototype corresponding to its class, and test instances are assigned to the class corresponding to the prototype they are closest to. We thus provide empirical evidence showing the proposed setting is able to match object recognition performance of standard classifiers on common benchmarks, while presenting much improved robustness to adversarial examples and distribution shifts. We further show such a model is effective for tasks other than classification, including those requiring pairwise comparisons such as verification and retrieval. Finally, we discuss a simple scheme for few-shot learning of new classes where only the set of prototypes needs to be updated, yielding competitive performance.
1 INTRODUCTION
Despite the performance boost observed by multi-class classifiers based on neural networks compared to alternative approaches, as evidenced since (Krizhevsky et al., 2012), it is now well-known that such a modeling framework suffers from shortcomings that limit its potential deployment in real-world applications. We highlight below some of such limitations:
• A noteworthy threat regarding the use of current classifiers is the existence of adversarial examples, as discussed originally by Szegedy et al. (2013) and Goodfellow et al. (2014). In fact, it is a known property of neural networks that it is possible to impose large variations in their outputs by slightly changing their inputs. Attackers might then exploit such a property to fool deployed models into making certain decisions they might benefit from. Several methods have been proposed in recent literature in order to fool state-of-the-art classifiers with changes to the input that are imperceptible to humans.
• The lack of robustness to distribution shifts across train and test data is a further issue that appears in practice and is known to affect performance of current classifiers. For example, an object recognizer trained on natural images will likely observe a performance degradation once test data consists of drawings from the same classes, for instance. Such a shift across train and test data sources is a direct violation of the i.i.d. assumption on top of which most of the supervised learning generalization guarantees are built within the empirical risk minimization framework. Recent literature in domain adaptation has introduced more general settings relaxing the i.i.d. assumption to some extent to help coping with situations found in practice. However, there’s still much room for improvement, as most approaches require data from a particular target data distribution. This requirement is still unpractical given that a large number of possible unseen test conditions might appear for a deployed model (Albuquerque et al., 2019).
• Yet another limitation is the case of small data samples since large classifiers in terms of parameter count require large amounts of data so as to achieve high performance. In
several practical situations, however, collecting (and labeling) large datasets is prohibitively costly. Moreover, standard classifiers are bounded to the label set they were presented to during training, while in practice one would ideally be able to extend a trained classifier to predict new classes observed after training. Transfer learning schemes thus appeared as a natural strategy to overcome both issues by enabling fast adaptation once data from novel sources is made available so that: (i)-one can leverage a pretrained model on large datasets and adapt it to the data of interest which is scarce; (ii)-a classifier does not need to be trained from scratch whenever new classes are taken into account. As such, devising approaches enabling inexpensive adaptation of trained classifiers to new data became a relevant research direction, often referred to as few-shot learning, yielding approaches such as meta-learning or learning to learn (Schmidhuber, 1987; Bengio et al., 1992; Ravi & Larochelle, 2016; Finn et al., 2017), as well as geometric methods (Koch et al., 2015; Vinyals et al., 2016; Snell et al., 2017; Sung et al., 2018).
In this contribution, our main goal is then to develop classification strategies that address to some extent the issues discussed above. As such, the research question we pose is whether one can define multi-class classification approaches which are more robust against adversaries and distribution shifts while supporting adaptation to novel classes, observed after training in small samples. We thus tackle such a problem using approaches which leverage both the set of methods commonly grouped under the term metric learning, as well as the geometric approaches discussed above for few-shot classification; prototypical networks in particular (Snell et al., 2017). In further detail, we focus on metric learning settings where both an encoder and a similarity or distance model are trained jointly (Koch et al., 2015; Garcia & Vogiatzis, 2019; Monteiro et al., 2020), but augment such a setting with a set of class prototypes used in order to assign points to classes.
Our method thus comprises three main components: (i)-an encoder that embeds data into a lower dimensional space; (ii)-a similarity model which maps a pair of concatenated representations into a similarity score, and finally (iii)-a list of class prototypes in which case each one summarizes a whole class into a vector in the embedding space. Based on said components, we can then devise different inference mechanisms depending on the task at hand. For the case of multi-class classification, for instance, at test time, one can predict the class of a particular test instance through measuring its similarity against each prototype and assigning it to the class whose prototype it is most similar to. Similarly, tasks relying on pairwise comparisons can be performed such as verification, i.e. comparing two data instances and determining whether they belong to the same class, or retrieval, i.e. comparing a test instance against a gallery and determining the k elements in the gallery the considered test instance is most similar to. Moreover, each time new classes appear, adapting the model consists of updating the list of prototypes only, while keeping the encoder and similarity unchanged, thus enabling fast adaptation and avoiding issues such as forgetting past classes or overfitting to the new ones. In summary, our contributions are as follows:
1. We introduce an alternative multi-class classification approach based on metric learning methods, which can match the performance of standard classifiers, while offering improved robustness against adversaries and distribution shifts with respect to observed train data.
2. The proposed setting is further shown to perform well in tasks involving pairwise comparison such as verification and image retrieval, thus providing an approach that allows a single model to be used across multiple tasks.
3. The proposed approach supports the inclusion of new classes appearing posterior to training, which we do by simply repartitioning the space using small data samples. We observed doing so yields a simple yet competitive mechanism for few-shot classification.
2 BACKGROUND AND RELATED WORK
Metric learning approaches are concerned with representing data in a metric space where semantic properties can be inferred using distances. The literature in this field can be classified in terms of two main groups of approaches trying to do so under two distinct settings, and we will refer to those as distance metric learning, introduced originally by Xing et al. (2003), and deep metric learning represented most notably by siamese networks (Bromley et al., 1994; Chopra et al., 2005; Hadsell et al., 2006). In the case of the former, one learns a so-called Mahalanobis distance which, given
x, y ∈ R^d, will have the form √((x − y)⊤W(x − y)), where W ∈ R^{d×d} is positive semidefinite. Learning is designed so that √((x − y)⊤W(x − y)) will be small for semantically close x and y. Several extensions of the setting introduced by Xing et al. (2003) were proposed (Shalev-Shwartz et al., 2004; Globerson & Roweis, 2006; Weinberger & Saul, 2009; Ying & Li, 2012). For the case of deep metric learning, on the other hand, one’s interest is to learn a non-linear encoder E : R^D → R^d, often with D ≫ d, so that some standard distance such as the one based on the L2 norm, ||E(x) − E(y)||2, will be small for semantically close x and y. While the two discussed settings are equivalent in their final goal, their focus is not the same: distance metric learning approaches focus on learning a semantically meaningful distance measure while deep metric learning methods focus on projecting the data onto a space where standard distances are meaningful.
A relatively recent research direction consists in the combination of the above described settings, i.e. jointly training an encoder along with a distance/similarity model. This is the case in (Koch et al., 2015), where a symmetric model was used to map the absolute difference of a pair of representations into a similarity score. In (Garcia & Vogiatzis, 2019), training of a distance/similarity model is done by imitation learning of cosine similarities measured between representations, which the authors claim simplifies training compared to the direct use of cosine scores. In (Pitis et al., 2020), the authors focus on distance models supporting asymmetric properties of the data, while still satisfying the triangle inequality. Learned Bregman divergences were evaluated by Cilingir et al. (2020), and completely unconstrained similarity models, in the sense that no property such as symmetry is imposed on the learned distance, were proposed in (Monteiro et al., 2020) for verification tasks. Learnable similarities parametrized by neural networks were further employed in (Wenliang et al., 2019; Liu et al., 2020) for the implementation of learned kernels, used to perform MMD-based (Gretton et al., 2012) two-sample tests. For the case of few-shot learning under geometric approaches, so-called prototypical networks (Snell et al., 2017) follow a similar idea to that of metric learning in the sense that training consists of building a metric space where distances are indicative of properties of the data. However, that setting introduced the idea of partitioning the said space using class prototypes, i.e. a set of vectors representing each class, thus enabling its use in classification tasks since one can assign a test instance to the class corresponding to its closest prototype. That approach was also used under few-shot classification settings, in which case a new partitioning of the space is computed once small samples corresponding to new classes are presented to the model. We thus propose a strategy to extend the setting in (Monteiro et al., 2020) and include a partitioning with prototypes in the learned pseudo metric space so that the final model can be used to perform tasks such as multi-class and few-shot classification, while still supporting tasks involving pairwise comparisons.
3 SOLVING DIFFERENT TASKS THROUGH LEARNED SIMILARITIES
Assume (x, y) represents instances from X × Y, where X ⊆ R^D is the data space while Y is the space of labels, which will always be a discrete set in the cases considered herein. We thus consider the setting where the following components are to be learned: an encoder E : R^D → R^d, a similarity model S : R^d × R^d → R, and a set of prototypes C ∈ R^{|Y|×d}. As further discussed below, these three components can then be used to perform different types of inference regarding properties of the underlying data, and thus solve different tasks.
3.1 TRAINING
Training is carried out to enforce the following properties: i-the similarity as measured between a particular example and the prototype corresponding to its class label should be high; ii-the similarities measured between examples from the same class should be high, while examples from different classes should yield a low similarity score. We thus design training objectives aimed at enforcing such properties. For the first property, we consider a training sample of size m and employ the standard multi-class cross-entropy criterion, but use the similarities measured between a training instance and each prototype as the set of logits, as opposed to output layers defined by an affine transformation, commonly employed in standard classifiers, i.e. we perform maximum likelihood estimation on the categorical conditional distribution defined by:
P(Y | x′) = softmax(S(E(x′), C1:|Y|)), (1)
where Ci, i ∈ [|Y|], indicates the prototype corresponding to class i. The corresponding training loss, denoted Lclass, will then be given by:
Lclass = − (1/m) ∑_{i=1}^{m} log ( e^{S(E(xi), Cyi)} / ∑_{j=1}^{|Y|} e^{S(E(xi), Cj)} ), (2)
where xi and yi indicate the i-th training example and its label. In order for the learned similarity to be meaningful for pairwise comparisons (i.e. the second property highlighted in the previous discussion), we make use of the following binary classification objective, used before in the settings discussed in (Auckenthaler et al., 2000) and (Monteiro et al., 2020), aimed at discriminating pairs of examples from the same and from different classes:
Lpair = − (1/|T+|) ∑_{x+ ∈ T+} log(σ(S(E(x+)))) − (1/|T−|) ∑_{x− ∈ T−} log(1 − σ(S(E(x−)))), (3)
where σ stands for the logistic function, and x+ and x− indicate pairs of examples denominated trials and denoted by T, i.e. T = {x′, x′′}. The sums are taken over the set of positive or target trials T+ obtained from the training sample, i.e. those for which x′ and x′′ belong to the same class, and the set of negative or non-target trials T−. We further define the application of the encoder over a trial as E(T) = {E(x′), E(x′′)}.
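For concreteness, the two objectives can be written in a few lines of PyTorch. This is a minimal sketch, not the authors' implementation: it assumes a similarity module sim that maps a concatenated pair of d-dimensional embeddings to a scalar score, and the function names are illustrative.

```python
import torch
import torch.nn.functional as F

def prototype_loss(sim, z, y, C):
    """L_class: cross-entropy over similarities between embeddings z (m x d)
    and the class prototypes C (num_classes x d)."""
    m, d = z.shape
    pairs = torch.cat([z.unsqueeze(1).expand(-1, C.size(0), -1),
                       C.unsqueeze(0).expand(m, -1, -1)], dim=-1)
    logits = sim(pairs.view(-1, 2 * d)).view(m, C.size(0))
    return F.cross_entropy(logits, y)

def pair_loss(sim, z_a, z_b, same_class):
    """L_pair: binary cross-entropy on trials; same_class is 1 for target trials."""
    scores = sim(torch.cat([z_a, z_b], dim=-1)).squeeze(-1)
    return F.binary_cross_entropy_with_logits(scores, same_class.float())
```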
Initializing and updating the list of prototypes: C is initialized randomly such that its entries are i.i.d. sampled from a standard Gaussian distribution. We then update C every iteration through a moving average given at iteration t by Ct = αCt−1 + (1 − α)C*t−1, where α ∈ [0, 1] is a hyperparameter, and C*t−1 is a copy of Ct−1 where the rows corresponding to classes observed in the current minibatch are substituted by the average representations of each such class.
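The moving-average update admits an equally short sketch; α is the symbol assumed here for the interpolation hyperparameter, and the function name is illustrative.

```python
import torch

def update_prototypes(C, z, y, alpha=0.99):
    """Move the prototypes of classes present in the minibatch toward the
    mean embedding of that class; other rows of C are left unchanged."""
    C_star = C.clone()
    for cls in y.unique():
        C_star[cls] = z[y == cls].mean(dim=0)
    return alpha * C + (1 - alpha) * C_star
```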
Practical details: We now describe the design choices made so as to implement the discussed components and minimize L. Both E and S are implemented as neural networks, while C is a matrix where each row represents a prototype for a particular class.
Training is carried out with stochastic gradient descent with gradients estimated over minibatches of training data. In order to compute Lpair, each minibatch has to contain multiple examples from the same class, otherwise T+ will be empty. We thus sample minibatches ensuring that is the case (c.f. appendix for further implementation details). We empirically observed that including a standard classification loss accelerates convergence across all evaluations performed. We thus include a dense output layer to allow for computation of such a loss, which we denote by Laux. Training is thus carried out to minimize the total loss L = Lclass + Lpair + Laux. A high-level training procedure is depicted in Algorithm 1 and illustrated in Figure 3 in the appendix.
Algorithm 1 Training procedure.
E, S = InitializeModels()
C = InitializePrototypes()
repeat
x, y = SampleMinibatch()
z = E(x)
C = UpdatePrototypes(z, y, C)
z+ = GetPositivePairs(z, y)
z− = GetNegativePairs(z, y)
y′ = DenseLayer(z)
L = Lpair + Lclass + Laux
E, S = UpdateRule(E, S, L)
until Maximum number of iterations reached
return E, S, C
3.2 TESTING
We now define the set of tasks one can tackle using trained E , S , and C along with the inference mechanisms employed for each such task.
Multi-class classification: For the case where one is given a test instance x′ and desires to determine its class label y′, it will be given by the following classifier:
argmax_{i ∈ [|Y|]} S(E(x′), Ci). (4)
Few-shot classification: If new classes are considered after training, repartitioning can be performed with few data points by creating a new set of prototypes C′ defined such that each entry corresponds to the average representation of each new class. Inference is thus performed following the scheme defined for multi-class classification.
Verification: Now assume one is given a trial {x′, x′′} and desires to determine whether their respective labels are such that y′ = y′′. One can then do the following:
S(E(x′), E(x′′)) > τ, (5)
where τ is a user-defined decision threshold.
Retrieval: Given a test instance x′ and a gallery of instances denoted by X = {x1, x2, ..., xn}, xi ∈ X for all i ∈ [n], determine k elements in X such that their labels match the underlying label y′ of x′. The result will thus be:
k-argmax_{x′′ ∈ X} S(E(x′), E(x′′)), (6)
where the operator k-argmax denotes repeating the argmax operator k times, removing the current result each time prior to the next argmax operation.
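The three inference mechanisms all reduce to scoring with the learned similarity. The sketch below is illustrative (single-example inputs, loop-based scoring); batching and the exact pairing of embeddings are assumptions rather than the paper's implementation.

```python
import torch

def classify(sim, E, x, C):
    """Expression 4: assign x to the class of its most similar prototype."""
    z = E(x.unsqueeze(0))                                  # 1 x d embedding
    scores = torch.stack([sim(torch.cat([z, c.unsqueeze(0)], dim=-1)).squeeze()
                          for c in C])
    return int(scores.argmax())

def verify(sim, E, x1, x2, tau):
    """Expression 5: same-class decision for a trial at threshold tau."""
    s = sim(torch.cat([E(x1.unsqueeze(0)), E(x2.unsqueeze(0))], dim=-1))
    return bool(s.item() > tau)

def retrieve(sim, E, x, gallery, k):
    """Expression 6: indices of the k most similar gallery elements."""
    zq = E(x.unsqueeze(0))
    scores = torch.stack([sim(torch.cat([zq, E(g.unsqueeze(0))], dim=-1)).squeeze()
                          for g in gallery])
    return scores.topk(k).indices.tolist()
```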
4 EVALUATION
Our goal is to devise a general framework which is able to perform different tasks. As such, the evaluation we decided to carry out consists of testing the proposed approach across a wide variety of tasks and modalities of data. Our evaluations consist of: multi-class classification, in which case we evaluate convolutional classifiers on MNIST (LeCun et al., 2010) and CIFAR-10 (Krizhevsky et al., 2009) and show improved robust accuracy. We further perform evaluation on object recognition tasks considering larger resolution images under domain shift. For that, we employ the standard PACS benchmark (Li et al., 2017) where we show that the prototypical classifiers with learned similarities introduced herein outperform recently introduced alternatives under domain shift, and mainly do so in the most challenging cases where a notable domain mismatch is observed (e.g. natural images vs. sketches). We then proceed to tasks that rely on pairwise comparisons of test instances and run evaluations on a large scale verification task on audio using the VoxCeleb corpus (Nagrani et al., 2017; Chung et al., 2018), and then evaluate our proposed approach on image retrieval tasks employing popular benchmarks such as CARS196 (Krause et al., 2013) and CUB200-2011 (Wah et al., 2011). Finally, the appendix contains evaluations discussing how to easily repartition the space so that new classes can be evaluated at test time, in which case we report experiments using miniImageNet (Vinyals et al., 2016). Ablations are further reported in the appendix using the full ImageNet (Deng et al., 2009) to show the importance of the use of the auxiliary classification loss.
We remark that the training procedure presented in Algorithm 1 is employed for training models used for all tasks discussed above, and no specialization to any task of interest is performed since we seek evidence regarding how effective the proposed approach is in yielding a general enough set of components (i.e. E , S , and C) which perform on par with or better than alternatives.
4.1 MULTI-CLASS CLASSIFICATION
4.1.1 ROBUSTNESS AGAINST ADVERSARIES
We report in Table 1 the accuracy obtained using convolutional classifiers trained on MNIST and CIFAR-10 considering both clean data as well as FGSM (Goodfellow et al., 2014) and PGD (Madry et al., 2017) attacks1 under L∞ norm budgets, and considering the white-box access model so that the attacker has full access to the target model. Models trained using Algorithm 1 are compared against previously proposed defense strategies. Specifically, adversarial training (AT) (Madry et al., 2017), adversarial logit pairing (ALP) (Kannan et al., 2018), triplet loss adversarial training (TLA) (Mao et al., 2019), and TRADES (Zhang et al., 2019) are considered for comparison. The results given by an undefended model as reported by Mao et al. (2019) are further included for reference.
1Attacks were implemented using FoolBox: https://foolbox.readthedocs.io/en/stable/ index.html
A standard LeNet and the wide residual architecture introduced in (Madry et al., 2017) were employed for the cases of MNIST and CIFAR-10, respectively, and an attack budget of 0.3 and 8/255 was considered when each such dataset was evaluated. We evaluated our models both with and without adversarial training, and report the results obtained when inference is performed using the scheme represented in expression 4, which we denote by SIM so as to indicate that the similarity model S is used for inference, as well as for the case where the auxiliary output layer used to compute Laux is used to predict the labels of test samples, which we indicate by DOL in reference to dense output layer. In order to have a full white-box access model, each such output layer is exposed to the attacker in each evaluation so that attacks are created accounting for the specific inference procedure that will be used for testing. Based on the reported results, we verify that similarity/distance based inference is inherently less affected by small norm adversarial perturbations, given that, for both MNIST and CIFAR-10, with our undefended models the robust accuracy across considered attacks was much higher for the SIM case than for DOL or the undefended standard classifier. With adversarial training, both inference mechanisms yield higher performance than considered alternatives against several attackers, and more importantly, without affecting the clean accuracy as much as previous methods.
4.1.2 ROBUSTNESS UNDER DOMAIN SHIFT
We now assess the performance of the proposed classification strategy once domain shifts across train and test data occur. We do so by making use of the PACS domain-generalization benchmark (Li et al., 2017) consisting of 224x224 RGB images distributed into 7 classes and originated from four different domains: Photo (P), Art painting (A), Cartoon (C), and Sketch (S). We thus follow the leave-one-domain-out evaluation protocol such that data from three out of the four available domains are used for training while evaluation is carried out on the data from the left out domain. A comparison is carried out with recent methods specifically designed to enable out-of-distribution generalization introduced in (Dou et al., 2019) and (Chattopadhyay et al., 2020), as well as with the results reported in (Gulrajani & Lopez-Paz, 2020) where standard classifiers were evaluated against domain generalization approaches. Experiments were carried out using a ResNet-50 (He et al., 2016) pretrained on ImageNet.
Considering the average performance once each domain is left out, we once again observe improved robustness once similarity-based classification is employed when compared to both standard classifiers and domain generalization approaches, which indicates such classifiers rely less on domain factors that might correlate with labels on training domains. We hypothesize such a feature comes
from the metric learning framework used to train our models, i.e. domain information is less helpful when trying to minimize the combination of Lclass and Lpair, which renders resulting classifiers less dependent on the underlying domains used at training time. A gap in performance in our favor can be particularly observed for the evaluation cases where domains corresponding to cartoons and sketches are left out, given that such domains present a large discrepancy compared to the natural images that compose the bulk of training data. For the case of photos, on the other hand, where the underlying data correspond to natural images, the standard classifier discussed in (Gulrajani & Lopez-Paz, 2020) outperforms our model.
4.2 VERIFICATION
In order to perform evaluations that rely on pairwise comparisons of test instances, we consider the verification setting where trials corresponding to pairs of test instances are presented to the model. Its task is then to decide whether the examples in the trial belong to the same class. We thus make use of the VoxCeleb corpus (Nagrani et al., 2017; Chung et al., 2018) which consists of a large scale set of audio clips collected from videos of interviews available online.
We compared models trained on the second release of the corpus, which is composed of audio recordings from 5994 different speakers, against a set of published results on the three test partitions made available along with that release: (i)-VoxCeleb1 Test set, which correspond to data obtained from 40 speakers, (ii)-VoxCeleb1-Extended, which is given by the complete first release of the corpus and contains 1251 speakers, and (iii)-VoxCeleb1-Hard, which is made up of a subset of the data from VoxCeleb1-Extended yielding trials known to be hard to distinguish. We highlight that the set of speakers represented in all test partitions is disjoint to that relative to train data. The encoder E in this case corresponds to the 1-dimensional convolutional model introduced by Snyder et al. (2018). Details on the audio pre-processing and feature extraction are included in the appendix.
Results are reported in terms of equal error rate (EER) in Table 3. EER corresponds to any coordinate of the point in the detection error tradeoff curve where false positive and miss detection rates match (the lower the better). In this case, we take advantage of the fact that cosine similarities can be further used to compare representations of test trials, and observed that combining them with the scores given by S, by simply summing both similarities, yields a further boost in performance.
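For concreteness, the sketch below computes an approximate EER from arrays of trial scores and same/different labels; it is an illustrative reconstruction of the metric rather than the evaluation code used for Table 3, and all names in it are hypothetical.

```python
import numpy as np

def equal_error_rate(scores, labels):
    """Approximate EER: sweep thresholds and return the operating point where
    the false-positive rate and the miss (false-negative) rate are closest."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    best = (1.0, 1.0)
    for t in np.sort(np.unique(scores)):
        decisions = scores >= t
        fpr = np.mean(decisions[labels == 0]) if np.any(labels == 0) else 0.0
        fnr = np.mean(~decisions[labels == 1]) if np.any(labels == 1) else 0.0
        if abs(fpr - fnr) < abs(best[0] - best[1]):
            best = (fpr, fnr)
    return 0.5 * (best[0] + best[1])

# toy usage: higher scores should indicate "same class" trials
print(equal_error_rate([0.9, 0.8, 0.3, 0.2], [1, 1, 0, 0]))  # -> 0.0
```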
4.3 IMAGE RETRIEVAL
We further verified the performance of the proposed approach on another set of tasks which require comparisons of pairs of examples. In this case, we considered the retrieval setting where a test example is compared against a gallery and k “similar” examples need to be selected from that gallery.
We thus make use of the CARS196 (Krause et al., 2013) and CUB200-2011 (Wah et al., 2011) datasets and closely follow the evaluation protocol discussed by Wu et al. (2017).
Results are reported in terms of Recall@k (Oh Song et al., 2016) (the higher the better), or R@k for short, and summarized in Figures 1 and 2, while the complete set of results is reported in Tables 7 and 8 in the appendix. Compared approaches consist of several metric learning methods designed for the retrieval problem. We use the indicators + and - to refer to the highest and lowest performances amongst the considered baselines. Results were obtained considering a ResNet-50 pretrained on ImageNet and fine tuned on each of the considered datasets. As a reference, we further report the results obtained by the pretrained model prior to fine tuning. We thus claim the proposed approach is competitive in that its performance lies close to the + line (which corresponds to a strong baseline using an ensemble of metrics approach (Sanakoyeu et al., 2019)) and outperforms most of the compared methods while using a much simpler and general training procedure requiring no special mining strategy of hard triplets, and using moderate batch sizes enabling practical training in single GPU hardware, as opposed to complex metric learning pipelines.
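As a reference for the metric itself, a minimal sketch of Recall@k under this protocol is given below; the plain score-matrix interface is an assumption made for illustration and does not correspond to the authors' evaluation scripts.

```python
import numpy as np

def recall_at_k(score_matrix, query_labels, gallery_labels, k):
    """score_matrix[i, j]: similarity between query i and gallery item j.
    A query counts as a hit if any of its top-k gallery items shares its label."""
    topk = np.argsort(-score_matrix, axis=1)[:, :k]
    hits = [
        int(any(gallery_labels[j] == q_lab for j in row))
        for row, q_lab in zip(topk, query_labels)
    ]
    return float(np.mean(hits))

scores = np.array([[0.9, 0.1, 0.4], [0.2, 0.8, 0.3]])
print(recall_at_k(scores, query_labels=[0, 1], gallery_labels=[0, 1, 1], k=1))  # -> 1.0
```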
Figure 1: R@k evaluation of proposed methods on the CARS196 dataset (x-axis: R@1 through R@16; y-axis: recall from 40% to 100%; curves: ImageNet-pretrained model, Ours, and the - / + baseline indicators).
Figure 2: R@k evaluation of proposed methods on the CUB200-2011 dataset (same axes and curves as Figure 1).
5 CONCLUSION
We introduced a metric learning scheme which enables different types of tasks to be performed using the same set of models: an encoder responsible for embedding data into a lower dimensional space, a similarity model which outputs a similarity score given a pair of embeddings, and a set of class prototypes where each one represents a class observed during training. We then presented empirical evidence showing such a setting yields improvements in long standing issues such as adversarial robustness, since small perturbations in norm were observed to have a lesser effect on distance-based inference compared to standard classifiers, as well as robustness against distribution shifts, which indicates the posed training strategy is more effective in avoiding models that rely on correlations between training domain factors and labels, since domain information is not as helpful for such training scheme as it can be for the case of maximum likelihood estimation with standard classifiers. Moreover, performance across a set of tasks such as verification and image retrieval further showed classifiers defined under the proposed scheme perform competitively or better than alternatives designed targeting their particular applications, thus representing a step towards defining models that fit in the one model to solve them all category.
In terms of future work, we conjecture that the scope of the proposed setting can be further enlarged by utilizing the kernel function given by K = S, thus defining a strategy to learn kernels tailored to the data of interest, similarly to past work such as (Wenliang et al., 2019) and (Liu et al., 2020). Doing so might then be effective for performing even further tasks such as non-parametric 2-sample tests based on MMD scores (Gretton et al., 2012), as well as outlier/novelty detection, in which case approaches such as one-class SVMs (Schölkopf et al., 2000) using the learned kernel can be considered. We further remark that, in order to define a Mercer kernel using K, one can then consider inducing symmetry through a kernel K_S defined by K_S(x′, x″) = f(K(x′, x″), K(x″, x′)), where f : R² → R is symmetric, e.g. f(·, ·) = max(·, ·). | 1. What is the main contribution of the paper regarding classification problems?
2. What are the strengths and weaknesses of the proposed approach, particularly in terms of its originality and quality?
3. Do you have any concerns or questions about the presented ideas, such as the choice of similarity function or the representation of new class prototypes?
4. How does the reviewer assess the significance and impact of the work, especially considering prior works in the field? | Review | Review
The authors propose to solve classification problems by
using nearest neighbor distances as features to the softmax.
augmenting the loss function with an additional similarity loss on example pairs induced from class labels.
Using the above the authors show a) robustness to adversarial/domain-shifted examples, b) retrieval and verification capabilities.
Clarity
I thank the authors for using understandable simple notations and making the key points of the work very clear.
Originality
The various ingredients proposed by the authors have been studied before, e.g. see [1] (and their references). Augmenting the feature space by using neighbors is the key idea in any instance-based learning approaches (e.g. Local linear regression) and I wasn't convinced that can count as a key contribution of this work.
Jointly training classification and similarity function has also been proposed before (e.g. see [2])
I agree that I couldn't find any work that explicitly combined both, though I'm viewing this as limited novelty.
Quality
While I liked the clear presentation, the authors could focus more on why some of the choices were relevant and how they affected the experiments. For example:
-- Why is a new class prototype represented only by the average of the neighbors? It's not the case that existing class prototypes have that property, i.e. it's not the optimal solution of the algorithm minimizing the loss function.
-- Have the authors tried just using the encoder learnt for classification directly for retrieval?
-- Does the choice of similarity function matter?
It would've been nice to understand what aspects contributed to better metrics (be it retrieval/classification/verification etc).
Significance
Given the above comments, I'm not convinced that is of sufficient significance for acceptance.
I'm reasonably confident in my review. I am, however, not familiar whether the datasets used are widely accepted benchmark datasets for these tasks.
[1] Combining instance-based learning and logistic regression for multilabel classification [2] Distance Metric Learning with Joint Representation Diversification |
ICLR | Title
Learning Semantic Similarities for Prototypical Classifiers
Abstract
Recent metric learning approaches parametrize semantic similarity measures through the use of an encoder trained along with a similarity model, which operates over pairs of representations. We extend such a setting and enable its use in tasks including multi-class classification in order to tackle known issues observed in standard classifiers such as their lack of robustness to out-of-distribution data. We do so by further learning a set of class prototypes, each one representing a particular class. Training is carried out so that each encoded example is pushed towards the prototype corresponding to its class, and test instances are assigned to the class corresponding to the prototype they are closest to. We thus provide empirical evidence showing the proposed setting is able to match object recognition performance of standard classifiers on common benchmarks, while presenting much improved robustness to adversarial examples and distribution shifts. We further show such a model is effective for tasks other than classification, including those requiring pairwise comparisons such as verification and retrieval. Finally, we discuss a simple scheme for few-shot learning of new classes where only the set of prototypes needs to be updated, yielding competitive performance.
1 INTRODUCTION
Despite the performance boost observed by multi-class classifiers based on neural networks compared to alternative approaches, as evidenced since (Krizhevsky et al., 2012), it is now well-known that such a modeling framework suffers from shortcomings that limit its potential deployment in real-world applications. We highlight below some of such limitations:
• A threat worth mentioning regarding the use of current classifiers is the existence of adversarial examples, as discussed originally by Szegedy et al. (2013) and Goodfellow et al. (2014). In fact, it is a known property of neural networks that it is possible to impose large variations in their outputs by slightly changing their inputs. Attackers might then exploit such a property to fool deployed models into making certain decisions they might benefit from. Several methods have been proposed in recent literature in order to fool state-of-the-art classifiers with changes to the input that are imperceptible to humans.
• The lack of robustness to distribution shifts across train and test data is a further issue that appears in practice and is known to affect performance of current classifiers. For example, an object recognizer trained on natural images will likely observe a performance degradation once test data consists of drawings from the same classes. Such a shift across train and test data sources is a direct violation of the i.i.d. assumption on top of which most of the supervised learning generalization guarantees are built within the empirical risk minimization framework. Recent literature in domain adaptation has introduced more general settings relaxing the i.i.d. assumption to some extent to help cope with situations found in practice. However, there’s still much room for improvement, as most approaches require data from a particular target data distribution. This requirement is still impractical given that a large number of possible unseen test conditions might appear for a deployed model (Albuquerque et al., 2019).
• Yet another limitation is the case of small data samples since large classifiers in terms of parameter count require large amounts of data so as to achieve high performance. In
several practical situations, however, collecting (and labeling) large datasets is prohibitively costly. Moreover, standard classifiers are bounded to the label set they were presented to during training, while in practice one would ideally be able to extend a trained classifier to predict new classes observed after training. Transfer learning schemes thus appeared as a natural strategy to overcome both issues by enabling fast adaptation once data from novel sources is made available so that: (i)-one can leverage a pretrained model on large datasets and adapt it to the data of interest which is scarce; (ii)-a classifier does not need to be trained from scratch whenever new classes are taken into account. As such, devising approaches enabling inexpensive adaptation of trained classifiers to new data became a relevant research direction, often referred to as few-shot learning, yielding approaches such as meta-learning or learning to learn (Schmidhuber, 1987; Bengio et al., 1992; Ravi & Larochelle, 2016; Finn et al., 2017), as well as geometric methods (Koch et al., 2015; Vinyals et al., 2016; Snell et al., 2017; Sung et al., 2018).
In this contribution, our main goal is then to develop classification strategies that address to some extent the issues discussed above. As such, the research question we pose is whether one can define multi-class classification approaches which are more robust against adversaries and distribution shifts while supporting adaptation to novel classes, observed after training in small samples. We thus tackle such a problem using approaches which leverage both the set of methods commonly grouped under the term metric learning, as well as the geometric approaches discussed above for few-shot classification; prototypical networks in particular (Snell et al., 2017). In further detail, we focus on metric learning settings where both an encoder and a similarity or distance model are trained jointly (Koch et al., 2015; Garcia & Vogiatzis, 2019; Monteiro et al., 2020), but augment such a setting with a set of class prototypes used in order to assign points to classes.
Our method thus comprises three main components: (i)-an encoder that embeds data into a lower dimensional space; (ii)-a similarity model which maps a pair of concatenated representations into a similarity score, and finally (iii)-a list of class prototypes in which case each one summarizes a whole class into a vector in the embedding space. Based on said components, we can then devise different inference mechanisms depending on the task at hand. For the case of multi-class classification, for instance, at test time, one can predict the class of a particular test instance through measuring its similarity against each prototype and assigning it to the class whose prototype it is most similar to. Similarly, tasks relying on pairwise comparisons can be performed such as verification, i.e. comparing two data instances and determining whether they belong to the same class, or retrieval, i.e. comparing a test instance against a gallery and determining the k elements in the gallery the considered test instance is most similar to. Moreover, each time new classes appear, adapting the model consists of updating the list of prototypes only, while keeping the encoder and similarity unchanged, thus enabling fast adaptation and avoiding issues such as forgetting past classes or overfitting to the new ones. In summary, our contributions are as follows:
1. We introduce an alternative multi-class classification approach based on metric learning methods, which can match the performance of standard classifiers, while offering improved robustness against adversaries and distribution shifts with respect to observed train data.
2. The proposed setting is further shown to perform well in tasks involving pairwise comparison such as verification and image retrieval, thus providing an approach that allows a single model to be used across multiple tasks.
3. The proposed approach supports the inclusion of new classes appearing posterior to training, which we do by simply repartitioning the space using small data samples. We observed doing so yields a simple yet competitive mechanism for few-shot classification.
2 BACKGROUND AND RELATED WORK
Metric learning approaches are concerned with representing data in a metric space where semantic properties can be inferred using distances. The literature in this field can be classified in terms of two main groups of approaches trying to do so under two distinct settings, and we will refer to those as distance metric learning, introduced originally by Xing et al. (2003), and deep metric learning represented most notably by siamese networks (Bromley et al., 1994; Chopra et al., 2005; Hadsell et al., 2006). In the case of the former, one learns a so-called Mahalanobis distance which, given
x, y ∈ R^d, will have the form √((x − y)ᵀ W (x − y)), where W ∈ R^{d×d} is positive semidefinite. Learning is designed so that √((x − y)ᵀ W (x − y)) will be small for semantically close x and y. Several extensions of the setting introduced by Xing et al. (2003) were proposed (Shalev-Shwartz et al., 2004; Globerson & Roweis, 2006; Weinberger & Saul, 2009; Ying & Li, 2012). For the case of deep metric learning, on the other hand, one’s interest is to learn a non-linear encoder E : R^D → R^d, and often D ≫ d, so that some standard distance such as the one based on the L2 norm ‖E(x) − E(y)‖₂ will be small for semantically close x and y. While the two discussed settings are equivalent in their final goal, their focus is not the same: distance metric learning approaches focus on learning a semantically meaningful distance measure while deep metric learning methods focus on projecting the data onto a space where standard distances are meaningful.
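To make the contrast concrete, the following small sketch (ours, purely illustrative) places a learnable Mahalanobis distance, parametrized as W = LᵀL so that W stays positive semidefinite, next to the embedding-space L2 distance used in deep metric learning; the toy encoder is a stand-in for a learned non-linear model.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4
L = rng.normal(size=(d, d))
W = L.T @ L  # positive semidefinite by construction

def mahalanobis(x, y, W):
    diff = x - y
    return float(np.sqrt(diff @ W @ diff))

def deep_metric(encode, x, y):
    # deep metric learning: a plain L2 distance, but taken in the encoder's output space
    return float(np.linalg.norm(encode(x) - encode(y)))

x, y = rng.normal(size=d), rng.normal(size=d)
encode = lambda v: np.tanh(v)  # stand-in for a learned non-linear encoder
print(mahalanobis(x, y, W), deep_metric(encode, x, y))
```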
A relatively recent research direction consists in the combination of the above described settings, i.e. jointly training an encoder along with a distance/similarity model. This is the case in (Koch et al., 2015) where a symmetric model was used to map the absolute difference of a pair of representations into a similarity score. In (Garcia & Vogiatzis, 2019), training of a distance/similarity model is done by imitation learning of cosine similarities measured between representations, which authors claim to simplify training compared to the direct use of cosine scores. In (Pitis et al., 2020), authors focus on distance models supporting asymmetric properties of the data, while still satisfying the triangle inequality. Learned Bregman divergences were evaluated by Cilingir et al. (2020), and completely unconstrained similarity models, in the sense that no property such as symmetry is imposed on the learned distance, were proposed in (Monteiro et al., 2020) for verification tasks. Learnable similarities parametrized by neural networks were further employed in (Wenliang et al., 2019; Liu et al., 2020) for the implementation of learned kernels, used to perform MMD-based (Gretton et al., 2012) 2-sample tests. For the case of few-shot learning under geometric approaches, so-called prototypical networks (Snell et al., 2017) follow a similar idea to that of metric learning in the sense that training consists of building a metric space where distances are indicative of properties of the data. However, that setting introduced the idea of partitioning the said space using class prototypes, i.e. a set of vectors representing each class, thus enabling its use for classification tasks since one can assign a test instance to the class corresponding to its closest prototype. That approach was also used under few-shot classification settings, in which case a new partitioning of the space is computed once small samples corresponding to new classes are presented to the model. We thus propose a strategy to extend the setting in (Monteiro et al., 2020) and include a partitioning with prototypes in the learned pseudo metric space so that the final model can be used to perform tasks such as multi-class and few-shot classification, while still supporting tasks involving pairwise comparisons.
3 SOLVING DIFFERENT TASKS THROUGH LEARNED SIMILARITIES
Assume (x, y) represents instances from X × Y, where X ⊆ R^D is the data space while Y is the space of labels, which will always be a discrete set in the cases considered herein. We thus consider the setting where the following components are to be learned: an encoder E : R^D → R^d, a similarity model S : R^d × R^d → R, and a set of prototypes C ∈ R^{|Y|×d}. As will be further discussed, such three components can then be used to perform different types of inference regarding properties of underlying data, and thus solve different tasks.
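A minimal sketch of the three components follows; it is an illustration only, and the PyTorch modules, layer sizes, and the helper name `score` are placeholders rather than the architectures used in the experiments.

```python
import torch
import torch.nn as nn

D, d, num_classes = 784, 64, 10  # hypothetical dimensions

encoder = nn.Sequential(            # E : R^D -> R^d
    nn.Linear(D, 256), nn.ReLU(), nn.Linear(256, d)
)
similarity = nn.Sequential(         # S : R^d x R^d -> R, fed the concatenated pair
    nn.Linear(2 * d, 128), nn.ReLU(), nn.Linear(128, 1)
)
prototypes = torch.randn(num_classes, d)  # C, one row per class

def score(z_a, z_b):
    """Similarity scores for two aligned batches of embeddings."""
    return similarity(torch.cat([z_a, z_b], dim=-1)).squeeze(-1)
```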
3.1 TRAINING
Training is carried out to enforce the following properties: i-the similarity as measured between a particular example and the prototype corresponding to its class labels should be high; ii-the similarities measured between examples from the same class should be high, while examples from different classes should yield a low similarity score. We thus design training objectives aimed at enforcing such properties. For the first property, we consider a training sample of size m and employ the standard multi-class cross-entropy criterion, but use the similarity measured between a training instance and each prototype as the set of logits as opposed to output layers defined by an affine transformation, commonly employed in standard classifiers, i.e. we perform maximum likelihood estimation on the categorical conditional distribution defined by:
P(Y | x′) = softmax(S(E(x′), C_{1:|Y|})),  (1)
where Ci, i ∈ [|Y|], indicates the prototype corresponding to class i. The corresponding training loss, denoted Lclass, will then be given by:
Lclass = −(1/m) Σ_{i=1..m} log [ exp(S(E(x_i), C_{y_i})) / Σ_{j=1..|Y|} exp(S(E(x_i), C_j)) ],  (2)
where xi and yi indicate the i-th training example. In order for the learned similarity to be meaningful for pairwise comparisons (i.e. the second property highlighted in the previous discussion), we make use of the following binary classification objective, used before in the settings discussed in (Auckenthaler et al., 2000) and (Monteiro et al., 2020), aimed at discriminating pairs of examples from the same and from different classes:
Lpair = −(1/|T+|) Σ_{x+ ∈ T+} log(σ(S(E(x+)))) − (1/|T−|) Σ_{x− ∈ T−} log(1 − σ(S(E(x−)))),  (3)
where σ stands for the logistic function, and x+ and x− indicate pairs of examples denominated trials and denoted by T, i.e. T = {x′, x″}. The sums are taken over the set of positive or target trials T+ obtained from the training sample, i.e. those for which x′ and x″ belong to the same class, and the set of negative or non-target trials T−. We further define the application of the encoder over a trial as E(T) = {E(x′), E(x″)}.
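The two objectives can be written compactly as in the sketch below, which builds on the hypothetical `score` helper and prototype matrix introduced in the previous sketch; it is our reading of Eqs. (2) and (3), not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def class_loss(z, y, prototypes, score):
    """Cross-entropy over similarities to the |Y| prototypes, as in Eq. (2)."""
    n, c = z.size(0), prototypes.size(0)
    logits = score(
        z.unsqueeze(1).expand(n, c, -1).reshape(n * c, -1),
        prototypes.unsqueeze(0).expand(n, c, -1).reshape(n * c, -1),
    ).view(n, c)
    return F.cross_entropy(logits, y)

def pair_loss(z, y, score):
    """Binary cross-entropy on same-class vs. different-class pairs, as in Eq. (3)."""
    same = (y.unsqueeze(0) == y.unsqueeze(1))
    idx = torch.triu_indices(len(y), len(y), offset=1)  # all unordered pairs in the batch
    s = score(z[idx[0]], z[idx[1]])
    target = same[idx[0], idx[1]].float()
    return F.binary_cross_entropy_with_logits(s, target)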
Initializing and updating the list of prototypes: C is initialized randomly such that its entries are i.i.d. sampled from a standard Gaussian distribution. We thus update C every iteration through a moving average given at iteration t by: C_t = α C_{t−1} + (1 − α) C*_{t−1}, where α ∈ [0, 1] is a hyperparameter, and C*_{t−1} is a copy of C_{t−1} where the rows corresponding to classes observed in the current minibatch are substituted by the average representations of each such class.
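A sketch of this update follows; the mixing weight is written here as `alpha`, and the per-class mean replacement mirrors the description above.

```python
import torch

def update_prototypes(C, z, y, alpha=0.9):
    """Moving-average prototype update: classes present in the minibatch have their
    row replaced by the batch-mean embedding before mixing with weight alpha."""
    C_star = C.clone()
    for cls in y.unique():
        C_star[cls] = z[y == cls].mean(dim=0)
    return alpha * C + (1 - alpha) * C_star
```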
Practical details: We now describe the design choices made so as to implement the discussed components and minimize L. Both E and S are implemented as neural networks, while C is a matrix where each row represents a prototype for a particular class.
Training is carried out with stochastic gradient descent with gradients estimated over minibatches of training data. In order to compute Lpair, each minibatch has to contain multiple examples from the same class, otherwise T+ will be empty. We thus sample minibatches ensuring that is the case (c.f. appendix for further implementation details). We empirically observed that including a standard classification loss accelerates convergence across all evaluations performed. We thus include a dense output layer to allow for computation of such a loss, which we denote by Laux. Training is thus carried out to minimize the total loss L = Lclass + Lpair + Laux. A high-level training procedure is depicted in Algorithm 1 and illustrated in Figure 3 in the appendix.
Algorithm 1 Training procedure.
E, S = InitializeModels()
C = InitializePrototypes()
repeat
    x, y = SampleMinibatch()
    z = E(x)
    C = UpdatePrototypes(z, y, C)
    z+ = GetPositivePairs(z, y)
    z− = GetNegativePairs(z, y)
    y′ = DenseLayer(z)
    L = Lpair + Lclass + Laux
    E, S = UpdateRule(E, S, L)
until Maximum number of iterations reached
return E, S, C
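Putting the pieces together, a single SGD iteration might look like the sketch below, which reuses the hypothetical `encoder`, `similarity`, `score`, loss, and prototype-update helpers defined in the earlier sketches; the auxiliary dense layer, optimizer settings, and toy minibatch are placeholders.

```python
import torch
import torch.nn.functional as F

aux_head = torch.nn.Linear(d, num_classes)  # dense output layer used for L_aux
params = list(encoder.parameters()) + list(similarity.parameters()) + list(aux_head.parameters())
opt = torch.optim.SGD(params, lr=1e-2, momentum=0.9)

def train_step(x, y, prototypes):
    z = encoder(x)
    # update prototypes from detached embeddings so the EMA does not backpropagate
    prototypes = update_prototypes(prototypes, z.detach(), y)
    loss = (
        class_loss(z, y, prototypes, score)
        + pair_loss(z, y, score)
        + F.cross_entropy(aux_head(z), y)   # L_aux
    )
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item(), prototypes

x = torch.randn(32, D)                      # toy minibatch; 32 > num_classes guarantees positive pairs
y = torch.randint(0, num_classes, (32,))
loss_value, prototypes = train_step(x, y, prototypes)
```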
3.2 TESTING
We now define the set of tasks one can tackle using trained E , S , and C along with the inference mechanisms employed for each such task.
Multi-class classification: For the case where one is given a test instance x′ and desires to determine its class label y′, it will be given by the following classifier:
argmax_{i ∈ [|Y|]} S(E(x′), C_i).  (4)
Few-shot classification: If new classes are considered after training, repartitioning can be performed with few data points by creating a new set of prototypes C′ defined such that each entry corresponds to the average representation of each new class. Inference is thus performed following the scheme defined for multi-class classification.
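Both inference modes can be sketched as follows, again reusing the hypothetical `encoder`, `score`, and prototype tensors from the earlier sketches; this is an illustration of Eq. (4) and of the repartitioning step, not released code.

```python
import torch

@torch.no_grad()
def classify(x, prototypes):
    """Multi-class prediction, Eq. (4): argmax over prototype similarities."""
    z = encoder(x)                                   # (n, d)
    n, c = z.size(0), prototypes.size(0)
    logits = score(
        z.repeat_interleave(c, dim=0),               # each embedding repeated c times
        prototypes.repeat(n, 1),                     # prototypes tiled n times
    ).view(n, c)
    return logits.argmax(dim=1)

@torch.no_grad()
def repartition(support_x, support_y, num_new_classes):
    """Few-shot: new prototypes C' are per-class means of the support embeddings."""
    z = encoder(support_x)
    return torch.stack([z[support_y == k].mean(dim=0) for k in range(num_new_classes)])
```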
Verification: Now assume one is given a trial {x′, x″} and desires to determine whether their respective labels are such that y′ = y″. One can then do the following:
S(E(x′), E(x″)) > τ,  (5)
where τ is a user-defined decision threshold.
Retrieval: Given a test instance x′ and a gallery of instances denoted by X = {x_1, x_2, ..., x_n} : x_i ∈ X ∀ i ∈ [n], determine k elements in X such that their labels match the underlying label y′ of x′. The result will thus be:
k-argmax_{x″ ∈ X} S(E(x′), E(x″)),  (6)
where the operator k-argmax denotes repeating the argmax operator k times, removing the current result each time prior to the next argmax operation.
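The two pairwise-inference modes admit a similarly short sketch, with the threshold and k left as user-set arguments and the hypothetical `encoder`/`score` components assumed from the earlier sketches.

```python
import torch

@torch.no_grad()
def verify(x_a, x_b, tau=0.0):
    """Eq. (5): accept each trial whose similarity exceeds the threshold tau."""
    return score(encoder(x_a), encoder(x_b)) > tau

@torch.no_grad()
def retrieve(x_query, gallery, k=5):
    """Eq. (6): indices of the k gallery items most similar to the query."""
    zq = encoder(x_query)               # (1, d)
    zg = encoder(gallery)               # (n, d)
    s = score(zq.expand_as(zg), zg)     # (n,)
    return torch.topk(s, k).indices
```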
4 EVALUATION
Our goal is to devise a general framework which is able to perform different tasks. As such, the evaluation we decided to carry out consists of testing the proposed approach across a wide variety of tasks and modalities of data. Our evaluations consist of: multi-class classification, in which case we evaluate convolutional classifiers on MNIST (LeCun et al., 2010) and CIFAR-10 (Krizhevsky et al., 2009) and show improved robust accuracy. We further perform evaluation on object recognition tasks considering larger resolution images under domain shift. For that, we employ the standard PACS benchmark (Li et al., 2017) where we show that the prototypical classifiers with learned similarities introduced herein outperform recently introduced alternatives under domain shift, and mainly do so in the most challenging cases where a notable domain mismatch is observed (e.g. natural images vs. sketches). We then proceed to tasks that rely on pairwise comparisons of test instances and run evaluations on a large scale verification task on audio using the VoxCeleb corpus (Nagrani et al., 2017; Chung et al., 2018), and then evaluate our proposed approach on image retrieval tasks employing popular benchmarks such as CARS196 (Krause et al., 2013) and CUB200-2011 (Wah et al., 2011). Finally, the appendix contains evaluations discussing how to easily repartition the space so that new classes can be evaluated at test time, in which case we report experiments using miniImageNet (Vinyals et al., 2016). Ablations are further reported in the appendix using the full ImageNet (Deng et al., 2009) to show the importance of the use of the auxiliary classification loss.
We remark that the training procedure presented in Algorithm 1 is employed for training models used for all tasks discussed above, and no specialization to any task of interest is performed since we seek evidence regarding how effective the proposed approach is in yielding a general enough set of components (i.e. E, S, and C) which perform on par or better than alternatives.
4.1 MULTI-CLASS CLASSIFICATION
4.1.1 ROBUSTNESS AGAINST ADVERSARIES
We report in Table 1 the accuracy obtained using convolutional classifiers trained on MNIST and CIFAR-10 considering both clean data as well as FGSM (Goodfellow et al., 2014) and PGD (Madry et al., 2017) attacks1 under L∞ norm budgets, and considering the white-box access model so that the attacker has full access to the target model. Models trained using Algorithm 1 are compared against previously proposed defense strategies. Specifically, adversarial training (AT) (Madry et al., 2017), adversarial logit pairing (ALP) (Kannan et al., 2018), triplet loss adversarial training (TLA) (Mao et al., 2019), and TRADES (Zhang et al., 2019) are considered for comparison. The results given by an undefended model as reported by Mao et al. (2019) are further included for reference.
1Attacks were implemented using FoolBox: https://foolbox.readthedocs.io/en/stable/index.html
A standard LeNet and the wide residual architecture introduced in (Madry et al., 2017) were employed for the cases of MNIST and CIFAR-10, respectively, and an attack budget of 0.3 and 8/255 was considered when each such dataset was evaluated. We evaluated our models both with and without adversarial training, and report the results obtained when inference is performed using the scheme represented in expression 4, which we denote by SIM so as to indicate that the similarity model S is used for inference, as well as for the case where the auxiliary output layer used to compute Laux is used to predict the label of test samples, which we indicate by DOL in a reference to dense output layer. In order to have a full white-box access model, each such output layer is exposed to the attacker in each evaluation so that attacks are created accounting for the specific inference procedure that will be used for testing. Based on the reported results, we verify that similarity/distance based inference is inherently less affected by small-norm adversarial perturbations, given that for both MNIST and CIFAR-10 the robust accuracy of our undefended models across the considered attacks was much higher for the SIM case than for DOL as well as the undefended standard classifier. With adversarial training, both inference mechanisms yield higher performance than the considered alternatives against several attackers, and more importantly, without affecting the clean accuracy as much as previous methods.
4.1.2 ROBUSTNESS UNDER DOMAIN SHIFT
We now assess the performance of the proposed classification strategy once domain shifts across train and test data occur. We do so by making use of the PACS domain-generalization benchmark (Li et al., 2017) consisting of 224x224 RGB images distributed into 7 classes and originated from four different domains: Photo (P), Art painting (A), Cartoon (C), and Sketch (S). We thus follow the leave-one-domain-out evaluation protocol such that data from three out of the four available domains are used for training while evaluation is carried out on the data from the left out domain. A comparison is carried out with recent methods specifically designed to enable out-of-distribution generalization introduced in (Dou et al., 2019) and (Chattopadhyay et al., 2020), as well as with the results reported in (Gulrajani & Lopez-Paz, 2020) where standard classifiers were evaluated against domain generalization approaches. Experiments were carried out using a ResNet-50 (He et al., 2016) pretrained on ImageNet.
Considering the average performance once each domain is left out, we once again observe improved robustness once similarity-based classification is employed when compared to both standard classifiers and domain generalization approaches, which indicates such classifiers rely less on domain factors that might correlate with labels on training domains. We hypothesize such a feature comes
from the metric learning framework used to train our models, i.e. domain information is less helpful when trying to minimize the combination of Lclass and Lpair, which renders resulting classifiers less dependent on the underlying domains used at training time. A gap in performance in our favor can be particularly observed for the evaluation cases where domains corresponding to cartoons and sketches are left out, given that such domains present a large discrepancy compared to the natural images that compose the bulk of training data. For the case of photos, on the other hand, where the underlying data correspond to natural images, the standard classifier discussed in (Gulrajani & Lopez-Paz, 2020) outperforms our model.
4.2 VERIFICATION
In order to perform evaluations that rely on pairwise comparisons of test instances, we consider the verification setting where trials corresponding to pairs of test instances are presented to the model. Its task is then to decide whether the examples in the trial belong to the same class. We thus make use of the VoxCeleb corpus (Nagrani et al., 2017; Chung et al., 2018) which consists of a large scale set of audio clips collected from videos of interviews available online.
We compared models trained on the second release of the corpus, which is composed of audio recordings from 5994 different speakers, against a set of published results on the three test partitions made available along with that release: (i)-VoxCeleb1 Test set, which correspond to data obtained from 40 speakers, (ii)-VoxCeleb1-Extended, which is given by the complete first release of the corpus and contains 1251 speakers, and (iii)-VoxCeleb1-Hard, which is made up of a subset of the data from VoxCeleb1-Extended yielding trials known to be hard to distinguish. We highlight that the set of speakers represented in all test partitions is disjoint to that relative to train data. The encoder E in this case corresponds to the 1-dimensional convolutional model introduced by Snyder et al. (2018). Details on the audio pre-processing and feature extraction are included in the appendix.
Results are reported in terms of equal error rate (EER) in Table 3. EER corresponds to any coordinate of the point in the detection error tradeoff curve where false positive and miss detection rates match (the lower the better). In this case, we take advantage of the fact that cosine similarities can be further used to compare representations of test trials, and observed that combining them with the scores given by S, by simply summing both similarities, yields a further boost in performance.
4.3 IMAGE RETRIEVAL
We further verified the performance of the proposed approach on another set of tasks which require comparisons of pairs of examples. In this case, we considered the retrieval setting where a test example is compared against a gallery and k “similar” examples need to be selected from that gallery.
We thus make use of the CARS196 (Krause et al., 2013) and CUB200-2011 (Wah et al., 2011) datasets and closely follow the evaluation protocol discussed by Wu et al. (2017).
Results are reported in terms of Recall@k (Oh Song et al., 2016) (the higher the better), or R@k for short, and summarized in Figures 1 and 2, while the complete set of results is reported in Tables 7 and 8 in the appendix. Compared approaches consist of several metric learning methods designed for the retrieval problem. We use the indicators + and - to refer to the highest and lowest performances amongst the considered baselines. Results were obtained considering a ResNet-50 pretrained on ImageNet and fine tuned on each of the considered datasets. As a reference, we further report the results obtained by the pretrained model prior to fine tuning. We thus claim the proposed approach is competitive in that its performance lies close to the + line (which corresponds to a strong baseline using an ensemble of metrics approach (Sanakoyeu et al., 2019)) and outperforms most of the compared methods while using a much simpler and general training procedure requiring no special mining strategy of hard triplets, and using moderate batch sizes enabling practical training in single GPU hardware, as opposed to complex metric learning pipelines.
Figure 1: R@k evaluation of proposed methods on the CARS196 dataset (x-axis: R@1 through R@16; y-axis: recall from 40% to 100%; curves: ImageNet-pretrained model, Ours, and the - / + baseline indicators).
Figure 2: R@k evaluation of proposed methods on the CUB200-2011 dataset (same axes and curves as Figure 1).
5 CONCLUSION
We introduced a metric learning scheme which enables different types of tasks to be performed using the same set of models: an encoder responsible for embedding data into a lower dimensional space, a similarity model which outputs a similarity score given a pair of embeddings, and a set of class prototypes where each one represents a class observed during training. We then presented empirical evidence showing such a setting yields improvements in long standing issues such as adversarial robustness, since small perturbations in norm were observed to have a lesser effect on distance-based inference compared to standard classifiers, as well as robustness against distribution shifts, which indicates the posed training strategy is more effective in avoiding models that rely on correlations between training domain factors and labels, since domain information is not as helpful for such training scheme as it can be for the case of maximum likelihood estimation with standard classifiers. Moreover, performance across a set of tasks such as verification and image retrieval further showed classifiers defined under the proposed scheme perform competitively or better than alternatives designed targeting their particular applications, thus representing a step towards defining models that fit in the one model to solve them all category.
In terms of future work, we conjecture that the scope of the proposed setting can be further enlarged by utilizing the kernel function given by K = S, thus defining a strategy to learn kernels tailored to the data of interest, similarly to past work such as (Wenliang et al., 2019) and (Liu et al., 2020). Doing so might then be effective for performing even further tasks such as non-parametric 2-sample tests based on MMD scores (Gretton et al., 2012), as well as outlier/novelty detection, in which case approaches such as one-class SVMs (Schölkopf et al., 2000) using the learned kernel can be considered. We further remark that, in order to define a Mercer kernel using K, one can then consider inducing symmetry through a kernel K_S defined by K_S(x′, x″) = f(K(x′, x″), K(x″, x′)), where f : R² → R is symmetric, e.g. f(·, ·) = max(·, ·).
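As a small illustration of this symmetrization, and assuming the hypothetical `encoder`/`score` components sketched earlier, the kernel with f = max can be written as:

```python
import torch

@torch.no_grad()
def symmetric_kernel(x_a, x_b):
    """K_S(x', x'') = max(K(x', x''), K(x'', x')) with K = S composed with E."""
    za, zb = encoder(x_a), encoder(x_b)
    return torch.maximum(score(za, zb), score(zb, za))
```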
2. What are the strengths of the proposed approach?
3. What are the weaknesses of the paper regarding its claims and comparisons with other works?
4. How do you assess the novelty and significance of the proposed solution?
5. Are there any concerns about the experimental results presented in the paper? | Review | Review
Pros:
It is an interesting idea to consider a general model that is capable of handling multiple types of tasks.
Experiments show its performance on many types of tasks against non-general purpose models.
Concerns:
However, the tasks chosen by this paper are highly correlated, and it seems straightforward to combine them. Furthermore, the paper does not point out the underlying challenges, which prevent us from using naive solutions, and the relations between the challenges and the proposed solution. For example, verification and image retrieval both usually contain a procedure to compare image pairs. Once we have a deep metric learning model to encode and compare two instances, like images or audio clips, we can perform both verification and retrieval using a single model. Also, class prototypes can also be constructed by averaging encoded image embeddings. Once we have class prototypes, we can perform both multi-class and few-shot classification. Therefore, instead of simply claiming that most deep metric learning methods are only for image retrieval and presuming that they will not work on other tasks, I think they should be treated as baselines for comparisons with the proposed method. Also, Gulrajani & Lopez-Paz (2020) is regarded as a general purpose method in Table 2. Thus, it should also be compared in other tasks to show the effectiveness of the proposed solution as a general model.
For the model design, the proposed method is related to (and partially overlaps in idea with) proxy-based metric learning methods, like SoftTriple [1] and Proxy-NCA [2], that utilize class embeddings to attract similar images and push away dissimilar images. However, this paper does not cite or compare against these related papers.
For the experimental results, the proposed method achieves the most outstanding performance on multi-classification with domain-shifts. However, the improvements are not significant in other tasks, especially in image retrieval and multi-class classification with adversarial attacks. [1] 2019 ICCV, Qian et al., “SoftTriple Loss: Deep Metric Learning Without Triplet Sampling.” [2] 2017 ICCV, Movshovitz-Attias et al., “No Fuss Distance Metric Learning using Proxies.” |
ICLR | Title
Learning Semantic Similarities for Prototypical Classifiers
Abstract
Recent metric learning approaches parametrize semantic similarity measures through the use of an encoder trained along with a similarity model, which operates over pairs of representations. We extend such a setting and enable its use in tasks including multi-class classification in order to tackle known issues observed in standard classifiers such as their lack of robustness to out-of-distribution data. We do so by further learning a set of class prototypes, each one representing a particular class. Training is carried out so that each encoded example is pushed towards the prototype corresponding to its class, and test instances are assigned to the class corresponding to the prototype they are closest to. We thus provide empirical evidence showing the proposed setting is able to match object recognition performance of standard classifiers on common benchmarks, while presenting much improved robustness to adversarial examples and distribution shifts. We further show such a model is effective for tasks other than classification, including those requiring pairwise comparisons such as verification and retrieval. Finally, we discuss a simple scheme for few-shot learning of new classes where only the set of prototypes needs to be updated, yielding competitive performance.
1 INTRODUCTION
Despite the performance boost observed by multi-class classifiers based on neural networks compared to alternative approaches, as evidenced since (Krizhevsky et al., 2012), it is now well-known that such a modeling framework suffers from shortcomings that limit its potential deployment in real-world applications. We highlight below some of such limitations:
• A worth mentioning threat regarding the use of current classifiers is the existence of adversarial examples, as discussed originally by Szegedy et al. (2013) and Goodfellow et al. (2014). In fact, it is a known property of neural networks that it is possible to impose large variations in their outputs by slightly changing their inputs. Attackers might then exploit such property to fool deployed models into making certain decisions they might benefit from. Several methods have been proposed in recent literature in order to fool state-of-theart classifiers with changes to the input that are imperceptible to humans.
• The lack of robustness to distribution shifts across train and test data is a further issue that appears in practice and is known to affect performance of current classifiers. For example, an object recognizer trained on natural images will likely observe a performance degradation once test data consists of drawings from the same classes, for instance. Such a shift across train and test data sources is a direct violation of the i.i.d. assumption on top of which most of the supervised learning generalization guarantees are built within the empirical risk minimization framework. Recent literature in domain adaptation has introduced more general settings relaxing the i.i.d. assumption to some extent to help coping with situations found in practice. However, there’s still much room for improvement, as most approaches require data from a particular target data distribution. This requirement is still unpractical given that a large number of possible unseen test conditions might appear for a deployed model (Albuquerque et al., 2019).
• Yet another limitation is the case of small data samples since large classifiers in terms of parameter count require large amounts of data so as to achieve high performance. In
several practical situations, however, collecting (and labeling) large datasets is prohibitively costly. Moreover, standard classifiers are bounded to the label set they were presented to during training, while in practice one would ideally be able to extend a trained classifier to predict new classes observed after training. Transfer learning schemes thus appeared as a natural strategy to overcome both issues by enabling fast adaptation once data from novel sources is made available so that: (i)-one can leverage a pretrained model on large datasets and adapt it to the data of interest which is scarce; (ii)-a classifier does not need to be trained from scratch whenever new classes are taken into account. As such, devising approaches enabling inexpensive adaptation of trained classifiers to new data became a relevant research direction, often referred to as few-shot learning, yielding approaches such as meta-learning or learning to learn (Schmidhuber, 1987; Bengio et al., 1992; Ravi & Larochelle, 2016; Finn et al., 2017), as well as geometric methods (Koch et al., 2015; Vinyals et al., 2016; Snell et al., 2017; Sung et al., 2018).
In this contribution, our main goal is then to develop classification strategies that address to some extent the issues discussed above. As such, the research question we pose is whether one can define multi-class classification approaches which are more robust against adversaries and distribution shifts while supporting adaptation to novel classes, observed after training in small samples. We thus tackle such problem using approaches which leverage both the set of methods commonly grouped under the term metric learning, as well as the geometric approaches discussed above for few-shotclassification; prototypical networks in particular (Snell et al., 2017). In further detail, we focus on metric learning settings where both an encoder and a similarity or distance models are trained jointly (Koch et al., 2015; Garcia & Vogiatzis, 2019; Monteiro et al., 2020), but augment such setting with a set of class prototypes used in order to assign points to classes.
Our method thus comprises three main components: (i)-an encoder that embeds data into a lower dimensional space; (ii)-a similarity model which maps a pair of concatenated representations into a similarity score, and finally (iii)-a list of class prototypes in which case each one summarizes a whole class into a vector in the embedding space. Based on said components, we can then devise different inference mechanisms depending on the task at hand. For the case of multi-class classification, for instance, at test time, one can predict the class of a particular test instance through measuring its similarity against each prototype and assigning it to the class whose prototype it is most similar to. Similarly, tasks relying on pairwise comparisons can be performed such as verification, i.e. comparing two data instances and determining whether they belong to the same class, or retrieval, i.e. comparing a test instance against a gallery and determining the k elements in the gallery the considered test instance is most similar to. Moreover, each time new classes appear, adapting the model consists of updating the list of prototypes only, while keeping the encoder and similarity unchanged, thus enabling fast adaptation and avoiding issues such as forgetting past classes or overfitting to the new ones. In summary, our contributions are as follows:
1. We introduce an alternative multi-class classification approach based on metric learning methods, which can match the performance of standard classifiers, while offering improved robustness against adversaries and distribution shifts with respect to observed train data.
2. The proposed setting is further shown to perform well in tasks involving pairwise comparison such as verification and image retrieval, thus providing an approach that allows a single model to be used across multiple tasks.
3. The proposed approach supports the inclusion of new classes appearing posterior to training, which we do by simply repartitioning the space using small data samples. We observed doing so yields a simple yet competitive mechanism for few-shot classification.
2 BACKGROUND AND RELATED WORK
Metric learning approaches are concerned with representing data in a metric space where semantic properties can be inferred using distances. The literature in this field can be classified in terms of two main groups of approaches trying to do so under two distinct settings, and we will refer to those as distance metric learning, introduced originally by Xing et al. (2003), and deep metric learning represented most notably by siamese networks (Bromley et al., 1994; Chopra et al., 2005; Hadsell et al., 2006). In the case of the former, one learns a so-called Mahalanobis distance which, given
x, y 2 Rd, will have the form: p (x y)|W (x y), where W 2 Rd⇥d is positive semidefinite. Learning is designed so that p (x y)|W (x y) will be small for semantically close x and y. Several extensions of the setting introduced by Xing et al. (2003) were proposed (Shalev-Shwartz et al., 2004; Globerson & Roweis, 2006; Weinberger & Saul, 2009; Ying & Li, 2012). For the case of deep metric learning, on the other hand, one’s interest is to learn a non-linear encoder E : RD 7! Rd, and often D d, so that some standard distance such as the one based on the L2 norm ||E(x) E(y)||2 will be small for semantically close x and y. While the two discussed settings are equivalent in their final goal, their focus is not the same: distance metric learning approaches focus on learning a semantically meaningful distance measure while deep metric learning methods focus on projecting the data onto a space where standard distances are meaningful.
A relatively recent research direction consists in the combination of the above described settings, i.e. jointly training an encoder along with a distance/similarity model. This is the case in (Koch et al., 2015) where a symmetric model was used to map the absolute difference of a pair of representations into a similarity score. In (Garcia & Vogiatzis, 2019), training of a distance/similarity model is done by imitation learning of cosine similarities measured between representations, which authors claim to simplify training compared to the direct use of cosine scores. In (Pitis et al., 2020), authors focus on distance models supporting asymmetric properties of the data, while still satisfying the triangle inequality. Learned Bregman divergences were evaluated by Cilingir et al. (2020), and completely unconstrained similarity models, in the sense that any property such as symmetry is imposed in the learned distance, were proposed in (Monteiro et al., 2020) for verification tasks. Learnable similarities parametrized by neural networks were further employed in (Wenliang et al., 2019; Liu et al., 2020) for the implementation of learned kernels, used to perform MMD-based (Gretton et al., 2012) 2-sample tests. For the case of few-shot learning under geometric approaches, so-called prototypical networks (Snell et al., 2017) follow a similar idea to that of metric learning in the sense that training consists of building a metric space where distances are indicative of properties of the data. However, that setting introduced the idea of partitioning the said space using class prototypes, i.e. a set of vectors representing each class, thus enabling its use to classification tasks since one can assign a test instance to the class corresponding to its closest prototype. That approach was also used under few-shot classification settings, in which case a new partitioning of the space is computed once small samples corresponding to new classes are presented to the model. We thus propose a strategy to extend the setting in (Monteiro et al., 2020) and include a partitioning with prototypes in the learned pseudo metric space so that the final model can be used to perform tasks such as multi-class and few-shot classification, while still supporting tasks involving pairwise comparisons.
3 SOLVING DIFFERENT TASKS THROUGH LEARNED SIMILARITIES
Assume (x, y) represents instances from X ⇥ Y , where X ✓ RD is the data space while Y is the space of labels, which will be always a discrete set in the cases considered herein. We thus consider the setting where the following components are to be learned: an encoder E : RD 7! Rd, a similarity model S : Rd ⇥ Rd 7! R, and a set of prototypes C 2 R|Y|⇥d. As it will be further discussed, such three components can be then used to perform different types of inference regarding properties of underlying data, and thus solve different tasks.
3.1 TRAINING
Training is carried out to enforce the following properties: i-the similarity as measured between a particular example and the prototype corresponding to its class labels should be high; ii-the similarities measured between examples from the same class should be high, while examples from different classes should yield a low similarity score. We thus design training objectives aimed at enforcing such properties. For the first property, we consider a training sample of size m and employ the standard multi-class cross-entropy criterion, but use the similarity measured between a training instance and each prototype as the set of logits as opposed to output layers defined by an affine transformation, commonly employed in standard classifiers, i.e. we perform maximum likelihood estimation on the categorical conditional distribution defined by:
P (Y|x0) = softmax(S(E(x0), C1:|Y|)), (1)
where Ci, i 2 [|Y|], indicates the prototype corresponding to class i. The corresponding training loss, denoted Lclass, will be then given by:
L_class = −(1/m) Σ_{i=1}^{m} log [ e^{S(E(x_i), C_{y_i})} / Σ_{j=1}^{|Y|} e^{S(E(x_i), C_j)} ],  (2)
where x_i and y_i indicate the i-th training example and its label. In order for the learned similarity to be meaningful for pairwise comparisons (i.e. the second property highlighted in the previous discussion), we make use of the following binary classification objective, used before in the settings discussed in (Auckenthaler et al., 2000) and (Monteiro et al., 2020), aimed at discriminating pairs of examples from the same and from different classes:
L_pair = −(1/|T_+|) Σ_{x_+ ∈ T_+} log(σ(S(E(x_+)))) − (1/|T_−|) Σ_{x_− ∈ T_−} log(1 − σ(S(E(x_−)))),  (3)
where σ stands for the logistic function, and x_+ and x_− indicate pairs of examples denominated trials and denoted by T, i.e. T = {x′, x″}. The sums are taken over the set of positive or target trials T_+ obtained from the training sample, i.e. those for which x′ and x″ belong to the same class, and the set of negative or non-target trials T_−. We further define the application of the encoder over a trial as E(T) = {E(x′), E(x″)}.
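To make the two objectives concrete, a minimal PyTorch-style sketch is given below. The batched interface of S (an MLP scoring a concatenated pair of embeddings) and the all-pairs construction of the trial sets within a minibatch are illustrative assumptions, not the paper's exact implementation:

import torch
import torch.nn as nn
import torch.nn.functional as F

class PairSimilarity(nn.Module):
    # Toy similarity model: scores a batch of embedding pairs with an MLP.
    def __init__(self, d):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2 * d, 64), nn.ReLU(), nn.Linear(64, 1))
    def forward(self, a, b):          # a, b: (n, d) -> (n,)
        return self.net(torch.cat([a, b], dim=-1)).squeeze(-1)

def class_loss(S, z, C, y):
    # Eq. (2): cross-entropy with prototype similarities used as logits.
    m, k = z.size(0), C.size(0)
    logits = S(z.repeat_interleave(k, dim=0), C.repeat(m, 1)).view(m, k)
    return F.cross_entropy(logits, y)

def pair_loss(S, z, y):
    # Eq. (3): separate averages over target (same-class) and non-target trials.
    i, j = torch.triu_indices(len(y), len(y), offset=1)
    scores, same = S(z[i], z[j]), (y[i] == y[j])
    pos = F.logsigmoid(scores[same]).mean() if same.any() else scores.sum() * 0
    neg = F.logsigmoid(-scores[~same]).mean() if (~same).any() else scores.sum() * 0
    return -(pos + neg)

In this form the two terms match Eqs. (2) and (3) up to the choice of how trials are enumerated within a batch.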
Initializing and updating the list of prototypes: C is initialized randomly such that its entries are i.i.d. sampled from a standard Gaussian distribution. We then update C at every iteration through a moving average given at iteration t by: C_t = αC_{t−1} + (1 − α)C*_{t−1}, where α ∈ [0, 1] is a hyperparameter, and C*_{t−1} is a copy of C_{t−1} where the rows corresponding to classes observed in the current minibatch are substituted by the average representations of each such class.
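A minimal sketch of this moving-average update follows; the tensor interface and the particular value of α are assumptions for illustration:

import torch

@torch.no_grad()
def update_prototypes(C, z, y, alpha=0.9):
    # C_t = alpha * C_{t-1} + (1 - alpha) * C*_{t-1}, where C* replaces the rows of
    # classes present in the minibatch by the mean embedding of that class.
    C_star = C.clone()
    for c in y.unique():
        C_star[c] = z[y == c].mean(dim=0)
    return alpha * C + (1 - alpha) * C_star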
Practical details: We now describe the design choices made so as to implement the discussed components and minimize L. Both E and S are implemented as neural networks, while C is a matrix where each row represents a prototype for a particular class.
Training is carried out with stochastic gradient descent, with gradients estimated over minibatches of training data. In order to compute L_pair, each minibatch has to contain multiple examples from the same class, otherwise T_+ will be empty. We thus sample minibatches ensuring that this is the case (cf. appendix for further implementation details). We empirically observed that including a standard classification loss accelerates convergence across all evaluations performed. We thus include a dense output layer to allow for computation of such a loss, which we denote by L_aux. Training is thus carried out to minimize the total loss L = L_class + L_pair + L_aux. A high-level training procedure is depicted in Algorithm 1 and illustrated in Figure 3 in the appendix.
Algorithm 1 Training procedure.
E, S = InitializeModels()
C = InitializePrototypes()
repeat
  x, y = SampleMinibatch()
  z = E(x)
  C = UpdatePrototypes(z, y, C)
  z_+ = GetPositivePairs(z, y)
  z_− = GetNegativePairs(z, y)
  y′ = DenseLayer(z)
  L = L_pair + L_class + L_aux
  E, S = UpdateRule(E, S, L)
until Maximum number of iterations reached
return E, S, C
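One simple way to sample the minibatches mentioned above so that T_+ is never empty is to draw a fixed number of classes and a fixed number of examples per class for every batch; the sketch below is one such sampler and is not taken from the released implementation:

import random
from collections import defaultdict

def class_balanced_batches(labels, classes_per_batch=8, samples_per_class=4):
    # Yield index lists such that every batch contains several examples per sampled class.
    by_class = defaultdict(list)
    for idx, y in enumerate(labels):
        by_class[y].append(idx)
    class_ids = list(by_class)
    while True:
        chosen = random.sample(class_ids, k=min(classes_per_batch, len(class_ids)))
        batch = []
        for c in chosen:
            pool = by_class[c]
            if len(pool) >= samples_per_class:
                batch += random.sample(pool, k=samples_per_class)
            else:
                batch += random.choices(pool, k=samples_per_class)
        yield batch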
3.2 TESTING
We now define the set of tasks one can tackle using trained E , S , and C along with the inference mechanisms employed for each such task.
Multi-class classification: For the case where one is given a test instance x′ and desires to determine its class label y′, it will be given by the following classifier:
argmax_{i ∈ [|Y|]} S(E(x′), C_i).  (4)
Few-shot classification: If new classes are considered after training, repartitioning can be performed with few data points by creating a new set of prototypes C′, defined such that each entry corresponds to the average representation of one of the new classes. Inference is then performed following the scheme defined for multi-class classification.
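A sketch of both inference modes, assuming the same batched interface for S as in the earlier sketch; building C′ as per-class mean support embeddings follows the repartitioning step described above:

import torch

def classify(S, E, x, C):
    # Expression (4): assign each instance to the class of its most similar prototype.
    z = E(x)                                            # (n, d)
    n, k = z.size(0), C.size(0)
    sims = S(z.repeat_interleave(k, dim=0), C.repeat(n, 1)).view(n, k)
    return sims.argmax(dim=1)

@torch.no_grad()
def build_new_prototypes(E, support_x, support_y, num_new_classes):
    # Few-shot repartitioning: one prototype per new class, the mean support embedding.
    z = E(support_x)
    return torch.stack([z[support_y == c].mean(dim=0) for c in range(num_new_classes)])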
Verification: Now assume one is given a trial {x′, x″} and desires to determine whether their respective labels are such that y′ = y″. One can then do the following:
S(E(x′), E(x″)) > τ,  (5)
where τ is a user-defined decision threshold.
Retrieval: Given a test instance x′ and a gallery of instances denoted by X = {x_1, x_2, ..., x_n}, x_i ∈ X ∀ i ∈ [n], determine k elements in X such that their labels match the underlying label y′ of x′. The result will thus be:
k-argmax_{x″ ∈ X} S(E(x′), E(x″)),  (6)
where the operator k-argmax denotes repeating the argmax operator k times, removing the current result each time prior to the next argmax operation.
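The repeated-argmax definition above is equivalent to taking the top-k scores, so retrieval can be sketched directly with topk; the batched interface of S is again an assumption:

import torch

def retrieve(S, E, query, gallery, k=5):
    # Expression (6): indices of the k gallery items most similar to the query.
    zq, zg = E(query), E(gallery)                       # (1, d) and (n, d)
    scores = S(zq.expand_as(zg), zg)                    # (n,)
    return torch.topk(scores, k=min(k, scores.numel())).indices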
4 EVALUATION
Our goal is to devise a general framework which is able to perform different tasks. As such, the evaluation we decided to carry out consists of testing the proposed approach across a wide variety of tasks and modalities of data. Our evaluations consist of: multi-class classification, in which case we evaluate convolutional classifiers on MNIST (LeCun et al., 2010) and CIFAR-10 (Krizhevsky et al., 2009) and show improved robust accuracy. We further perform evaluation on object recognition tasks considering larger resolution images under domain shift. For that, we employ the standard PACS benchmark (Li et al., 2017) where we show that the prototypical classifiers with learned similarities introduced herein outperform recently introduced alternatives under domain shift, and mainly do so in the most challenging cases where a notable domain mismatch is observed (e.g. natural images vs. sketches). We then proceed to tasks that rely on pairwise comparisons of test instances and run evaluations on a large scale verification task on audio using the VoxCeleb corpus (Nagrani et al., 2017; Chung et al., 2018), and then evaluate our proposed approach on image retrieval tasks employing popular benchmarks such as CARS196 (Krause et al., 2013) and CUB200-2011 (Wah et al., 2011). Finally, the appendix contains evaluations discussing how to easily repartition the space so that new classes can be evaluated at test time, in which case we report experiments using miniImageNet (Vinyals et al., 2016). Ablations are further reported in the appendix using the full ImageNet (Deng et al., 2009) to show the importance of the use of the auxiliary classification loss.
We remark that the training procedure presented in Algorithm 1 is employed for training the models used for all tasks discussed above, and no specialization to any task of interest is performed, since we seek evidence regarding how effective the proposed approach is in yielding a general enough set of components (i.e. E, S, and C) which perform on par with or better than alternatives.
4.1 MULTI-CLASS CLASSIFICATION
4.1.1 ROBUSTNESS AGAINST ADVERSARIES
We report in Table 1 the accuracy obtained using convolutional classifiers trained on MNIST and CIFAR-10 considering both clean data as well as FGSM (Goodfellow et al., 2014) and PGD (Madry et al., 2017) attacks1 under L∞ norm budgets, and considering the white-box access model so that the attacker has full access to the target model. Models trained using Algorithm 1 are compared against previously proposed defense strategies. Specifically, adversarial training (AT) (Madry et al., 2017), adversarial logit pairing (ALP) (Kannan et al., 2018), triplet loss adversarial training (TLA) (Mao et al., 2019), and TRADES (Zhang et al., 2019) are considered for comparison. The results given by an undefended model as reported by Mao et al. (2019) are further included for reference.
1Attacks were implemented using FoolBox: https://foolbox.readthedocs.io/en/stable/ index.html
A standard LeNet and the wide residual architecture introduced in (Madry et al., 2017) were employed for the cases of MNIST and CIFAR-10, respectively, and attack budgets of 0.3 and 8/255 were considered when each such dataset was evaluated. We evaluated our models both with and without adversarial training, and report the results obtained when inference is performed using the scheme represented in expression 4, which we denote by SIM so as to indicate that the similarity model S is used for inference, as well as for the case where the auxiliary output layer used to compute L_aux is used to predict the labels of test samples, which we indicate by DOL in reference to dense output layer. In order to have a full white-box access model, each such output layer is exposed to the attacker in each evaluation, so that attacks are created accounting for the specific inference procedure that will be used for testing. Based on the reported results, we verify that similarity/distance based inference is inherently less affected by small norm adversarial perturbations: for both MNIST and CIFAR-10, with our undefended models, the robust accuracy across the considered attacks was much higher in the SIM case as compared to DOL as well as the undefended standard classifier. With adversarial training, both inference mechanisms yield higher performance than the considered alternatives against several attackers, and more importantly, without affecting the clean accuracy as much as previous methods.
4.1.2 ROBUSTNESS UNDER DOMAIN SHIFT
We now assess the performance of the proposed classification strategy once domain shifts across train and test data occur. We do so by making use of the PACS domain-generalization benchmark (Li et al., 2017) consisting of 224x224 RGB images distributed into 7 classes and originated from four different domains: Photo (P), Art painting (A), Cartoon (C), and Sketch (S). We thus follow the leave-one-domain-out evaluation protocol such that data from three out of the four available domains are used for training while evaluation is carried out on the data from the left out domain. A comparison is carried out with recent methods specifically designed to enable out-of-distribution generalization introduced in (Dou et al., 2019) and (Chattopadhyay et al., 2020), as well as with the results reported in (Gulrajani & Lopez-Paz, 2020) where standard classifiers were evaluated against domain generalization approaches. Experiments were carried out using a ResNet-50 (He et al., 2016) pretrained on ImageNet.
Considering the average performance once each domain is left out, we once again observe improved robustness once similarity-based classification is employed when compared to both standard classifiers and domain generalization approaches, which indicates such classifiers rely less on domain factors that might correlate with labels on training domains. We hypothesize such a feature comes
from the metric learning framework used to train our models, i.e. domain information is less helpful when trying to minimize the combination of L_class and L_pair, which renders the resulting classifiers less dependent on the underlying domains used at training time. A gap in performance in our favor can be particularly observed for the evaluation cases where the domains corresponding to cartoons and sketches are left out, given that such domains present a large discrepancy compared to the natural images that compose the bulk of the training data. For the photo domain, on the other hand, where the underlying data correspond to natural images, the standard classifier discussed in (Gulrajani & Lopez-Paz, 2020) outperforms our model.
4.2 VERIFICATION
In order to perform evaluations that rely on pairwise comparisons of test instances, we consider the verification setting where trials corresponding to pairs of test instances are presented to the model. Its task is then to decide whether the examples in the trial belong to the same class. We thus make use of the VoxCeleb corpus (Nagrani et al., 2017; Chung et al., 2018) which consists of a large scale set of audio clips collected from videos of interviews available online.
We compared models trained on the second release of the corpus, which is composed of audio recordings from 5994 different speakers, against a set of published results on the three test partitions made available along with that release: (i) VoxCeleb1 Test set, which corresponds to data obtained from 40 speakers, (ii) VoxCeleb1-Extended, which is given by the complete first release of the corpus and contains 1251 speakers, and (iii) VoxCeleb1-Hard, which is made up of a subset of the data from VoxCeleb1-Extended yielding trials known to be hard to distinguish. We highlight that the set of speakers represented in all test partitions is disjoint from that of the training data. The encoder E in this case corresponds to the 1-dimensional convolutional model introduced by Snyder et al. (2018). Details on the audio pre-processing and feature extraction are included in the appendix.
Results are reported in terms of equal error rate (EER) in Table 3. EER corresponds to either coordinate of the point in the detection error tradeoff curve where the false positive and miss detection rates match (the lower the better). In this case, we take advantage of the fact that cosine similarities can be further used to compare representations of test trials, and observed that combining them with the scores given by S, by simply summing both similarities, yields a further boost in performance.
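For reference, EER can be computed from trial scores as in the minimal numpy sketch below (interpolation at the crossing point is omitted):

import numpy as np

def equal_error_rate(scores, labels):
    # labels: 1 for target (same-class) trials, 0 for non-target trials.
    order = np.argsort(-np.asarray(scores, dtype=float))
    labels = np.asarray(labels)[order]
    n_pos, n_neg = labels.sum(), (1 - labels).sum()
    fn = n_pos - np.cumsum(labels)          # misses when accepting only the top-i trials
    fp = np.cumsum(1 - labels)              # false alarms among the accepted trials
    fnr, fpr = fn / n_pos, fp / n_neg
    idx = np.argmin(np.abs(fnr - fpr))
    return float((fnr[idx] + fpr[idx]) / 2)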
4.3 IMAGE RETRIEVAL
We further verified the performance of the proposed approach on another set of tasks which require comparisons of pairs of examples. In this case, we considered the retrieval setting where a test example is compared against a gallery and k “similar” examples need to be selected from that gallery.
We thus make use of the CARS196 (Krause et al., 2013) and CUB200-2011 (Wah et al., 2011) datasets and closely follow the evaluation protocol discussed by Wu et al. (2017).
Results are reported in terms of Recall@k (Oh Song et al., 2016) (the higher the better), or R@k for short, and summarized in Figures 1 and 2, while the complete set of results is reported in Tables 7 and 8 in the appendix. Compared approaches consist of several metric learning methods designed for the retrieval problem. We use the indicators + and - to refer to the highest and lowest performances amongst the considered baselines. Results were obtained considering a ResNet-50 pretrained on ImageNet and fine-tuned on each of the considered datasets. As a reference, we further report the results obtained by the pretrained model prior to fine-tuning. We thus claim the proposed approach is competitive, in that its performance lies close to the + line (which corresponds to a strong baseline using an ensemble-of-metrics approach (Sanakoyeu et al., 2019)) and outperforms most of the compared methods, while using a much simpler and more general training procedure requiring no special mining strategy for hard triplets and using moderate batch sizes, enabling practical training on single-GPU hardware, as opposed to complex metric learning pipelines.
Figure 1: R@K evaluation of proposed methods on the CARS196 dataset.
Figure 2: R@K evaluation of proposed methods on the CUB200-2011 dataset.
5 CONCLUSION
We introduced a metric learning scheme which enables different types of tasks to be performed using the same set of models: an encoder responsible for embedding data into a lower dimensional space, a similarity model which outputs a similarity score given a pair of embeddings, and a set of class prototypes, each representing a class observed during training. We then presented empirical evidence showing such a setting yields improvements on long-standing issues such as adversarial robustness, since small-norm perturbations were observed to have a lesser effect on distance-based inference compared to standard classifiers, as well as robustness against distribution shifts, which indicates the proposed training strategy is more effective in avoiding models that rely on correlations between training domain factors and labels, since domain information is not as helpful for such a training scheme as it can be for maximum likelihood estimation with standard classifiers. Moreover, performance across a set of tasks such as verification and image retrieval further showed that classifiers defined under the proposed scheme perform competitively or better than alternatives designed specifically for those applications, thus representing a step towards defining models that fit in the one-model-to-solve-them-all category.
In terms of future work, we conjecture that the scope of the proposed setting can be further enlarged by utilizing the kernel function given by K = S, thus defining a strategy to learn kernels tailored to the data of interest, similarly to past work such as (Wenliang et al., 2019) and (Liu et al., 2020). Doing so might then be effective for performing even further tasks such as non-parametric 2-sample tests based on MMD scores (Gretton et al., 2012), as well as outlier/novelty detection, in which case approaches such as one-class SVMs (Schölkopf et al., 2000) using the learned kernel can be considered. We further remark that, in order to define a Mercer kernel using K, one can consider inducing symmetry through a kernel K_S defined by K_S = f(K(x′, x″), K(x″, x′)), where f : R² → R is symmetric, e.g. f(·, ·) = max(·, ·). | 1. What is the main contribution of the paper, and how does it extend the popular prototypical network for few-shot learning?
2. How does the proposed method incorporate a learned similarity function, triplet ranking loss, and vanilla fully-connected layer on top of the learned representations?
3. How does the reviewer assess the originality and significance of the work, and what are some related works in the literature?
4. What are the weaknesses of the quantitative results presented in the paper, and how do they fail to support the model choices and claimed contributions?
5. What are some missing references that could provide additional context for the reader? | Review | Review
This work tries to tackle the following question: if there is a multi-class classification approach which is more robust again adversaries and distribution shifts while supporting adaptation to novel classes. The solution proposed by the authors extends the popular prototypical network for few-shot learning with three modifications: 1) the proposed method uses a learned similarity function instead of a fixed distance function (squared Euclidean distance), 2) the incorporation of the triplet ranking loss, and 3) the vanilla fully-connected layer on top of the learned representations. The authors advocate the proposed method as a general solution to not only different problems including classification, verification and retrieval but also offering improved robustness against adversaries and distribution shifts. Although few-shot learning and inclusion of new classes was also listed as one of the contributions, but this can already be fulfilled by the original prototypical network.
In terms of originality and significance, this work is incremental. The extension of the prototypical network to multi-class classification problems has been considered in the literature (for example [2]) and a different way of learning the ``"prototypes" is also considered ([4,5,6] for example) where they are learned as parameters instead of class centroids. Importantly, they have shown that this method can be learned individually without the help from the auxiliary loss introduced in Section 3.1. On the other hand, the incorporation of the pair loss seems mainly for making the method more suitable for verification or retrieval without much relationship with the prototypical loss. However, [3] managed to marry this two together and showed improved results.
The quantitative results are not very convincing. First, there is no comparison between the direction application of prototypical network (as in [2]) with the proposed method to justify the incorporation of learned similarity, pair and auxiliary losses. The only comparison is in the Table 6 in the supplementary material. But it is hard to understand what "Ours - Cosine" and "Ours - Cosine + SIM" mean. More importantly, the authors should show the necessity of using them in order to be general-purpose. Second, the proposed method does not show promising robustness against adversarial attacks in Table 1. Indeed, it shows better defense comparing to the undefended model, but the performance still lags behind simple techniques like nearest neighbor and data augmentation. Third, there is no comparison with many existing metric learning methods which leverage the prototypical network ([3] for example).
In summary, the proposed method combines different techniques together trying to provide a general-purpose solution to multiple problems but fails to provide convincing quantitative results to support the model choices and the claimed contributions.
Missing references:
nearest neighbor could improve the robustness of pretrained image classifier, including [1] E. Orhan. A simple cache model for image recognition. NeurIPS 2018.
metric learning with prototypical loss, including [2] J. Wang, K.-C. Wang, M.T. Law, F. Rudzicz, and M. Brudno. Centroid-based deep metric learning for speaker recognition. ICASSP 2019, and [3] G. Doras, and G. Peeters. A prototypical triplet loss for cover detection. ICASSP 2020.
cosine classifiers, including [4] S. Gidaris, and N. Komodakis. Dynamic few-shot visual learning without forgetting. CVPR 2018, and [5] H. Qi, M. Brown, and D.G. Lowe. Low-shot learning with imprinted weights. CVPR 2018, and [6] S. Gidaris, A. Bursuc, N. Komodakis, P. Perez, and M. Cord. Boosting few-shot visual learning with self-supervision. ICCV 2019. |
ICLR | Title
Sample and Computation Redistribution for Efficient Face Detection
Abstract
Although tremendous strides have been made in uncontrolled face detection, accurate face detection with a low computation cost remains an open challenge. In this paper, we point out that computation distribution and scale augmentation are the keys to detecting small faces from low-resolution images. Motivated by these observations, we introduce two simple but effective methods: (1) Computation Redistribution (CR), which reallocates the computation between the backbone, neck and head of the model; and (2) Sample Redistribution (SR), which augments training samples for the most needed stages. The proposed Sample and Computation Redistribution for Face Detection (SCRFD) is implemented by a random search in a meticulously designed search space. Extensive experiments conducted on WIDER FACE demonstrate the state-of-the-art accuracy-efficiency trade-off for the proposed SCRFD family across a wide range of compute regimes. In particular, SCRFD-34GF outperforms the best competitor, TinaFace, by 4.78% (AP at hard set) while being more than 3× faster on GPUs with VGA-resolution images. Code is available at: https://github.com/deepinsight/insightface/ tree/master/detection/scrfd.
1 INTRODUCTION
Face detection is a long-standing problem in computer vision with many applications, such as face alignment (Bulat & Tzimiropoulos, 2017; Deng et al., 2019b), face reconstruction (Feng et al., 2018; Gecer et al., 2021), face attribute analysis (Zhang et al., 2018; Pan et al., 2018), and face recognition (Schroff et al., 2015; Deng et al., 2019a; 2020a). Following the pioneering work of (Viola & Jones, 2004), numerous face detection algorithms have been designed. Among them, the single-shot anchor-based approaches (Najibi et al., 2017; Zhang et al., 2017b; Tang et al., 2018; Li et al., 2019; Ming et al., 2019; Deng et al., 2020b; Liu et al., 2020; Zhu et al., 2020) have recently demonstrated very promising performance. In particular, on the most challenging face detection dataset, WIDER FACE (Yang et al., 2016), the average precision (AP) on its hard validation set has been boosted to 93.4% by TinaFace (Zhu et al., 2020).
Even though TinaFace (Zhu et al., 2020) achieves impressive results on unconstrained face detection, it employs large-scale (e.g. 1,650 pixels) testing, which consumes huge amounts of computational resources. In addition, the TinaFace design is based on a generic object detector (i.e. RetinaNet (Lin et al., 2017b)), directly taking the classification network as the backbone, tiling dense anchors on the multi-scale feature maps (i.e. P2 to P7 of the neck), and adopting heavy head designs. Without considering the priors of faces, the network design of TinaFace is thus redundant and sub-optimal.
One approach to optimizing such networks' performance is computation redistribution. Since directly taking the backbone of the classification network for object detection is sub-optimal, the recent CR-NAS (Liang et al., 2020) reallocates the computation across different resolutions to obtain a more balanced Effective Receptive Field (ERF), leading to higher detection performance. In BFbox (Liu & Tang, 2020), a face-appropriate search space is designed, based on the observation of the scale distribution gap between general object detection and face detection. In ASFD (Zhang et al., 2020a), a differential architecture search is employed to discover optimized feature enhance modules for efficient multi-scale feature fusion and context enhancement. Even though (Liu & Tang, 2020; Zhang et al., 2020a) have realized the limitation of directly applying general backbone, neck and head settings to face detection, CR-NAS (Liang et al., 2020) only focuses the optimization on the backbone, BFbox (Liu & Tang, 2020) neglects the optimization of the head, and ASFD (Zhang et al., 2020a) only explores the best design for the neck.
∗denotes equal contribution and corresponding author. InsightFace is a nonprofit Github project for 2D and 3D face analysis.
Another optimization approach is sample redistribution across different scales. Due to the extremely large scale variance of faces in real-world scenarios, different scale augmentation strategies are employed to introduce scale adaptation into the face detector. The most widely used scale augmentation approaches include random square crop (Zhang et al., 2017b; Deng et al., 2020b; Zhu et al., 2020) and data anchor sampling (Tang et al., 2018). Nevertheless, the scale augmentation parameters in these methods are manually designed for all different network structures. Therefore, traditional multi-scale training in face detection is also tedious and sub-optimal.
Since VGA resolution (640 × 480) is widely used for efficient face detection on numerous mobile phones and digital cameras, we focus on efficient face detection from low-resolution images in this paper. In Fig. 1(a), we give the cumulative face scale distribution on the WIDER FACE validation dataset. Under the VGA resolution, most of the faces (78.93%) in WIDER FACE are smaller than 32×32 pixels. Under this specific scale distribution, both network structure and scale augmentation need to be optimized.
In this work, we present a meticulously designed methodology of search space optimization that addresses both the redistribution between the backbone, neck and head, and the sample redistribution between the most needed scales. As the structure of a face detector determines the distribution of computation and is key in determining its accuracy and efficiency, we first discover principles of computation distribution under different flop regimes. Inspired by (Radosavovic et al., 2020), we control the degrees of freedom and reduce the search space. More specifically, we randomly sample model architectures with different configurations on backbone (stem and four stages), neck and head. Based on the statistics of these models, we compute the empirical bootstrap (Efron & Tibshirani, 1994) and estimate the likely range in which the best models fall. To further decrease the complexity of the search space, we divide the computation ratio estimation for the backbone and the whole detector into two steps. To handle extreme scale variations in face detection, we also design a search-able zoom-in and zoom-out space, specified by discrete scales and binary probabilities. In experiments, the proposed computation redistribution and sample redistribution yield significant and consistent improvements across various compute regimes, even surpassing a range of state-of-the-art face detectors while using much fewer flops, as shown in Fig. 1(b).
To sum up, this paper makes the following contributions:
• We have proposed a simplified search space, as well as a two-step search strategy for computation redistribution across different components (backbone, neck and head) of a face detector. The proposed computation redistribution method can easily boost detection performance through random search.
• We have designed a search-able zoom-in and zoom-out space for face-specific scale augmentation, which automatically redistributes more training samples for shallow stages, enhancing the detection performance on small faces.
• Extensive experiments conducted on WIDER FACE demonstrate the significantly improved accuracy and efficiency trade-off of the proposed SCRFD across a wide range of compute regimes.
2 RELATED WORK
Face Detection. To deal with extreme variations (e.g. scale, pose, illumination and occlusion) in face detection (Yang et al., 2016), most of the recent single-shot face detectors focus on improving the anchor sampling/matching or feature enhancement. SSH (Najibi et al., 2017) builds detection modules on different feature maps with a rich receptive field. S3FD (Zhang et al., 2017b) introduces an anchor compensation strategy by offsetting anchors for outer faces. PyramidBox (Tang et al., 2018) formulates a data-anchor-sampling strategy to increase the proportion of small faces in the training data. DSFD (Li et al., 2019) introduces small faces supervision signals on the backbone, which implicitly boosts the performance of pyramid features. Group sampling (Ming et al., 2019) emphasizes the importance of the ratio for matched and unmatched anchors. RetinaFace (Deng et al., 2020b) employs deform-able context modules and additional landmark annotations to improve the performance of face detection. HAMBox (Liu et al., 2020) finds that many unmatched anchors in the training phase also have strong localization ability and proposes an online high-quality anchor mining strategy to assign high-quality anchors for outer faces. BFbox (Liu & Tang, 2020) employs a single-path one-shot search method (Guo et al., 2019) to jointly optimize the backbone and neck for face detector. ASFD (Zhang et al., 2020a) explores a differential architecture search to discover optimized feature enhance modules for efficient multi-scale feature fusion and context enhancement. All these methods are either designed by expert experience or partially optimized on backbone, neck and head. By contrast, we search for computation redistribution across different components (backbone, neck and head) of a face detector across a wide range of compute regimes.
Neural Architecture Search. Given a fixed search space of possible networks, Neural Architecture Search (NAS) automatically finds a good model within the search space. DetNAS (Chen et al., 2019b) adopts the evolution algorithm for the backbone search to boost object detection on COCO (Lin et al., 2014). By contrast, CR-NAS (Liang et al., 2020) reallocates the computation across different stages within the backbone to improve object detection. NAS-FPN (Ghiasi et al., 2019) uses reinforcement learning to search the proper FPN for general object detection. As there is an obvious distribution gap between COCO (Lin et al., 2014) and WIDER FACE (Yang et al., 2016), the experience in the above methods is not directly applicable for face detection but gives us an inspiration that the backbone, neck and head can be optimized to enhance the performance of face detection. Inspired by RegNet (Radosavovic et al., 2020), we optimize the computation distribution on backbone, neck and head based on the statistics from a group of random sampled models. We successfully reduce the search space and find the stable computation distribution under a particular complex regime, which significantly improves the model’s performance.
3 METHODOLOGY
To efficiently and accurately detect small faces from low-resolution images (e.g. VGA 640 × 480), we propose two methodologies that, when combined, outperform the state-of-the-art. In Sec. 3.1, we explore the computation redistribution across different stages of backbone, as well as different components (i.e. backbone, neck and head) of the whole detector, given a pre-defined computation budget. Then, in Sec. 3.2, we investigate the redistribution of positive training samples across different scales of feature maps by searching optimized scale augmentations.
3.1 COMPUTATION REDISTRIBUTION
As illustrated in Fig. 2, we apply our search method on a network consisting of (1) RetinaNet (Lin et al., 2017a), with ResNet (He et al., 2016) as the backbone, (2) Path Aggregation Feature Pyramid Network (PAFPN) (Liu et al., 2018) as the neck, and (3) stacked 3 × 3 convolutional layers for the head. Despite the generally simple structure, the total number of possible network configurations of the search space becomes unwieldy. Therefore, we attempt to simplify the tremendous search space and arrive at a low-dimensional design space, consisting of simple and effective networks.
3.1.1 SEARCH SPACE DESIGN
Inspired by RegNet (Radosavovic et al., 2020), we explore the structures of face detectors, assuming fixed standard network blocks (i.e., basic residual or bottleneck blocks with a fixed bottleneck ratio of 4). In our case, the structure of a face detector includes:
• the backbone stem, three 3 × 3 convolutional layers with w1 output channels (He et al., 2019a).
• the backbone body, four stages (i.e. C2, C3, C4 and C5) operating at progressively reduced resolution, with each stage consisting of a sequence of identical blocks. For each stage i, the degrees of freedom include the number of blocks di (i.e. network depth) and the block width wi (i.e. number of channels).
• the neck, a multi-scale feature aggregation module with a top-down path and a bottom-up path with n channels (Liu et al., 2018).
• the head, with hi channels of m blocks to predict face scores and regress face boxes.
The search space can be initially designed as follows. As the channel number of the stem is equal to the block width of the first residual block in C2, the degree of freedom of the stem w1 can be merged into w2. In addition, we employ a shared head design for the three scales of feature maps and fix the channel number for all 3 × 3 convolutional layers within the heads. Therefore, we reduce the degrees of freedom to three within our neck and head design: (1) output channel number n for the neck, (2) output channel number h for the head, and (3) the number of 3 × 3 convolutional layers m. We perform uniform sampling of n ≤ 256, h ≤ 256, and m ≤ 6 (both n and h are divisible by 8). The backbone search space has 8 degrees of freedom, as there are 4 stages and each stage i has 2 parameters: the number of blocks di and the block width wi. Following RegNet (Radosavovic et al., 2020), we perform uniform sampling of di ≤ 24 and wi ≤ 512 (wi is divisible by 8). As state-of-the-art backbones have increasing widths (Radosavovic et al., 2020), we also constrain the search space according to the principle of wi+1 ≥ wi.
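A sketch of how one configuration can be drawn from this search space is given below; the flop-budget filter and the instantiation of the actual residual blocks are omitted, and the helper names are illustrative:

import random

def sample_backbone(max_depth=24, max_width=512):
    # Per-stage depths d_i <= 24 and non-decreasing widths w_i <= 512, divisible by 8 (C2..C5).
    depths = [random.randint(1, max_depth) for _ in range(4)]
    widths, w = [], 8
    for _ in range(4):
        w = random.randrange(w, max_width + 8, 8)       # enforces w_{i+1} >= w_i
        widths.append(w)
    return depths, widths

def sample_neck_head():
    # Neck/head degrees of freedom: n, h <= 256 (divisible by 8) and m <= 6.
    n = random.randrange(8, 264, 8)
    h = random.randrange(8, 264, 8)
    m = random.randint(1, 6)
    return n, h, m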
3.1.2 ESTIMATION METRIC
Based on the above simplifications, our search space becomes more compact and efficient. We repeat the random sampling in our search space until we obtain 320 models in our target complexity regime, and train each model on the WIDER FACE (Yang et al., 2016) training set for 80 epochs. Then, we test the Average Precision (AP) of each model on the validation set. Based on these 320 pairs of model statistics (xi, APi), where xi is the computation ratio of a particular component and APi the corresponding performance, we can compute the empirical bootstrap (Efron & Tibshirani, 1994) to estimate the likely range in which the best models fall. More specifically, we repeatedly sample with replacement 25% of the pairs for 103 times and select the pair with maximum AP in each sampling. Afterwards, we compute the 95% confidence interval for the maximum value and the median gives the most likely best computation ratio.
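The empirical bootstrap described above can be sketched as follows, operating on the 320 (computation ratio, AP) pairs:

import random

def bootstrap_best_ratio(pairs, rounds=1000, frac=0.25):
    # pairs: list of (computation_ratio, AP). Repeatedly subsample with replacement,
    # keep the ratio of the best-AP model, and summarize the resulting distribution.
    best = []
    k = max(1, int(frac * len(pairs)))
    for _ in range(rounds):
        sample = random.choices(pairs, k=k)
        best.append(max(sample, key=lambda p: p[1])[0])
    best.sort()
    low, median, high = best[int(0.025 * rounds)], best[rounds // 2], best[int(0.975 * rounds)]
    return low, median, high   # median: most likely best ratio; (low, high): 95% interval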
Figure: mAP on the WIDER FACE hard validation set (%) versus computation ratio for (a) Backbone ∼ (67%, 88%), (b) Neck ∼ (1%, 7%), and (c) Head ∼ (10%, 26%).
3.1.3 TWO-STEP SEARCH
To further decrease the complexity of search space, we divide our network structure search into the following two steps: (1) CRFD1: search the computational distribution for the backbone only, while fixing the settings of the neck and head to the default configuration, and (2) CRFD2: search the computational distribution over the whole face detector (i.e. backbone, neck and head), with the computational distribution within the backbone, following the optimized CRFD1. By optimizing in both manners, we achieve the final optimized network design for the computation-constrained face detection. In the example below, we constrain CRFD to 2.5 GFlops (CRFD-2.5GF), in order to illustrate our two-step searching strategy.
Computation redistribution on backbone. For CRFD1-2.5GF, we fix the output channel of the neck at 32 and use two stacked 3 × 3 convolutions with 96 output channels. As the neck and head configurations do not change in the whole search process of CRFD1, we can easily find the best computation distribution of the backbone. As described in Fig. 3, we show the distribution of 320 model APs (on the WIDER FACE hard validation set) versus the computation ratio over each component (i.e. stem, C2, C3, C4 and C5) of backbone. After applying an empirical bootstrap (Efron & Tibshirani, 1994), a clear trend emerges, showing that the backbone computation is reallocated to the shallow stages (i.e. C2 and C3).
Computation redistribution on backbone, neck and head. In this step, we only keep the randomly generated network configurations whose backbone settings follow the computation distribution from CRFD1 as shown in Fig. 3. In this case, there are another three degrees of freedom (i.e. output channel number n for neck, output channel number h for head, and the number m of 3 × 3 convolutional layers in head). We repeat the random sampling in our search space, until we obtain 320 qualifying models in our target complexity regime (i.e. 2.5 GFlops). As evident in Fig. 4, most
of the computation is allocated in the backbone, with the head following and the neck having the lowest computation ratio. Fig. 4(d) also depicts the comparison between the hand-crafted model architecture and the computation redistributed network, under the constraint of 2.5 GFlops.
3.2 SAMPLE REDISTRIBUTION
As face detection features large scale variations (from several pixels to thousand pixels), there exist two widely used scale augmentation strategies, random square crop (Zhang et al., 2017b; Deng et al., 2020b; Zhu et al., 2020) and data anchor sampling (Tang et al., 2018). In the random square crop strategy, square patches are cropped from the original image with a random size between [0.3, 1] of the short edge and then resized into 640 × 640 to generate larger training faces. By contrast, data anchor sampling strategy aims to generate more small scale faces by down-sampling the original image, bringing a large amount of padded area. Even though both random square crop and data anchor sampling can achieve promising results on the WIDER FACE dataset, the scale augmentation parameters are manually designed for all different network structures. Therefore, the training sample distribution on the feature pyramids can be sub-optimal for a particular network structure.
To handle extreme scale variations in face detection, we also design a search-able zoom-in and zoom-out space, specified by the scale si and probability pi. The scale si represents the zooming ratio, sampled from a discrete set S = {smin, smin + 0.1, · · · , smax − 0.1, smax}. For a particular training image in each iteration, square patches are cropped from the original images with a zooming ratio si of the short edge of the original images. If the square patch is larger than the original image, average RGB values will fill the missing pixels. To shrink the scale search space, we employ a binary probability set pi ∈ {0, 1}. Under this setting, the probability-based scale search is simplified into a discrete scale sampling from a fixed set. As the interval of the discrete scale set is only 0.1, adjacent scales will have the probability of 1.0 to approximate a higher probability around a particular scale. In this paper, we employ random search under the estimation metric of AP on WIDER FACE to construct the best scale augmentation set. More specifically, we set smin = 0.1 and smax = 3.0. Then, we randomly select 8 to 20 discrete scale values to construct each scale augmentation set and train CRFD models under 320 different scale augmentation sets. Finally, the scale augmentation set with the highest detection performance is selected for optimized scale augmentation.
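A sketch of the resulting square-crop augmentation is given below; padding with the dataset-mean color and the box-remapping step are only indicated, and the fill value shown is illustrative rather than taken from the paper:

import random
import numpy as np
import cv2

def random_square_crop(image, scale_set, out_size=640, fill=(104, 117, 123)):
    # Crop a square patch of side s * min(H, W) for a scale s drawn from the searched
    # set; regions outside the image are filled with (dataset-mean) RGB values.
    h, w = image.shape[:2]
    side = int(random.choice(scale_set) * min(h, w))
    y0 = random.randint(min(0, h - side), max(0, h - side))
    x0 = random.randint(min(0, w - side), max(0, w - side))
    patch = np.empty((side, side, 3), dtype=image.dtype)
    patch[:] = fill
    ys, xs = slice(max(y0, 0), min(y0 + side, h)), slice(max(x0, 0), min(x0 + side, w))
    patch[ys.start - y0:ys.stop - y0, xs.start - x0:xs.stop - x0] = image[ys, xs]
    # Ground-truth boxes must be shifted by (-x0, -y0) and rescaled by out_size / side.
    return cv2.resize(patch, (out_size, out_size))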
4 EXPERIMENTS
4.1 IMPLEMENTATION DETAILS
Training. For the scale augmentation, square patches are cropped from the original images with a random size from a pre-defined scale set, and then these patches are resized to 640 × 640 for training. Besides scale augmentation, the training data are also augmented by color distortion and random horizontal flipping, with a probability of 0.5. For the anchor setting, we tile anchors of {16, 32}, {64, 128}, and {256, 512} on the feature maps of stride 8, 16, and 32, respectively. The anchor ratio is set as 1.0. In this paper, we employ Adaptive Training Sample Selection (ATSS)
(Zhang et al., 2020b) for positive anchor matching. In the detection head, weight sharing and Group Normalization (Wu & He, 2018) are used. The losses of classification and regression branches are Generalized Focal Loss (GFL) (Li et al., 2020) and DIoU loss (Zheng et al., 2020), respectively.
Our experiments are implemented in PyTorch, based on the open-source MMDetection (Chen et al., 2019a). We adopt the SGD optimizer (momentum 0.9, weight decay 5e-4) with a batch size of 8×8 and train on eight Tesla V100. The learning rate is linearly warmed up to 0.015 within the first 3 epochs. During network search, the learning rate is multiplied by 0.1 at the 55-th, and 68-th epochs. The learning process terminates on the 80-th epoch. For training of both baselines and searched configurations, the learning rate decays by a factor of 10 at the 440-th and 544-th epochs, and the learning process terminates at the 640-th epoch. All the models are trained from scratch without any pre-training.
Testing. For fair comparisons with other methods, we employ three testing strategies, including single-scale VGA resolution (640 × 480), single-scale original resolution, and multi-scale testing. The results of DSFD (Li et al., 2019), RetinaFace (Deng et al., 2020b), TinaFace (Zhu et al., 2020), Faceboxes (Zhang et al., 2017a), libfacedetection (Feng et al., 2021) and LFFD (He et al., 2019b) are reported by testing the released models, while the HAMBox (Liu et al., 2020) and BFBox (Liu & Tang, 2020) models were shared by the authors.
4.2 ABLATION STUDY
In Tab. 1, we present the performance of models on the WIDER FACE dataset by gradually including the proposed computation and sample redistribution methods. Our manually-designed baseline model, ResNet-2.5GF, gets APs of 91.87%, 89.49%, and 67.32% under three validation scenarios.
Computation redistribution. After separately employing the proposed computation redistribution on the backbone and the whole detector, the AP on the hard set improves to 69.78% and 70.98%. This indicates that (1) the network structure directly inherited from the classification task is suboptimal for the face detection task, and (2) joint computation reallocation on the backbone, neck and head outperforms computation optimization applied only on the backbone. Furthermore, the proposed two-step computation redistribution strategy achieves AP of 71.37%, surpassing one-step computation reallocation on the whole detector by 0.39%. As we shrink the whole search space by the proposed two-step strategy and our random model sampling number is fixed at 320, the two-step method is able to find better network configurations within the large search space. In Tab. 1, we also compare our method with the single path one-shot NAS method (BFBox (Liu & Tang, 2020)) and the evolutionary search method (Appendix A.2), under the constraint of 2.5 GFlops. BFBox aims to design a face-appropriate search space by combining some excellent block designs, such as the bottleneck block, densenet block and shufflenet block. However, such a combination generates a complex and redundant search space, which inevitably involves a vast body of low-performance candidate architectures. The evolutionary approach iteratively adopts mutations and crossover to gradually generate better architecture candidates from the randomly initialized search space, which also contains a large number of under-performing architectures. By contrast, CRFD-2.5GF utilizes an empirical bootstrap to estimate the optimized computation distribution of the best-performing architecture candidates, which directly eliminates the low-quality architectures from the initialized search
space. Therefore, CRFD-2.5GF clearly outperforms the BFBox and evolutionary methods by 1.96% and 1.75% on the hard track.
Sample redistribution. For scale augmentation, we first manually extend the default scale set {0.3, 0.45, 0.6, 0.8, 1.0} by adding larger scales {1.2, 1.4, 1.6, 1.8, 2.0}. By adding this hand-crafted sample redistribution, the hard set APs significantly increase by 7.15% for the baseline and 6.5% for the proposed CRFD, indicating the benefit from allocating more training samples on the feature map of stride 8. By employing the optimized scale augmentation from searching, the hard set AP further increases by 0.48% for the proposed CRFD. For SCRFD-2.5GF, the best scale augmentation set searched is {0.5, 0.7, 0.8, 1.0, 1.1, 1.2, 1.4, 1.5, 1.8, 2.0, 2.3, 2.6}. As we can see from these discrete scales, faces around the original scale are preferred for training, along with an appropriate probability and ratio of zooming-out.
4.3 COMPUTATION REDISTRIBUTION ACROSS DIFFERENT COMPUTE REGIMES
Besides the complexity constraint of 2.5 GFlops, we also utilize the same two-step computation redistribution method to explore the network structure optimization for higher compute regimes (e.g. 10 GFlops and 34 GFlops) and lower compute regimes (e.g. 0.5 GFlops and 1.0 GFlops). In Fig. 5, we show the computation redistribution and the optimized network structures under different computation constraints.
Our final architectures have almost the same flops as the baseline networks. From these redistribution results, we can draw the following conclusions: (1) more computation is allocated in the backbone and the computation on the neck and head is compressed; (2) more capacity is reallocated to shallow stages due to the specific scale distribution on WIDER FACE; (3) for the high compute regime (e.g. 34 GFlops), the explored structure utilizes the bottleneck residual block and we observe significant depth scaling, instead of width scaling, in shallow stages. Scaling the width is subject to over-fitting due to the larger increase in parameters (Bello et al., 2021). By contrast, scaling the depth, especially in the earlier layers, introduces fewer parameters compared to scaling the width; (4) for the mobile regime (0.5 GFlops), allocating the limited capacity to the deep stage (e.g. C5), where the most discriminative features are captured, benefits small face detection in the shallow stages through the top-down neck pathway.
4.4 ACCURACY AND EFFICIENCY COMPARISONS ON WIDER FACE
As shown in Tab. 2 and Tab. 3, we compared the proposed SCRFD with other state-of-the-art face detection algorithms (e.g. DSFD (Li et al., 2019), RetinaFace (Deng et al., 2020b), BFBox (Liu & Tang, 2020), HAMBox (Liu et al., 2020) and TinaFace (Zhu et al., 2020)) as well as light-weight face methods (e.g. Faceboxes (Zhang et al., 2017a), libfacedetection (Feng et al., 2021) and LFFD (He et al., 2019b)). Overall, all of the proposed SCRFD models provide considerable improvements compared to the hand-crafted baseline models (e.g. ResNet-2.5GF and MobileNet-0.5GF), by optimizing the network structure as well as the scale augmentation, across a wide range of compute regimes.
When we fix the testing scale at 640 as in Tab. 2, the proposed SCRFD-34GF outperforms all these state-of-the-art methods on the three subsets, especially for the hard track, which contains a large number of tiny faces. More specifically, SCRFD-34GF surpasses TinaFace by 4.78% while being more than 3× faster on GPUs. In addition, the computation cost of SCRFD-34GF is only around 20% of TinaFace. As SCRFD-34GF scales the depth in the earlier layers, it also introduces fewer parameters, resulting in a much smaller model size (9.80M ). Compared to the hand-crafted baseline (ResNet-34GF), the proposed computation redistribution and sample redistribution improve the AP by 1.27% and 0.92%, indicating the superiority of SCRFD over manual designs. Compared to the single path one-shot NAS method, SCRFD-34GF outperforms BFBox by 15.81%, while using a more compact model size. As the search space of BFBox is complex, there exists a large number of low-performance architectures. In addition, BFBox only searches the backbone and neck without considering the optimization on the head. For multi-scale testing, SCRFD-34GF slightly outperforms TinaFace but consumes much less computation. For the low-compute regimes in Tab. 3, SCRFD-0.5GF significantly outperforms RetinaFace-MobileNet0.25 by 21.19% on the hard AP, while consuming only 63.34% computation and 45.57% inference time under the VGA resolution. When the evaluation is conducted on the original image, SCRFD-0.5GF surpasses LFFD by 3.6% on the hard AP, while consuming only 5.5% flops.
5 CONCLUSIONS
In this work, we present a sample and computation redistribution paradigm for efficient face detection. Our results show significantly improved accuracy and efficiency trade-off by the proposed SCRFD across a wide range of compute regimes, when compared to the current state-of-the-art.
Acknowledgements. We would like to thank Hui Ni from Tencent for preparing the mobile demo of SCRFD https://github.com/nihui/ncnn-android-scrfd. Stefanos Zafeiriou acknowledges support from the EPSRC Fellowship DEFORM (EP/S010203/1), FACER2VM (EP/N007743/1) and a Google Faculty Fellowship.
A APPENDIX
A.1 TINAFACE REVISITED
Based on RetinaNet (Lin et al., 2017a), TinaFace (Zhu et al., 2020) employs ResNet-50 (He et al., 2016) as backbone, and Feature Pyramid Network (FPN) (Lin et al., 2017a) as neck to construct the feature extractor. For the head design, TinaFace first uses a feature enhancement module on each feature pyramid to learn surrounding context through different receptive fields in the inception block (Szegedy et al., 2015). Then, four consecutive 3 × 3 convolutional layers are appended on each feature pyramid. Focal loss (Lin et al., 2017b) is used for the classification branch, DIoU loss (Zheng et al., 2020) for the box regression branch and cross-entropy loss for the IoU prediction branch.
To detect tiny faces, TinaFace tiles anchors of three different scales over each level of the FPN (i.e. {2^{4/3}, 2^{5/3}, 2^{6/3}} × {4, 8, 16, 32, 64, 128}, from level P2 to P7). The aspect ratio is set as 1.3. During training, square patches are cropped from the original image and resized to 640 × 640, using a scaling factor randomly sampled from [0.3, 0.45, 0.6, 0.8, 1.0], multiplied by the length of the original image's short edge. During testing, TinaFace employs single-scale testing when the short and long edges of the image do not surpass [1100, 1650]. Otherwise, it employs multi-scale testing with short-edge scaling at [500, 800, 1100, 1400, 1700], shifts with directions [(0, 0), (0, 1), (1, 0), (1, 1)] and horizontal flip.
As shown in Fig. 6(a) and Tab. 4, we compare the performance of TinaFace under different testing scales. For multi-scale testing, TinaFace achieves an impressive AP of 93.4%, which is the current best performance on the WIDER FACE leader-board. For large single-scale testing (1650), the AP slightly drops to 93.0% but the computation significantly decreases to 1021.82 GFlops. On the original scale (1024), the performance of TinaFace is still very high, obtaining an AP of 91.4% with 508.47 GFlops. Moreover, when the testing scale decreases to the VGA level (640), the AP significantly reduces to 81.4%, with the computation further decreasing to 172.95 GFlops.
In Fig. 6(b), we illustrate the computation distribution of TinaFace on the backbone, neck and head components with a testing scale of 640. From the view of different scales of the feature pyramid, the majority of the computational costs (about 68%) are from stride 4, as the resolution of feature map is quite large (120×160). From the view of the different components of the face detector, most of the computational costs (about 79%) are from the head, since the backbone structure is directly borrowed from the ImageNet classification task (Deng et al., 2009), without any modification.
Even though TinaFace achieves state-of-the-art performance on tiny face detection, the heavy computational cost renders it unsuitable for real-time applications.
A.2 DETAILS OF EVOLUTIONARY BASELINE
To compare the proposed SCRFD with the other network search methods in Tab. 1, we design the evolutionary baseline (Real et al., 2019) as follows:
1. A population of networks P are randomly initialized. We set |P| = 50. 2. Each network architecture from P is trained on the WIDER FACE training data and then
the APs on the WIDER FACE validation dataset are tested.
3. Architectures with top performance are selected from P as parents P . To generate child networks C, we employ the mutation and crossover policies. Here, we set |P| = 10 and |C| = 50.
4. Each network architecture from C is trained on the WIDER FACE training data and then the APs on the WIDER FACE validation dataset are calculated.
5. The worst 50 individuals from the populations of P ∪C are dropped and then we get the new evolutionary population P.
6. We repeat steps 3, 4 and 5 for 20 times, resulting in 1000 network architectures as well as their validation APs. The architecture with the highest AP is selected as the final result.
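The loop above can be sketched as follows; the configuration sampler, the mutation/crossover operators and the train-and-evaluate call are placeholders to be supplied by the user rather than the paper's exact implementation:

import random

def evolutionary_search(sample_config, mutate, crossover, train_and_eval,
                        pop_size=50, num_parents=10, num_children=50, rounds=20):
    # Maintain a population of (config, AP) pairs; breed children from the best
    # parents and drop the worst individuals of the union at every round.
    population = [(cfg, train_and_eval(cfg))
                  for cfg in (sample_config() for _ in range(pop_size))]
    for _ in range(rounds):
        population.sort(key=lambda p: p[1], reverse=True)
        parents = [cfg for cfg, _ in population[:num_parents]]
        children = [mutate(crossover(*random.sample(parents, 2))) for _ in range(num_children)]
        population += [(cfg, train_and_eval(cfg)) for cfg in children]
        population.sort(key=lambda p: p[1], reverse=True)
        population = population[:pop_size]
    return max(population, key=lambda p: p[1])[0]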
A.3 ALGORITHM OF COMPUTATION REDISTRIBUTION
In Algorithm 1, we show the details of the proposed two-step computation redistribution method.
Algorithm 1: Search algorithm for computation redistribution
Input: Constraint of computation cost Y (in GFlops); number of random network architectures N; dataset for training Dtrain and validation Dval; evaluation metric AP.
Output: Best architecture A*
Initialize the architecture set A = Ø
while length(A) < N do
    net = RandomSampling({di, wi})   /* di and wi denote block number and channel number, i = 2, 3, 4, 5. */
    if 0.98 * Y ≤ net.Flops ≤ 1.02 * Y then A.Append(net)
end
ParallelTrain(A, Dtrain); APs = Evaluate(A, Dval); CR1 = Bootstrap(A, APs)
Initialize the architecture set A = Ø
while length(A) < N do
    net = RandomSampling({CR1, n, m, h})   /* n, m, and h denote channel number in neck, block number and channel number in head. */
    if 0.98 * Y ≤ net.Flops ≤ 1.02 * Y then A.Append(net)
end
ParallelTrain(A, Dtrain); APs = Evaluate(A, Dval); CR2 = Bootstrap(A, APs)
Output: Best architecture A* = choose_top1(CR2, A).
A.4 DETAILED NETWORK CONFIGURATIONS
In Tab. 5, we give the detailed network configurations for baselines and the proposed CRFD across different compute regimes.
A.5 STATISTICS AFTER SAMPLE REDISTRIBUTION
As illustrated in Fig. 7(a), there are more faces below the scale of 32 after the proposed automatic scale augmentation strategy is used. Moreover, even though there will be more extremely tiny faces (e.g. < 4 × 4) under the proposed scale augmentation, these ground-truth faces will be neglected during training due to unsuccessful anchor matching. As shown in Fig. 7(b), positive anchors within one epoch significantly increase at the scale of 16 and 32. With more training samples redistributed to the small scale, the branch to detect tiny faces can be trained more adequately.
A.6 DATASETS
WIDER FACE The WIDER FACE dataset (Yang et al., 2016) consists of 32, 203 images and 393, 703 face bounding boxes with a high degree of variability in scale, pose, expression, occlusion and illumination. The WIDER FACE dataset is split into training (40%), validation (10%) and testing (50%) subsets by randomly sampling from 61 scene categories. Based on the detection rate of EdgeBox (Zitnick & Dollár, 2014), three levels of difficulty (i.e. Easy, Medium and Hard) are defined by incrementally incorporating hard samples.
AFW The AFW dataset (Zhu & Ramanan, 2012) contains 205 high-resolution images with 473 faces (Mathias et al., 2014) collected from Flickr. Images in this dataset contain cluttered backgrounds with large variations in viewpoint.
PASCAL The PASCAL face dataset (Mathias et al., 2014) is collected from the PASCAL 2012 person layout subset and includes 1,335 labeled faces in 851 images with large facial appearance and pose variations (e.g. large in-plane rotation).
FDDB The FDDB dataset (Jain & Learned-Miller, 2010) is a collection of labeled faces from the Faces in the Wild dataset. It contains a total of 5,171 face annotations on 2,845 images. The dataset incorporates a range of challenges, including difficult pose angles, out-of-focus faces and low resolution.
A.7 CROSS DATASET EVALUATION AND VISUALIZATION
Besides the evaluation on the WIDER FACE (Yang et al., 2016) data set, we also conduct cross dataset evaluation and test the proposed SCRFD models on AFW (Zhu & Ramanan, 2012), PASCAL (Mathias et al., 2014) and FDDB (Jain & Learned-Miller, 2010), under the VGA resolution. As shown in Fig 8, SCRFD-34GF achieves 99.945% AP on AFW, 99.597% AP on PASCAL, and 99.25% on FDDB, surpassing BFBox (Liu & Tang, 2020) and HAMBox (Liu et al., 2020). Even though the face scale distributions on these three datasets are different from WIDER FACE, the proposed SCRFD-34GF still obtains state-of-the-art performance across different datasets, showing impressive robustness of the proposed computation and sample redistribution approaches. In addition, SCRFD-2.5GF also obtains impressive performance on different datasets with much lower computation cost (99.821% AP on AFW, 98.911% AP on PASCAL, and 99.02% AP on FDDB).
Fig. 9 shows qualitative results generated by SCRFD-2.5GF. As can be seen, our face detector works very well in both indoor and outdoor crowded scenes under different conditions (e.g. appearance variations from pose, occlusion and illumination). The impressive performance across a wide range of scales indicates that SCRFD-2.5GF has a very high recall and can detect faces accurately even without large-scale testing. | 1. What is the main contribution of the paper in the field of face detection?
2. What are the strengths of the proposed approach, particularly in terms of its effectiveness in enhancing detection performance on small faces?
3. What are the weaknesses of the paper, especially regarding the search strategies implemented and the lack of comparisons with other network search methods?
4. How does the reviewer assess the significance of the paper's contributions and its potential impact on future research in face detection? | Summary Of The Paper
Review | Summary Of The Paper
The authors propose two simple but effective methods: 1) Computation Redistribution (CR), which reallocates the computation between the backbone, neck and head; and 2) Sample Redistribution (SR), which augments training samples for the most needed stages. The authors use a simplified search space for computation redistribution across different components and design a searchable zoom-in and zoom-out space for face-specific scale augmentation. SCRFD-34GF yields state-of-the-art performance on several datasets (e.g. WIDER FACE).
Review
Strength
The paper indicates that most of the faces (78.93%) in WIDER FACE are smaller than 32×32 pixels. Under this specific scale distribution, both network structure and scale augmentation need to be optimized. The experiments show that the SCRFD indeed enhances the detection performance on small faces.
Both the proposed CR and SR are effective and achieve clear improvements on well-known datasets. The authors provide a detailed ablation study showing that they improve detection accuracy over the baseline.
Weakness
The search strategies (CR) implemented are rather straightforward and not particularly interesting.
The experiments mainly focus on comparisons with general-purpose detectors; there is almost no comparison with well-known network search methods, so I am not convinced whether the improvements are mainly due to the network search itself or to the proposed contributions.
ICLR | Title
Sample and Computation Redistribution for Efficient Face Detection
Abstract
Although tremendous strides have been made in uncontrolled face detection, accurate face detection with a low computation cost remains an open challenge. In this paper, we point out that computation distribution and scale augmentation are the keys to detecting small faces from low-resolution images. Motivated by these observations, we introduce two simple but effective methods: (1) Computation Redistribution (CR), which reallocates the computation between the backbone, neck and head of the model; and (2) Sample Redistribution (SR), which augments training samples for the most needed stages. The proposed Sample and Computation Redistribution for Face Detection (SCRFD) is implemented by a random search in a meticulously designed search space. Extensive experiments conducted on WIDER FACE demonstrate the state-of-the-art accuracy-efficiency trade-off for the proposed SCRFD family across a wide range of compute regimes. In particular, SCRFD-34GF outperforms the best competitor, TinaFace, by 4.78% (AP at hard set) while being more than 3× faster on GPUs with VGA-resolution images. Code is available at: https://github.com/deepinsight/insightface/tree/master/detection/scrfd.
1 INTRODUCTION
Face detection is a long-standing problem in computer vision with many applications, such as face alignment (Bulat & Tzimiropoulos, 2017; Deng et al., 2019b), face reconstruction (Feng et al., 2018; Gecer et al., 2021), face attribute analysis (Zhang et al., 2018; Pan et al., 2018), and face recognition (Schroff et al., 2015; Deng et al., 2019a; 2020a). Following the pioneering work of (Viola & Jones, 2004), numerous face detection algorithms have been designed. Among them, the single-shot anchor-based approaches (Najibi et al., 2017; Zhang et al., 2017b; Tang et al., 2018; Li et al., 2019; Ming et al., 2019; Deng et al., 2020b; Liu et al., 2020; Zhu et al., 2020) have recently demonstrated very promising performance. In particular, on the most challenging face detection dataset, WIDER FACE (Yang et al., 2016), the average precision (AP) on its hard validation set has been boosted to 93.4% by TinaFace (Zhu et al., 2020).
Even though TinaFace (Zhu et al., 2020) achieves impressive results on unconstrained face detection, it employs large-scale (e.g. 1, 650 pixels) testing, which consumes huge amounts of computational resources. In addition, TinaFace design is based on a generic object detector (i.e. RetinaNet (Lin et al., 2017b)), directly taking the classification network as the backbone, tiling dense anchors on the multi-scale feature maps (i.e. P2 to P7 of neck), and adopting heavy head designs. Without considering the prior of faces, the network design of TinaFace is thus redundant and sub-optimal.
One approach of optimizing such networks’ performance is computation redistribution. Since directly taking the backbone of the classification network for object detection is sub-optimal, the recent CR-NAS (Liang et al., 2020) reallocates the computation across different resolutions to obtain a more balanced Effective Receptive Field (ERF), leading to higher detection performance. In BFbox (Liu & Tang, 2020), a face-appropriate search space is designed, based on the observation of scale distribution gap between general object detection and face detection. In ASFD (Zhang et al.,
∗denotes equal contribution and corresponding author. InsightFace is a nonprofit Github project for 2D and 3D face analysis.
2020a), a differential architecture search is employed to discover optimized feature enhance modules for efficient multi-scale feature fusion and context enhancement. Even though (Liu & Tang, 2020; Zhang et al., 2020a) have realized the limitation of directly applying general backbone, neck and head settings to face detection, CR-NAS (Liang et al., 2020) only focuses the optimization on backbone, BFbox (Liu & Tang, 2020) neglects the optimization of head, and ASFD (Zhang et al., 2020a) only explores the best design for neck.
Another optimization approach is the sample redistribution across different scales. Due to the extremely large scale variance of faces in real-world scenarios, different scale augmentation strategies are employed to introduce scale adaptation into the face detector. The most widely used scale augmentation approaches include random square crop (Zhang et al., 2017b; Deng et al., 2020b; Zhu et al., 2020) and data anchor sampling (Tang et al., 2018). Nevertheless, the scale augmentation parameters in these methods are manually designed for all different network structures. Therefore, traditional multi-scale training in face detection is also tedious and sub-optimal.
Since VGA resolution (640 × 480) is widely used for efficient face detection on numerous mobile phones and digital cameras, we focus on efficient face detection from low-resolution images in this paper. In Fig 1(a), we give the cumulative face scale distribution on the WIDER FACE validation dataset. Under the VGA resolution, most of the faces (78.93%) in WIDER FACE are smaller than 32×32 pixels. Under this specific scale distribution, both network structure and scale augmentation need to be optimized.
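As a rough illustration, the cumulative statistic in Fig. 1(a) can be reproduced along the following lines; the exact resizing convention (here: each image is scaled to fit inside 640×480) and the definition of face scale as sqrt(w·h) are our assumptions rather than details stated above.

```python
import numpy as np

def cumulative_scale_stats(boxes_per_image, image_sizes, target=(640, 480)):
    """boxes_per_image: list of (N_i, 4) arrays [x, y, w, h]; image_sizes: list of (W, H)."""
    scales = []
    for boxes, (w, h) in zip(boxes_per_image, image_sizes):
        r = min(target[0] / w, target[1] / h)  # assumed: fit the image inside VGA resolution
        if len(boxes):
            scales.append(np.sqrt(boxes[:, 2] * boxes[:, 3]) * r)  # face scale after resizing
    scales = np.sort(np.concatenate(scales))
    return scales, float(np.mean(scales < 32))  # sorted scales and fraction of faces below 32 pixels
```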
In this work, we present a meticulously designed methodology of search space optimization, that addresses both the redistribution between the backbone, neck and head, and the sample redistribution between the most needed scales. As the structure of a face detector determines the distribution of computation and is the key in determining its accuracy and efficiency, we first discover principles of computation distribution under different flop regimes. Inspired by (Radosavovic et al., 2020), we control the degrees of freedom and reduce the search space. More specifically, we randomly sample model architectures with different configurations on backbone (stem and four stages), neck and head. Based on the statistics of these models, we compute the empirical bootstrap (Efron & Tibshirani, 1994) and estimate the likely range in which the best models fall. To further decrease the complexity of the search space, we divide the computation ratio estimation for backbone and the whole detector into two steps. To handle extreme scale variations in face detection, we also design a search-able zoom-in and zoom-out space, specified by discrete scales and binary probabilities. In experiments, the proposed computation redistribution and sample redistribution yield significant and consistent improvement on various compute regimes, even surpassing a range of state-of-the-art face detectors by using much fewer flops as shown in Fig. 1(b).
To sum up, this paper makes following contributions:
• We have proposed a simplified search space, as well as a two-step search strategy for computation redistribution across different components (backbone, neck and head) of a face detector. The proposed computation redistribution method can easily boost detection performance through random search.
• We have designed a search-able zoom-in and zoom-out space for face-specific scale augmentation, which automatically redistributes more training samples for shallow stages, enhancing the detection performance on small faces.
• Extensive experiments conducted on WIDER FACE demonstrate the significantly improved accuracy and efficiency trade-off of the proposed SCRFD across a wide range of compute regimes.
2 RELATED WORK
Face Detection. To deal with extreme variations (e.g. scale, pose, illumination and occlusion) in face detection (Yang et al., 2016), most of the recent single-shot face detectors focus on improving the anchor sampling/matching or feature enhancement. SSH (Najibi et al., 2017) builds detection modules on different feature maps with a rich receptive field. S3FD (Zhang et al., 2017b) introduces an anchor compensation strategy by offsetting anchors for outer faces. PyramidBox (Tang et al., 2018) formulates a data-anchor-sampling strategy to increase the proportion of small faces in the training data. DSFD (Li et al., 2019) introduces small faces supervision signals on the backbone, which implicitly boosts the performance of pyramid features. Group sampling (Ming et al., 2019) emphasizes the importance of the ratio for matched and unmatched anchors. RetinaFace (Deng et al., 2020b) employs deform-able context modules and additional landmark annotations to improve the performance of face detection. HAMBox (Liu et al., 2020) finds that many unmatched anchors in the training phase also have strong localization ability and proposes an online high-quality anchor mining strategy to assign high-quality anchors for outer faces. BFbox (Liu & Tang, 2020) employs a single-path one-shot search method (Guo et al., 2019) to jointly optimize the backbone and neck for face detector. ASFD (Zhang et al., 2020a) explores a differential architecture search to discover optimized feature enhance modules for efficient multi-scale feature fusion and context enhancement. All these methods are either designed by expert experience or partially optimized on backbone, neck and head. By contrast, we search for computation redistribution across different components (backbone, neck and head) of a face detector across a wide range of compute regimes.
Neural Architecture Search. Given a fixed search space of possible networks, Neural Architecture Search (NAS) automatically finds a good model within the search space. DetNAS (Chen et al., 2019b) adopts the evolution algorithm for the backbone search to boost object detection on COCO (Lin et al., 2014). By contrast, CR-NAS (Liang et al., 2020) reallocates the computation across different stages within the backbone to improve object detection. NAS-FPN (Ghiasi et al., 2019) uses reinforcement learning to search the proper FPN for general object detection. As there is an obvious distribution gap between COCO (Lin et al., 2014) and WIDER FACE (Yang et al., 2016), the experience in the above methods is not directly applicable for face detection but gives us an inspiration that the backbone, neck and head can be optimized to enhance the performance of face detection. Inspired by RegNet (Radosavovic et al., 2020), we optimize the computation distribution on backbone, neck and head based on the statistics from a group of random sampled models. We successfully reduce the search space and find the stable computation distribution under a particular complex regime, which significantly improves the model’s performance.
3 METHODOLOGY
To efficiently and accurately detect small faces from low-resolution images (e.g. VGA 640 × 480), we propose two methodologies that, when combined, outperform the state-of-the-art. In Sec. 3.1, we explore the computation redistribution across different stages of backbone, as well as different components (i.e. backbone, neck and head) of the whole detector, given a pre-defined computation budget. Then, in Sec. 3.2, we investigate the redistribution of positive training samples across different scales of feature maps by searching optimized scale augmentations.
3.1 COMPUTATION REDISTRIBUTION
As illustrated in Fig. 2, we apply our search method on a network consisting of (1) RetinaNet (Lin et al., 2017a), with ResNet (He et al., 2016) as the backbone, (2) Path Aggregation Feature Pyramid Network (PAFPN) (Liu et al., 2018) as the neck, and (3) stacked 3 × 3 convolutional layers for the head. Despite the generally simple structure, the total number of possible network configurations of the search space becomes unwieldy. Therefore, we attempt to simplify the tremendous search space and arrive at a low-dimensional design space, consisting of simple and effective networks.
3.1.1 SEARCH SPACE DESIGN
Inspired by RegNet (Radosavovic et al., 2020), we explore the structures of face detectors, assuming fixed standard network blocks (i.e., basic residual or bottleneck blocks with a fixed bottleneck ratio of 4). In our case, the structure of a face detector includes:
• the backbone stem, three 3×3 convolutional layers with w1 output channels (He et al., 2019a);
• the backbone body, four stages (i.e. C2, C3, C4 and C5) operating at progressively reduced resolution, with each stage consisting of a sequence of identical blocks. For each stage i, the degrees of freedom include the number of blocks di (i.e. network depth) and the block width wi (i.e. number of channels);
• the neck, a multi-scale feature aggregation module formed by a top-down path and a bottom-up path with n channels (Liu et al., 2018);
• the head, with m blocks of hi channels to predict face scores and regress face boxes.

The search space can be initially designed as follows. As the channel number of the stem is equal to the block width of the first residual block in C2, the degree of freedom of the stem w1 can be merged into w2. In addition, we employ a shared head design for the three scales of feature maps and fix the channel number for all 3×3 convolutional layers within the heads. Therefore, we reduce the degrees of freedom to three within our neck and head design: (1) the output channel number n for the neck, (2) the output channel number h for the head, and (3) the number of 3×3 convolutional layers m. We perform uniform sampling of n ≤ 256, h ≤ 256, and m ≤ 6 (both n and h are divisible by 8). The backbone search space has 8 degrees of freedom, as there are 4 stages and each stage i has 2 parameters: the number of blocks di and the block width wi. Following RegNet (Radosavovic et al., 2020), we perform uniform sampling of di ≤ 24 and wi ≤ 512 (wi is divisible by 8). As state-of-the-art backbones have increasing widths (Radosavovic et al., 2020), we also constrain the search space according to the principle of wi+1 ≥ wi.
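A minimal sketch of how a random configuration could be drawn from this design space is given below; the lower bounds (at least one block per stage and at least 8 channels) are our assumptions, while the upper bounds, divisibility and the non-decreasing width constraint follow the text above.

```python
import random

def sample_backbone_config():
    # Four stages C2-C5: depth d_i <= 24, width w_i <= 512 (divisible by 8), widths non-decreasing.
    depths = [random.randint(1, 24) for _ in range(4)]
    widths = sorted(random.choice(range(8, 520, 8)) for _ in range(4))  # enforces w_{i+1} >= w_i
    return {"depths": depths, "widths": widths}

def sample_detector_config():
    cfg = sample_backbone_config()
    cfg["neck_channels"] = random.choice(range(8, 264, 8))  # n <= 256, divisible by 8
    cfg["head_channels"] = random.choice(range(8, 264, 8))  # h <= 256, divisible by 8
    cfg["head_blocks"] = random.randint(1, 6)               # m <= 6
    return cfg
```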
3.1.2 ESTIMATION METRIC
Based on the above simplifications, our search space becomes more compact and efficient. We repeat the random sampling in our search space until we obtain 320 models in our target complexity regime, and train each model on the WIDER FACE (Yang et al., 2016) training set for 80 epochs. Then, we test the Average Precision (AP) of each model on the validation set. Based on these 320 pairs of model statistics (xi, APi), where xi is the computation ratio of a particular component and APi the corresponding performance, we can compute the empirical bootstrap (Efron & Tibshirani, 1994) to estimate the likely range in which the best models fall. More specifically, we repeatedly sample 25% of the pairs with replacement 10^3 times and select the pair with maximum AP in each sampling. Afterwards, we compute the 95% confidence interval for the maximum value, and the median gives the most likely best computation ratio.
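The empirical bootstrap described above can be sketched as follows; the helper name and the random seed are ours, while the 25% resampling fraction, the 10^3 repetitions and the 95% interval follow the text.

```python
import numpy as np

def bootstrap_best_ratio(ratios, aps, frac=0.25, n_boot=1000, seed=0):
    """ratios, aps: arrays of length 320 with each model's computation ratio and validation AP."""
    rng = np.random.default_rng(seed)
    ratios, aps = np.asarray(ratios), np.asarray(aps)
    k = int(frac * len(ratios))
    best = []
    for _ in range(n_boot):
        idx = rng.choice(len(ratios), size=k, replace=True)  # 25% of the pairs, with replacement
        best.append(ratios[idx[np.argmax(aps[idx])]])         # ratio of the best model in this sample
    low, high = np.percentile(best, [2.5, 97.5])              # 95% confidence interval
    return float(np.median(best)), (float(low), float(high))  # median = most likely best ratio
```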
[Figure: mAP on the WIDER FACE hard validation set (%) versus the computation ratio of each component, with the empirically estimated best ranges — (a) Backbone ∼ (67%, 88%); (b) Neck ∼ (1%, 7%); (c) Head ∼ (10%, 26%).]
3.1.3 TWO-STEP SEARCH
To further decrease the complexity of search space, we divide our network structure search into the following two steps: (1) CRFD1: search the computational distribution for the backbone only, while fixing the settings of the neck and head to the default configuration, and (2) CRFD2: search the computational distribution over the whole face detector (i.e. backbone, neck and head), with the computational distribution within the backbone, following the optimized CRFD1. By optimizing in both manners, we achieve the final optimized network design for the computation-constrained face detection. In the example below, we constrain CRFD to 2.5 GFlops (CRFD-2.5GF), in order to illustrate our two-step searching strategy.
Computation redistribution on backbone. For CRFD1-2.5GF, we fix the output channel of the neck at 32 and use two stacked 3 × 3 convolutions with 96 output channels. As the neck and head configurations do not change in the whole search process of CRFD1, we can easily find the best computation distribution of the backbone. As described in Fig. 3, we show the distribution of 320 model APs (on the WIDER FACE hard validation set) versus the computation ratio over each component (i.e. stem, C2, C3, C4 and C5) of backbone. After applying an empirical bootstrap (Efron & Tibshirani, 1994), a clear trend emerges, showing that the backbone computation is reallocated to the shallow stages (i.e. C2 and C3).
Computation redistribution on backbone, neck and head. In this step, we only keep the randomly generated network configurations whose backbone settings follow the computation distribution from CRFD1 as shown in Fig. 3. In this case, there are another three degrees of freedom (i.e. output channel number n for neck, output channel number h for head, and the number m of 3 × 3 convolutional layers in head). We repeat the random sampling in our search space, until we obtain 320 qualifying models in our target complexity regime (i.e. 2.5 GFlops). As evident in Fig. 4, most
of the computation is allocated in the backbone, with the head following and the neck having the lowest computation ratio. Fig. 4(d) also depicts the comparison between the hand-crafted model architecture and the computation redistributed network, under the constraint of 2.5 GFlops.
3.2 SAMPLE REDISTRIBUTION
As face detection features large scale variations (from several pixels to thousand pixels), there exist two widely used scale augmentation strategies, random square crop (Zhang et al., 2017b; Deng et al., 2020b; Zhu et al., 2020) and data anchor sampling (Tang et al., 2018). In the random square crop strategy, square patches are cropped from the original image with a random size between [0.3, 1] of the short edge and then resized into 640 × 640 to generate larger training faces. By contrast, data anchor sampling strategy aims to generate more small scale faces by down-sampling the original image, bringing a large amount of padded area. Even though both random square crop and data anchor sampling can achieve promising results on the WIDER FACE dataset, the scale augmentation parameters are manually designed for all different network structures. Therefore, the training sample distribution on the feature pyramids can be sub-optimal for a particular network structure.
To handle extreme scale variations in face detection, we also design a search-able zoom-in and zoom-out space, specified by the scale si and probability pi. The scale si represents the zooming ratio, sampled from a discrete set S = {smin, smin + 0.1, · · · , smax − 0.1, smax}. For a particular training image in each iteration, square patches are cropped from the original images with a zooming ratio si of the short edge of the original images. If the square patch is larger than the original image, average RGB values will fill the missing pixels. To shrink the scale search space, we employ a binary probability set pi ∈ {0, 1}. Under this setting, the probability-based scale search is simplified into a discrete scale sampling from a fixed set. As the interval of the discrete scale set is only 0.1, adjacent scales will have the probability of 1.0 to approximate a higher probability around a particular scale. In this paper, we employ random search under the estimation metric of AP on WIDER FACE to construct the best scale augmentation set. More specifically, we set smin = 0.1 and smax = 3.0. Then, we randomly select 8 to 20 discrete scale values to construct each scale augmentation set and train CRFD models under 320 different scale augmentation sets. Finally, the scale augmentation set with the highest detection performance is selected for optimized scale augmentation.
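For illustration, a possible implementation of the zoom-in/zoom-out square crop and of the random construction of a candidate scale set is sketched below; the way the crop corner is drawn and the per-channel mean padding are our assumptions about details not fully specified above.

```python
import random
import numpy as np

def sample_scale_set(smin=0.1, smax=3.0, step=0.1):
    # Randomly pick 8-20 discrete zoom ratios from {0.1, 0.2, ..., 3.0} as one candidate set.
    scales = [round(smin + i * step, 1) for i in range(int(round((smax - smin) / step)) + 1)]
    return sorted(random.sample(scales, random.randint(8, 20)))

def square_crop(image, scale_set):
    """Crop a square patch of side s * short_edge; pad with the mean RGB when zooming out (s > 1)."""
    h, w = image.shape[:2]
    side = int(random.choice(scale_set) * min(h, w))
    canvas = np.full((side, side, 3), image.mean(axis=(0, 1)), dtype=image.dtype)
    # Top-left corner of the crop in image coordinates; negative offsets occur when side > image size.
    y0 = random.randint(min(0, h - side), max(0, h - side))
    x0 = random.randint(min(0, w - side), max(0, w - side))
    ys = slice(max(y0, 0), min(y0 + side, h))
    xs = slice(max(x0, 0), min(x0 + side, w))
    canvas[ys.start - y0:ys.stop - y0, xs.start - x0:xs.stop - x0] = image[ys, xs]
    return canvas  # the patch is subsequently resized to 640 x 640 for training (resize omitted here)
```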
4 EXPERIMENTS
4.1 IMPLEMENTATION DETAILS
Training. For the scale augmentation, square patches are cropped from the original images with a random size from a pre-defined scale set, and then these patches are resized to 640 × 640 for training. Besides scale augmentation, the training data are also augmented by color distortion and random horizontal flipping, with a probability of 0.5. For the anchor setting, we tile anchors of {16, 32}, {64, 128}, and {256, 512} on the feature maps of stride 8, 16, and 32, respectively. The anchor ratio is set as 1.0. In this paper, we employ Adaptive Training Sample Selection (ATSS)
(Zhang et al., 2020b) for positive anchor matching. In the detection head, weight sharing and Group Normalization (Wu & He, 2018) are used. The losses of classification and regression branches are Generalized Focal Loss (GFL) (Li et al., 2020) and DIoU loss (Zheng et al., 2020), respectively.
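To make the anchor setting explicit, a small sketch of square-anchor tiling is given below; the half-pixel centring convention is our assumption, while the sizes per stride and the 1.0 aspect ratio follow the text.

```python
import numpy as np

ANCHOR_SIZES = {8: (16, 32), 16: (64, 128), 32: (256, 512)}  # anchor sizes tiled on each stride

def tile_anchors(feat_h, feat_w, stride):
    """Return (feat_h * feat_w * 2, 4) square anchors in [x1, y1, x2, y2] format."""
    ys, xs = np.meshgrid(np.arange(feat_h), np.arange(feat_w), indexing="ij")
    cx = (xs.reshape(-1) + 0.5) * stride  # assumed half-pixel centring of each cell
    cy = (ys.reshape(-1) + 0.5) * stride
    boxes = []
    for size in ANCHOR_SIZES[stride]:
        half = size / 2.0
        boxes.append(np.stack([cx - half, cy - half, cx + half, cy + half], axis=1))
    return np.concatenate(boxes, axis=0)

# For a 640 x 640 training crop, the stride-8 map is 80 x 80, giving 80 * 80 * 2 = 12800 anchors.
anchors_stride8 = tile_anchors(80, 80, 8)
```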
Our experiments are implemented in PyTorch, based on the open-source MMDetection (Chen et al., 2019a). We adopt the SGD optimizer (momentum 0.9, weight decay 5e-4) with a batch size of 8×8 and train on eight Tesla V100. The learning rate is linearly warmed up to 0.015 within the first 3 epochs. During network search, the learning rate is multiplied by 0.1 at the 55-th, and 68-th epochs. The learning process terminates on the 80-th epoch. For training of both baselines and searched configurations, the learning rate decays by a factor of 10 at the 440-th and 544-th epochs, and the learning process terminates at the 640-th epoch. All the models are trained from scratch without any pre-training.
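The learning-rate schedule described above amounts to a linear warm-up followed by step decay; a small sketch is given below, where the exact shape of the ramp within the warm-up epochs is an assumption.

```python
def learning_rate(epoch, base_lr=0.015, warmup_epochs=3, milestones=(55, 68), gamma=0.1):
    """Per-epoch learning rate: linear warm-up, then multiply by gamma at each milestone."""
    if epoch < warmup_epochs:
        return base_lr * (epoch + 1) / warmup_epochs  # assumed linear ramp over the first 3 epochs
    return base_lr * gamma ** sum(epoch >= m for m in milestones)

# Search phase: milestones (55, 68) over 80 epochs; final training: milestones (440, 544) over 640 epochs.
```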
Testing. For fair comparisons with other methods, we employ three testing strategies, including single-scale VGA resolution (640 × 480), single-scale original resolution, and multi-scale testing. The results of DSFD (Li et al., 2019), RetinaFace (Deng et al., 2020b), TinaFace (Zhu et al., 2020), Faceboxes (Zhang et al., 2017a), libfacedetection (Feng et al., 2021) and LFFD (He et al., 2019b) are reported by testing the released models, while the HAMBox (Liu et al., 2020) and BFBox (Liu & Tang, 2020) models are shared from the author.
4.2 ABLATION STUDY
In Tab. 1, we present the performance of models on the WIDER FACE dataset by gradually including the proposed computation and sample redistribution methods. Our manually-designed baseline model, ResNet-2.5GF, gets APs of 91.87%, 89.49%, and 67.32% under three validation scenarios.
Computation redistribution. After separately employing the proposed computation redistribution on the backbone and the whole detector, the AP on the hard set improves to 69.78% and 70.98%. This indicates that (1) the network structure directly inherited from the classification task is suboptimal for the face detection task, and (2) joint computation reallocation on the backbone, neck and head outperforms computation optimization applied only on the backbone. Furthermore, the proposed two-step computation redistribution strategy achieves AP of 71.37%, surpassing one-step computation reallocation on the whole detector by 0.39%. As we shrink the whole search space by the proposed two-step strategy and our random model sampling number is fixed at 320, the twostep method is possible to find better network configurations from the large search space. In Tab. 1, we also compare our method with the single path one-shot NAS method (BFBox (Liu & Tang, 2020)) and the evolutionary search method (Appendix A.2), under the constraint of 2.5 GFlops. BFBox aims to design a face-appropriate search space by combing some excellent block designs, such as bottleneck block, densenet block and shufflenet block. However, such a combination generates a complex and redundant search space, which inevitably involves a vast body of low-performance candidate architectures. The evolutionary approach iteratively adopts mutations and crossover to gradually generate better architecture candidates from the randomly initialized search space, which also contains a large number of under-performing architectures. By contrast, CRFD-2.5GF utilizes an empirical bootstrap to estimate the optimized computation distribution of the best-performing architecture candidates, which directly eliminates the low-quality architectures from the initialized search
space. Therefore, CRFD-2.5GF can obviously outperform the BFBox and evolutionary method by 1.96% and 1.75% on the hard track.
Sample redistribution. For scale augmentation, we first manually extend the default scale set {0.3, 0.45, 0.6, 0.8, 1.0} by adding larger scales {1.2, 1.4, 1.6, 1.8, 2.0}. By adding this hand-crafted sample redistribution, the hard set APs significantly increase by 7.15% for the baseline and 6.5% for the proposed CRFD, indicating the benefit from allocating more training samples on the feature map of stride 8. By employing the optimized scale augmentation from searching, the hard set AP further increases by 0.48% for the proposed CRFD. For SCRFD-2.5GF, the best scale augmentation set searched is {0.5, 0.7, 0.8, 1.0, 1.1, 1.2, 1.4, 1.5, 1.8, 2.0, 2.3, 2.6}. As we can see from these discrete scales, faces around the original scale are preferred for training, along with an appropriate probability and ratio of zooming-out.
4.3 COMPUTATION REDISTRIBUTION ACROSS DIFFERENT COMPUTE REGIMES
Besides the complexity constraint of 2.5 GFlops, we also utilize the same two-step computation redistribution method to explore the network structure optimization for higher compute regimes (e.g. 10 GFlops and 34 GFlops) and lower compute regimes (e.g. 0.5 GFlops and 1.0 GFlops). In Fig. 5, we show the computation redistribution and the optimized network structures under different computation constraints.
Our final architectures have almost the same flops as the baseline networks. From these redistribution results, we can draw the following conclusions: (1) more computation is allocated in the backbone and the computation on the neck and head is compressed; (2) more capacity is reallocated in shallow stages due to the specific scale distribution on WIDER FACE; (3) for the high compute regime (e.g. 34 GFlops), the explored structure utilizes the bottleneck residual block and we observe significant depth scaling, instead of width scaling in shallow stages. Scaling the width is subject to over-fitting due to the larger increase in parameters (Bello et al., 2021). By contrast, scaling the depth, especially in the earlier layers, introduces fewer parameters compared to scaling the width; (4) for the mobile regime (0.5 GFlops), allocating the limited capacity in the deep stage (e.g. C5) for the discriminative features captured in the deep stage, can benefit the shallow small face detection by the top-down neck pathway.
4.4 ACCURACY AND EFFICIENCY COMPARISONS ON WIDER FACE
As shown in Tab. 2 and Tab. 3, we compared the proposed SCRFD with other state-of-the-art face detection algorithms (e.g. DSFD (Li et al., 2019), RetinaFace (Deng et al., 2020b), BFBox (Liu & Tang, 2020), HAMBox (Liu et al., 2020) and TinaFace (Zhu et al., 2020)) as well as light-weight face methods (e.g. Faceboxes (Zhang et al., 2017a), libfacedetection (Feng et al., 2021) and LFFD (He et al., 2019b)). Overall, all of the proposed SCRFD models provide considerable improvements compared to the hand-crafted baseline models (e.g. ResNet-2.5GF and MobileNet-0.5GF), by optimizing the network structure as well as the scale augmentation, across a wide range of compute regimes.
When we fix the testing scale at 640 as in Tab. 2, the proposed SCRFD-34GF outperforms all these state-of-the-art methods on the three subsets, especially for the hard track, which contains a large number of tiny faces. More specifically, SCRFD-34GF surpasses TinaFace by 4.78% while being more than 3× faster on GPUs. In addition, the computation cost of SCRFD-34GF is only around 20% of TinaFace. As SCRFD-34GF scales the depth in the earlier layers, it also introduces fewer parameters, resulting in a much smaller model size (9.80M ). Compared to the hand-crafted baseline (ResNet-34GF), the proposed computation redistribution and sample redistribution improve the AP by 1.27% and 0.92%, indicating the superiority of SCRFD over manual designs. Compared to the single path one-shot NAS method, SCRFD-34GF outperforms BFBox by 15.81%, while using a more compact model size. As the search space of BFBox is complex, there exists a large number of low-performance architectures. In addition, BFBox only searches the backbone and neck without considering the optimization on the head. For multi-scale testing, SCRFD-34GF slightly outperforms TinaFace but consumes much less computation. For the low-compute regimes in Tab. 3, SCRFD-0.5GF significantly outperforms RetinaFace-MobileNet0.25 by 21.19% on the hard AP, while consuming only 63.34% computation and 45.57% inference time under the VGA resolution. When the evaluation is conducted on the original image, SCRFD-0.5GF surpasses LFFD by 3.6% on the hard AP, while consuming only 5.5% flops.
5 CONCLUSIONS
In this work, we present a sample and computation redistribution paradigm for efficient face detection. Our results show significantly improved accuracy and efficiency trade-off by the proposed SCRFD across a wide range of compute regimes, when compared to the current state-of-the-art.
Acknowledgements. We would like to thank Hui Ni from Tencent for preparing the mobile demo of SCRFD https://github.com/nihui/ncnn-android-scrfd. Stefanos Zafeiriou acknowledges support from the EPSRC Fellowship DEFORM (EP/S010203/1), FACER2VM (EP/N007743/1) and a Google Faculty Fellowship.
A APPENDIX
A.1 TINAFACE REVISITED
Based on RetinaNet (Lin et al., 2017a), TinaFace (Zhu et al., 2020) employs ResNet-50 (He et al., 2016) as backbone, and Feature Pyramid Network (FPN) (Lin et al., 2017a) as neck to construct the feature extractor. For the head design, TinaFace first uses a feature enhancement module on each feature pyramid to learn surrounding context through different receptive fields in the inception block (Szegedy et al., 2015). Then, four consecutive 3 × 3 convolutional layers are appended on each feature pyramid. Focal loss (Lin et al., 2017b) is used for the classification branch, DIoU loss (Zheng et al., 2020) for the box regression branch and cross-entropy loss for the IoU prediction branch.
To detect tiny faces, TinaFace tiles anchors of three different scales over each level of the FPN (i.e. {2^{4/3}, 2^{5/3}, 2^{6/3}} × {4, 8, 16, 32, 64, 128}, from level P2 to P7). The aspect ratio is set as 1.3. During training, square patches are cropped from the original image and resized to 640×640, using a scaling factor randomly sampled from [0.3, 0.45, 0.6, 0.8, 1.0], multiplied by the length of the original image's short edge. During testing, TinaFace employs single-scale testing when the short and long edges of the image do not surpass [1100, 1650]. Otherwise, it employs multi-scale testing, with the short edge scaled to [500, 800, 1100, 1400, 1700], shifts in the directions [(0, 0), (0, 1), (1, 0), (1, 1)] and horizontal flip.
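For reference, the anchor side lengths implied by this tiling can be enumerated as follows; reading the products of the octave scales and the per-level base sizes directly as pixel side lengths is our interpretation of the formula above.

```python
# TinaFace anchor side lengths: three octave scales per pyramid level P2-P7, aspect ratio 1.3 for all.
base_sizes = {"P2": 4, "P3": 8, "P4": 16, "P5": 32, "P6": 64, "P7": 128}
octave_scales = [2 ** (4 / 3), 2 ** (5 / 3), 2 ** (6 / 3)]
anchor_sizes = {level: [round(b * s, 1) for s in octave_scales] for level, b in base_sizes.items()}
# e.g. anchor_sizes["P2"] == [10.1, 12.7, 16.0] and anchor_sizes["P7"] == [322.5, 406.4, 512.0]
```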
As shown in Fig. 6(a) and Tab. 4, we compare the performance of TinaFace under different testing scales. For the multi-scale testing, TinaFace achieves an impressive AP of 93.4%, which is the current best performance on the WIDER FACE leader-board. For large single-scale testing (1650), the AP slightly drops at 93.0% but the computation significantly decreases to 1021.82 GFlops. On the original scale (1024), the performance of TinaFace is still very high, obtaining an AP of 91.4% with 508.47 GFlops. Moreover, when the testing scale decreases to VGA level (640), the AP significantly reduces to 81.4%, with the computation further decreasing at 172.95 GFlops.
In Fig. 6(b), we illustrate the computation distribution of TinaFace on the backbone, neck and head components with a testing scale of 640. From the view of different scales of the feature pyramid, the majority of the computational costs (about 68%) are from stride 4, as the resolution of feature map is quite large (120×160). From the view of the different components of the face detector, most of the computational costs (about 79%) are from the head, since the backbone structure is directly borrowed from the ImageNet classification task (Deng et al., 2009), without any modification.
Even though TinaFace achieves state-of-the-art performance on tiny face detection, the heavy computational cost renders it unsuitable for real-time applications.
A.2 DETAILS OF EVOLUTIONARY BASELINE
To compare the proposed SCRFD with the other network search methods in Tab. 1, we design the evolutionary baseline (Real et al., 2019) as follows:
1. A population of networks P is randomly initialized. We set |P| = 50.
2. Each network architecture from P is trained on the WIDER FACE training data, and the APs on the WIDER FACE validation dataset are then evaluated.
3. Architectures with top performance are selected from P as the parent set P′. To generate child networks C, we employ the mutation and crossover policies. Here, we set |P′| = 10 and |C| = 50.
4. Each network architecture from C is trained on the WIDER FACE training data and then the APs on the WIDER FACE validation dataset are calculated.
5. The worst 50 individuals from the populations of P ∪C are dropped and then we get the new evolutionary population P.
6. We repeat steps 3, 4 and 5 for 20 times, resulting in 1000 network architectures as well as their validation APs. The architecture with the highest AP is selected as the final result.
A.3 ALGORITHM OF COMPUTATION REDISTRIBUTION
In Algorithm 1, we show the details of the proposed two-step computation redistribution method.
Algorithm 1: Search algorithm for computation redistribution
Input: Constraint of computation cost Y (in GFlops); number of random network architectures N; dataset for training Dtrain and validation Dval; evaluation metric AP.
Output: Best architecture A∗.

Step 1 (backbone search, CRFD1):
  Initialize the architecture set A = Ø
  while length(A) < N do
    net = RandomSampling({di, wi})  /* di and wi denote block number and channel number, i = 2, 3, 4, 5. */
    if 0.98 ∗ Y ≤ net.Flops ≤ 1.02 ∗ Y then A.Append(net)
  end
  ParallelTrain(A, Dtrain); APs = Evaluate(A, Dval); CR1 = Bootstrap(A, APs)

Step 2 (whole-detector search, CRFD2):
  Initialize the architecture set A = Ø
  while length(A) < N do
    net = RandomSampling({CR1, n, m, h})  /* n, m and h denote channel number in the neck, and block number and channel number in the head. */
    if 0.98 ∗ Y ≤ net.Flops ≤ 1.02 ∗ Y then A.Append(net)
  end
  ParallelTrain(A, Dtrain); APs = Evaluate(A, Dval); CR2 = Bootstrap(A, APs)

Output: Best architecture A∗ = choose_top1(CR2, A).
A.4 DETAILED NETWORK CONFIGURATIONS
In Tab. 5, we give the detailed network configurations for baselines and the proposed CRFD across different compute regimes.
A.5 STATISTICS AFTER SAMPLE REDISTRIBUTION
As illustrated in Fig. 7(a), there are more faces below the scale of 32 after the proposed automatic scale augmentation strategy is used. Moreover, even though there will be more extremely tiny faces (e.g. < 4 × 4) under the proposed scale augmentation, these ground-truth faces will be neglected during training due to unsuccessful anchor matching. As shown in Fig. 7(b), positive anchors within one epoch significantly increase at the scale of 16 and 32. With more training samples redistributed to the small scale, the branch to detect tiny faces can be trained more adequately.
A.6 DATASETS
WIDER FACE The WIDER FACE dataset (Yang et al., 2016) consists of 32,203 images and 393,703 face bounding boxes with a high degree of variability in scale, pose, expression, occlusion and illumination. The WIDER FACE dataset is split into training (40%), validation (10%) and testing (50%) subsets by randomly sampling from 61 scene categories. Based on the detection rate of EdgeBox (Zitnick & Dollár, 2014), three levels of difficulty (i.e. Easy, Medium and Hard) are defined by incrementally incorporating hard samples.
AFW The AFW dataset (Zhu & Ramanan, 2012) contains 205 high-resolution images with 473 faces (Mathias et al., 2014) collected from Flickr. Images in this dataset contain cluttered backgrounds with large variations in viewpoint.
PASCAL The PASCAL face dataset (Mathias et al., 2014) is collected from the PASCAL 2012 person layout subset and includes 1,335 labeled faces in 851 images with large facial appearance and pose variations (e.g. large in-plane rotation).
FDDB The FDDB dataset (Jain & Learned-Miller, 2010) is a collection of labeled faces from the Faces in the Wild dataset. It contains a total of 5,171 face annotations on 2,845 images. The dataset incorporates a range of challenges, including difficult pose angles, out-of-focus faces and low resolution.
A.7 CROSS DATASET EVALUATION AND VISUALIZATION
Besides the evaluation on the WIDER FACE (Yang et al., 2016) data set, we also conduct cross dataset evaluation and test the proposed SCRFD models on AFW (Zhu & Ramanan, 2012), PASCAL (Mathias et al., 2014) and FDDB (Jain & Learned-Miller, 2010), under the VGA resolution. As shown in Fig 8, SCRFD-34GF achieves 99.945% AP on AFW, 99.597% AP on PASCAL, and 99.25% on FDDB, surpassing BFBox (Liu & Tang, 2020) and HAMBox (Liu et al., 2020). Even though the face scale distributions on these three datasets are different from WIDER FACE, the proposed SCRFD-34GF still obtains state-of-the-art performance across different datasets, showing impressive robustness of the proposed computation and sample redistribution approaches. In addition, SCRFD-2.5GF also obtains impressive performance on different datasets with much lower computation cost (99.821% AP on AFW, 98.911% AP on PASCAL, and 99.02% AP on FDDB).
Fig. 9 shows qualitative results generated by SCRFD-2.5GF. As can be seen, our face detector works very well in both indoor and outdoor crowded scenes under different conditions (e.g. appearance variations from pose, occlusion and illumination). The impressive performance across a wide range of scales indicates that SCRFD-2.5GF has a very high recall and can detect faces accurately even without large-scale testing. | 1. What is the focus of the paper in terms of computer vision?
2. What are the novel contributions of the proposed face detection algorithm?
3. How effective are the proposed methods according to the ablation study?
4. In what ways does the paper's approach differ from other state-of-the-art methods?
5. Are there any limitations regarding the applicability of the proposed method? | Summary Of The Paper
Review | Summary Of The Paper
The authors propose a face detection algorithm based on an optimized network architecture and data sampling strategy. Their novelties are two-fold: one is Computation Redistribution (CR), which optimally reallocates the computation between the backbone, neck and head of the model, and the other is Sample Redistribution (SR), which automatically redistributes more training samples to shallow stages. The ablation study showed that both of the proposed methods are effective, and the comparative study showed that the whole pipeline achieves the highest accuracy among state-of-the-art methods on the public WIDER FACE dataset.
Review
strengths:
Both the proposed CR and SR are effective, and the ablation study showed that they clearly improve the detection accuracy.
The proposed method is especially effective for detecting small faces, and the experimental results support this (the accuracy improves substantially on WIDER FACE Hard, which contains many small faces).
weaknesses:
The proposed methods are tailored to detecting objects with a wide range of scales, which might limit their applicability to other tasks.
ICLR | Title
Sample and Computation Redistribution for Efficient Face Detection
Abstract
Although tremendous strides have been made in uncontrolled face detection, accurate face detection with a low computation cost remains an open challenge. In this paper, we point out that computation distribution and scale augmentation are the keys to detecting small faces from low-resolution images. Motivated by these observations, we introduce two simple but effective methods: (1) Computation Redistribution (CR), which reallocates the computation between the backbone, neck and head of the model; and (2) Sample Redistribution (SR), which augments training samples for the most needed stages. The proposed Sample and Computation Redistribution for Face Detection (SCRFD) is implemented by a random search in a meticulously designed search space. Extensive experiments conducted on WIDER FACE demonstrate the state-of-the-art accuracy-efficiency trade-off for the proposed SCRFD family across a wide range of compute regimes. In particular, SCRFD-34GF outperforms the best competitor, TinaFace, by 4.78% (AP at hard set) while being more than 3× faster on GPUs with VGA-resolution images. Code is available at: https://github.com/deepinsight/insightface/tree/master/detection/scrfd.
1 INTRODUCTION
Face detection is a long-standing problem in computer vision with many applications, such as face alignment (Bulat & Tzimiropoulos, 2017; Deng et al., 2019b), face reconstruction (Feng et al., 2018; Gecer et al., 2021), face attribute analysis (Zhang et al., 2018; Pan et al., 2018), and face recognition (Schroff et al., 2015; Deng et al., 2019a; 2020a). Following the pioneering work of (Viola & Jones, 2004), numerous face detection algorithms have been designed. Among them, the single-shot anchor-based approaches (Najibi et al., 2017; Zhang et al., 2017b; Tang et al., 2018; Li et al., 2019; Ming et al., 2019; Deng et al., 2020b; Liu et al., 2020; Zhu et al., 2020) have recently demonstrated very promising performance. In particular, on the most challenging face detection dataset, WIDER FACE (Yang et al., 2016), the average precision (AP) on its hard validation set has been boosted to 93.4% by TinaFace (Zhu et al., 2020).
Even though TinaFace (Zhu et al., 2020) achieves impressive results on unconstrained face detection, it employs large-scale (e.g. 1, 650 pixels) testing, which consumes huge amounts of computational resources. In addition, TinaFace design is based on a generic object detector (i.e. RetinaNet (Lin et al., 2017b)), directly taking the classification network as the backbone, tiling dense anchors on the multi-scale feature maps (i.e. P2 to P7 of neck), and adopting heavy head designs. Without considering the prior of faces, the network design of TinaFace is thus redundant and sub-optimal.
One approach of optimizing such networks’ performance is computation redistribution. Since directly taking the backbone of the classification network for object detection is sub-optimal, the recent CR-NAS (Liang et al., 2020) reallocates the computation across different resolutions to obtain a more balanced Effective Receptive Field (ERF), leading to higher detection performance. In BFbox (Liu & Tang, 2020), a face-appropriate search space is designed, based on the observation of scale distribution gap between general object detection and face detection. In ASFD (Zhang et al.,
∗denotes equal contribution and corresponding author. InsightFace is a nonprofit Github project for 2D and 3D face analysis.
2020a), a differential architecture search is employed to discover optimized feature enhance modules for efficient multi-scale feature fusion and context enhancement. Even though (Liu & Tang, 2020; Zhang et al., 2020a) have realized the limitation of directly applying general backbone, neck and head settings to face detection, CR-NAS (Liang et al., 2020) only focuses the optimization on backbone, BFbox (Liu & Tang, 2020) neglects the optimization of head, and ASFD (Zhang et al., 2020a) only explores the best design for neck.
Another optimization approach is the sample redistribution across different scales. Due to the extremely large scale variance of faces in real-world scenarios, different scale augmentation strategies are employed to introduce scale adaptation into the face detector. The most widely used scale augmentation approaches include random square crop (Zhang et al., 2017b; Deng et al., 2020b; Zhu et al., 2020) and data anchor sampling (Tang et al., 2018). Nevertheless, the scale augmentation parameters in these methods are manually designed for all different network structures. Therefore, traditional multi-scale training in face detection is also tedious and sub-optimal.
Since VGA resolution (640 × 480) is widely used for efficient face detection on numerous mobile phones and digital cameras, we focus on efficient face detection from low-resolution images in this paper. In Fig 1(a), we give the cumulative face scale distribution on the WIDER FACE validation dataset. Under the VGA resolution, most of the faces (78.93%) in WIDER FACE are smaller than 32×32 pixels. Under this specific scale distribution, both network structure and scale augmentation need to be optimized.
In this work, we present a meticulously designed methodology of search space optimization, that addresses both the redistribution between the backbone, neck and head, and the sample redistribution between the most needed scales. As the structure of a face detector determines the distribution of computation and is the key in determining its accuracy and efficiency, we first discover principles of computation distribution under different flop regimes. Inspired by (Radosavovic et al., 2020), we control the degrees of freedom and reduce the search space. More specifically, we randomly sample model architectures with different configurations on backbone (stem and four stages), neck and head. Based on the statistics of these models, we compute the empirical bootstrap (Efron & Tibshirani, 1994) and estimate the likely range in which the best models fall. To further decrease the complexity of the search space, we divide the computation ratio estimation for backbone and the whole detector into two steps. To handle extreme scale variations in face detection, we also design a search-able zoom-in and zoom-out space, specified by discrete scales and binary probabilities. In experiments, the proposed computation redistribution and sample redistribution yield significant and consistent improvement on various compute regimes, even surpassing a range of state-of-the-art face detectors by using much fewer flops as shown in Fig. 1(b).
To sum up, this paper makes following contributions:
• We have proposed a simplified search space, as well as a two-step search strategy for computation redistribution across different components (backbone, neck and head) of a face detector. The proposed computation redistribution method can easily boost detection performance through random search.
• We have designed a search-able zoom-in and zoom-out space for face-specific scale augmentation, which automatically redistributes more training samples for shallow stages, enhancing the detection performance on small faces.
• Extensive experiments conducted on WIDER FACE demonstrate the significantly improved accuracy and efficiency trade-off of the proposed SCRFD across a wide range of compute regimes.
2 RELATED WORK
Face Detection. To deal with extreme variations (e.g. scale, pose, illumination and occlusion) in face detection (Yang et al., 2016), most of the recent single-shot face detectors focus on improving the anchor sampling/matching or feature enhancement. SSH (Najibi et al., 2017) builds detection modules on different feature maps with a rich receptive field. S3FD (Zhang et al., 2017b) introduces an anchor compensation strategy by offsetting anchors for outer faces. PyramidBox (Tang et al., 2018) formulates a data-anchor-sampling strategy to increase the proportion of small faces in the training data. DSFD (Li et al., 2019) introduces small faces supervision signals on the backbone, which implicitly boosts the performance of pyramid features. Group sampling (Ming et al., 2019) emphasizes the importance of the ratio for matched and unmatched anchors. RetinaFace (Deng et al., 2020b) employs deform-able context modules and additional landmark annotations to improve the performance of face detection. HAMBox (Liu et al., 2020) finds that many unmatched anchors in the training phase also have strong localization ability and proposes an online high-quality anchor mining strategy to assign high-quality anchors for outer faces. BFbox (Liu & Tang, 2020) employs a single-path one-shot search method (Guo et al., 2019) to jointly optimize the backbone and neck for face detector. ASFD (Zhang et al., 2020a) explores a differential architecture search to discover optimized feature enhance modules for efficient multi-scale feature fusion and context enhancement. All these methods are either designed by expert experience or partially optimized on backbone, neck and head. By contrast, we search for computation redistribution across different components (backbone, neck and head) of a face detector across a wide range of compute regimes.
Neural Architecture Search. Given a fixed search space of possible networks, Neural Architecture Search (NAS) automatically finds a good model within the search space. DetNAS (Chen et al., 2019b) adopts the evolution algorithm for the backbone search to boost object detection on COCO (Lin et al., 2014). By contrast, CR-NAS (Liang et al., 2020) reallocates the computation across different stages within the backbone to improve object detection. NAS-FPN (Ghiasi et al., 2019) uses reinforcement learning to search the proper FPN for general object detection. As there is an obvious distribution gap between COCO (Lin et al., 2014) and WIDER FACE (Yang et al., 2016), the experience in the above methods is not directly applicable for face detection but gives us an inspiration that the backbone, neck and head can be optimized to enhance the performance of face detection. Inspired by RegNet (Radosavovic et al., 2020), we optimize the computation distribution on backbone, neck and head based on the statistics from a group of random sampled models. We successfully reduce the search space and find the stable computation distribution under a particular complex regime, which significantly improves the model’s performance.
3 METHODOLOGY
To efficiently and accurately detect small faces from low-resolution images (e.g. VGA 640 × 480), we propose two methodologies that, when combined, outperform the state-of-the-art. In Sec. 3.1, we explore the computation redistribution across different stages of backbone, as well as different components (i.e. backbone, neck and head) of the whole detector, given a pre-defined computation budget. Then, in Sec. 3.2, we investigate the redistribution of positive training samples across different scales of feature maps by searching optimized scale augmentations.
3.1 COMPUTATION REDISTRIBUTION
As illustrated in Fig. 2, we apply our search method on a network consisting of (1) RetinaNet (Lin et al., 2017a), with ResNet (He et al., 2016) as the backbone, (2) Path Aggregation Feature Pyramid Network (PAFPN) (Liu et al., 2018) as the neck, and (3) stacked 3 × 3 convolutional layers for the head. Despite the generally simple structure, the total number of possible network configurations of the search space becomes unwieldy. Therefore, we attempt to simplify the tremendous search space and arrive at a low-dimensional design space, consisting of simple and effective networks.
3.1.1 SEARCH SPACE DESIGN
Inspired by RegNet (Radosavovic et al., 2020), we explore the structures of face detectors, assuming fixed standard network blocks (i.e., basic residual or bottleneck blocks with a fixed bottleneck ratio of 4). In our case, the structure of a face detector includes:
• the backbone stem, three 3×3 convolutional layers with w1 output channels (He et al., 2019a);
• the backbone body, four stages (i.e. C2, C3, C4 and C5) operating at progressively reduced resolution, with each stage consisting of a sequence of identical blocks. For each stage i, the degrees of freedom include the number of blocks di (i.e. network depth) and the block width wi (i.e. number of channels);
• the neck, a multi-scale feature aggregation module formed by a top-down path and a bottom-up path with n channels (Liu et al., 2018);
• the head, with m blocks of hi channels to predict face scores and regress face boxes.

The search space can be initially designed as follows. As the channel number of the stem is equal to the block width of the first residual block in C2, the degree of freedom of the stem w1 can be merged into w2. In addition, we employ a shared head design for the three scales of feature maps and fix the channel number for all 3×3 convolutional layers within the heads. Therefore, we reduce the degrees of freedom to three within our neck and head design: (1) the output channel number n for the neck, (2) the output channel number h for the head, and (3) the number of 3×3 convolutional layers m. We perform uniform sampling of n ≤ 256, h ≤ 256, and m ≤ 6 (both n and h are divisible by 8). The backbone search space has 8 degrees of freedom, as there are 4 stages and each stage i has 2 parameters: the number of blocks di and the block width wi. Following RegNet (Radosavovic et al., 2020), we perform uniform sampling of di ≤ 24 and wi ≤ 512 (wi is divisible by 8). As state-of-the-art backbones have increasing widths (Radosavovic et al., 2020), we also constrain the search space according to the principle of wi+1 ≥ wi.
3.1.2 ESTIMATION METRIC
Based on the above simplifications, our search space becomes more compact and efficient. We repeat the random sampling in our search space until we obtain 320 models in our target complexity regime, and train each model on the WIDER FACE (Yang et al., 2016) training set for 80 epochs. Then, we test the Average Precision (AP) of each model on the validation set. Based on these 320 pairs of model statistics (xi, APi), where xi is the computation ratio of a particular component and APi the corresponding performance, we can compute the empirical bootstrap (Efron & Tibshirani, 1994) to estimate the likely range in which the best models fall. More specifically, we repeatedly sample 25% of the pairs with replacement 10^3 times and select the pair with the maximum AP in each sampling. Afterwards, we compute the 95% confidence interval for the maximum value, whose median gives the most likely best computation ratio.
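A minimal sketch of this empirical-bootstrap estimate is given below, assuming the 320 (computation ratio, AP) pairs are stored in two NumPy arrays; the function name and array layout are illustrative.

    import numpy as np

    def bootstrap_best_ratio(ratios, aps, frac=0.25, n_boot=1000, seed=0):
        # ratios[i]: computation ratio of one component for model i
        # aps[i]:    AP of model i on the WIDER FACE hard validation set
        rng = np.random.default_rng(seed)
        n = len(ratios)
        k = max(1, int(frac * n))
        best_ratios = []
        for _ in range(n_boot):
            idx = rng.integers(0, n, size=k)      # sample 25% of the pairs with replacement
            best = idx[np.argmax(aps[idx])]       # keep the pair with the maximum AP
            best_ratios.append(ratios[best])
        lo, hi = np.percentile(best_ratios, [2.5, 97.5])   # 95% confidence interval
        return lo, np.median(best_ratios), hi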
[Figure: mAP on the WIDER FACE hard validation set (%) versus the computation ratio of each component, with the estimated ranges of the best models: (a) Backbone ∼ (67%, 88%); (b) Neck ∼ (1%, 7%); (c) Head ∼ (10%, 26%).]
3.1.3 TWO-STEP SEARCH
To further decrease the complexity of the search space, we divide our network structure search into the following two steps: (1) CRFD1: search the computational distribution for the backbone only, while fixing the settings of the neck and head to the default configuration; and (2) CRFD2: search the computational distribution over the whole face detector (i.e. backbone, neck and head), with the computational distribution within the backbone following the optimized CRFD1. By optimizing in both manners, we achieve the final optimized network design for computation-constrained face detection. In the example below, we constrain CRFD to 2.5 GFlops (CRFD-2.5GF) in order to illustrate our two-step search strategy.
Computation redistribution on backbone. For CRFD1-2.5GF, we fix the output channel number of the neck at 32 and use two stacked 3 × 3 convolutions with 96 output channels. As the neck and head configurations do not change during the whole search process of CRFD1, we can easily find the best computation distribution of the backbone. In Fig. 3, we show the distribution of the 320 model APs (on the WIDER FACE hard validation set) versus the computation ratio over each component (i.e. stem, C2, C3, C4 and C5) of the backbone. After applying an empirical bootstrap (Efron & Tibshirani, 1994), a clear trend emerges, showing that the backbone computation is reallocated to the shallow stages (i.e. C2 and C3).
Computation redistribution on backbone, neck and head. In this step, we only keep the randomly generated network configurations whose backbone settings follow the computation distribution from CRFD1 as shown in Fig. 3. In this case, there are another three degrees of freedom (i.e. output channel number n for neck, output channel number h for head, and the number m of 3 × 3 convolutional layers in head). We repeat the random sampling in our search space, until we obtain 320 qualifying models in our target complexity regime (i.e. 2.5 GFlops). As evident in Fig. 4, most
of the computation is allocated in the backbone, with the head following and the neck having the lowest computation ratio. Fig. 4(d) also depicts the comparison between the hand-crafted model architecture and the computation redistributed network, under the constraint of 2.5 GFlops.
3.2 SAMPLE REDISTRIBUTION
As face detection features large scale variations (from several pixels to a thousand pixels), there exist two widely used scale augmentation strategies, random square crop (Zhang et al., 2017b; Deng et al., 2020b; Zhu et al., 2020) and data anchor sampling (Tang et al., 2018). In the random square crop strategy, square patches are cropped from the original image with a random size between [0.3, 1] of the short edge and then resized to 640 × 640 to generate larger training faces. By contrast, the data anchor sampling strategy aims to generate more small-scale faces by down-sampling the original image, bringing a large amount of padded area. Even though both random square crop and data anchor sampling can achieve promising results on the WIDER FACE dataset, the scale augmentation parameters are manually designed for all different network structures. Therefore, the training sample distribution on the feature pyramids can be sub-optimal for a particular network structure.
To handle extreme scale variations in face detection, we also design a search-able zoom-in and zoom-out space, specified by the scale si and the probability pi. The scale si represents the zooming ratio, sampled from a discrete set S = {smin, smin + 0.1, · · · , smax − 0.1, smax}. For a particular training image in each iteration, a square patch is cropped from the original image with a side length equal to si times the short edge of the original image. If the square patch is larger than the original image, average RGB values fill the missing pixels. To shrink the scale search space, we employ a binary probability set pi ∈ {0, 1}. Under this setting, the probability-based scale search is simplified into discrete scale sampling from a fixed set. As the interval of the discrete scale set is only 0.1, adjacent scales can all be assigned a probability of 1.0 to approximate a higher sampling probability around a particular scale. In this paper, we employ random search under the estimation metric of AP on WIDER FACE to construct the best scale augmentation set. More specifically, we set smin = 0.1 and smax = 3.0. Then, we randomly select 8 to 20 discrete scale values to construct each scale augmentation set and train CRFD models under 320 different scale augmentation sets. Finally, the scale augmentation set with the highest detection performance is selected as the optimized scale augmentation.
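The zoom-in/zoom-out crop described above can be sketched as follows; the exact way the random crop position is drawn is our own assumption, and the final resize simply uses OpenCV.

    import random
    import numpy as np
    import cv2

    def random_square_crop(image, scales):
        # image: HxWx3 uint8 array; scales: discrete set such as [0.5, 0.7, ..., 2.6]
        h, w = image.shape[:2]
        s = random.choice(scales)
        side = int(s * min(h, w))
        # Random top-left corner; it may lie outside the image for zoom-out (s > 1).
        x0 = random.randint(min(0, w - side), max(0, w - side))
        y0 = random.randint(min(0, h - side), max(0, h - side))
        # Fill the whole patch with the per-image mean RGB value first.
        patch = np.full((side, side, 3), image.reshape(-1, 3).mean(axis=0), dtype=image.dtype)
        # Copy the overlapping region between the crop window and the image.
        sx0, sy0 = max(x0, 0), max(y0, 0)
        sx1, sy1 = min(x0 + side, w), min(y0 + side, h)
        patch[sy0 - y0:sy1 - y0, sx0 - x0:sx1 - x0] = image[sy0:sy1, sx0:sx1]
        return cv2.resize(patch, (640, 640))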
4 EXPERIMENTS
4.1 IMPLEMENTATION DETAILS
Training. For the scale augmentation, square patches are cropped from the original images with a random size from a pre-defined scale set, and then these patches are resized to 640 × 640 for training. Besides scale augmentation, the training data are also augmented by color distortion and random horizontal flipping, with a probability of 0.5. For the anchor setting, we tile anchors of {16, 32}, {64, 128}, and {256, 512} on the feature maps of stride 8, 16, and 32, respectively. The anchor ratio is set as 1.0. In this paper, we employ Adaptive Training Sample Selection (ATSS)
(Zhang et al., 2020b) for positive anchor matching. In the detection head, weight sharing and Group Normalization (Wu & He, 2018) are used. The losses of classification and regression branches are Generalized Focal Loss (GFL) (Li et al., 2020) and DIoU loss (Zheng et al., 2020), respectively.
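For reference, a minimal implementation of the DIoU loss for axis-aligned boxes in (x1, y1, x2, y2) format, following the definition in Zheng et al. (2020), is sketched below; it is independent of the MMDetection implementation actually used in our experiments.

    import torch

    def diou_loss(pred, target, eps=1e-7):
        # pred, target: (N, 4) tensors of boxes in (x1, y1, x2, y2) format.
        x1 = torch.max(pred[:, 0], target[:, 0])
        y1 = torch.max(pred[:, 1], target[:, 1])
        x2 = torch.min(pred[:, 2], target[:, 2])
        y2 = torch.min(pred[:, 3], target[:, 3])
        inter = (x2 - x1).clamp(min=0) * (y2 - y1).clamp(min=0)
        area_p = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1])
        area_t = (target[:, 2] - target[:, 0]) * (target[:, 3] - target[:, 1])
        iou = inter / (area_p + area_t - inter + eps)
        # Squared distance between the box centers.
        cxp, cyp = (pred[:, 0] + pred[:, 2]) / 2, (pred[:, 1] + pred[:, 3]) / 2
        cxt, cyt = (target[:, 0] + target[:, 2]) / 2, (target[:, 1] + target[:, 3]) / 2
        rho2 = (cxp - cxt) ** 2 + (cyp - cyt) ** 2
        # Squared diagonal of the smallest enclosing box.
        ex1 = torch.min(pred[:, 0], target[:, 0]); ey1 = torch.min(pred[:, 1], target[:, 1])
        ex2 = torch.max(pred[:, 2], target[:, 2]); ey2 = torch.max(pred[:, 3], target[:, 3])
        c2 = (ex2 - ex1) ** 2 + (ey2 - ey1) ** 2 + eps
        return (1 - iou + rho2 / c2).mean()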
Our experiments are implemented in PyTorch, based on the open-source MMDetection (Chen et al., 2019a). We adopt the SGD optimizer (momentum 0.9, weight decay 5e-4) with a batch size of 8 × 8 and train on eight Tesla V100 GPUs. The learning rate is linearly warmed up to 0.015 within the first 3 epochs. During network search, the learning rate is multiplied by 0.1 at the 55-th and 68-th epochs, and the learning process terminates at the 80-th epoch. For the training of both the baselines and the searched configurations, the learning rate decays by a factor of 10 at the 440-th and 544-th epochs, and the learning process terminates at the 640-th epoch. All the models are trained from scratch without any pre-training.
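The optimizer and the search-time learning-rate schedule described above can be reproduced with standard PyTorch components, as in the sketch below; the per-epoch LambdaLR factor is only an approximation of the warm-up behaviour, and the helper name is illustrative.

    import torch

    def build_optimizer_and_scheduler(model, base_lr=0.015, warmup_epochs=3, milestones=(55, 68)):
        optimizer = torch.optim.SGD(model.parameters(), lr=base_lr,
                                    momentum=0.9, weight_decay=5e-4)

        def lr_factor(epoch):
            if epoch < warmup_epochs:
                # Linear warm-up towards the base learning rate.
                return float(epoch + 1) / warmup_epochs
            # Multiply by 0.1 at each milestone that has been passed.
            return 0.1 ** sum(epoch >= m for m in milestones)

        scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=lr_factor)
        return optimizer, scheduler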
Testing. For fair comparisons with other methods, we employ three testing strategies, including single-scale VGA resolution (640 × 480), single-scale original resolution, and multi-scale testing. The results of DSFD (Li et al., 2019), RetinaFace (Deng et al., 2020b), TinaFace (Zhu et al., 2020), Faceboxes (Zhang et al., 2017a), libfacedetection (Feng et al., 2021) and LFFD (He et al., 2019b) are reported by testing the released models, while the HAMBox (Liu et al., 2020) and BFBox (Liu & Tang, 2020) models were shared by their authors.
4.2 ABLATION STUDY
In Tab. 1, we present the performance of models on the WIDER FACE dataset by gradually including the proposed computation and sample redistribution methods. Our manually-designed baseline model, ResNet-2.5GF, obtains APs of 91.87%, 89.49%, and 67.32% on the three validation subsets (Easy, Medium and Hard).
Computation redistribution. After separately employing the proposed computation redistribution on the backbone and on the whole detector, the AP on the hard set improves to 69.78% and 70.98%, respectively. This indicates that (1) the network structure directly inherited from the classification task is sub-optimal for the face detection task, and (2) joint computation reallocation on the backbone, neck and head outperforms computation optimization applied only on the backbone. Furthermore, the proposed two-step computation redistribution strategy achieves an AP of 71.37%, surpassing one-step computation reallocation on the whole detector by 0.39%. As we shrink the whole search space with the proposed two-step strategy while the number of randomly sampled models is fixed at 320, the two-step method is more likely to find better network configurations within the large search space. In Tab. 1, we also compare our method with the single-path one-shot NAS method (BFBox (Liu & Tang, 2020)) and the evolutionary search method (Appendix A.2), under the constraint of 2.5 GFlops. BFBox aims to design a face-appropriate search space by combining several well-established block designs, such as the bottleneck block, the DenseNet block and the ShuffleNet block. However, such a combination generates a complex and redundant search space, which inevitably involves a vast body of low-performance candidate architectures. The evolutionary approach iteratively adopts mutation and crossover to gradually generate better architecture candidates from the randomly initialized search space, which also contains a large number of under-performing architectures. By contrast, CRFD-2.5GF utilizes an empirical bootstrap to estimate the optimized computation distribution of the best-performing architecture candidates, which directly eliminates the low-quality architectures from the initialized search space. Therefore, CRFD-2.5GF clearly outperforms BFBox and the evolutionary method by 1.96% and 1.75% on the hard track, respectively.
Sample redistribution. For scale augmentation, we first manually extend the default scale set {0.3, 0.45, 0.6, 0.8, 1.0} by adding larger scales {1.2, 1.4, 1.6, 1.8, 2.0}. By adding this hand-crafted sample redistribution, the hard set APs significantly increase by 7.15% for the baseline and 6.5% for the proposed CRFD, indicating the benefit from allocating more training samples on the feature map of stride 8. By employing the optimized scale augmentation from searching, the hard set AP further increases by 0.48% for the proposed CRFD. For SCRFD-2.5GF, the best scale augmentation set searched is {0.5, 0.7, 0.8, 1.0, 1.1, 1.2, 1.4, 1.5, 1.8, 2.0, 2.3, 2.6}. As we can see from these discrete scales, faces around the original scale are preferred for training, along with an appropriate probability and ratio of zooming-out.
4.3 COMPUTATION REDISTRIBUTION ACROSS DIFFERENT COMPUTE REGIMES
Besides the complexity constraint of 2.5 GFlops, we also utilize the same two-step computation redistribution method to explore the network structure optimization for higher compute regimes (e.g. 10 GFlops and 34 GFlops) and lower compute regimes (e.g. 0.5 GFlops and 1.0 GFlops). In Fig. 5, we show the computation redistribution and the optimized network structures under different computation constraints.
Our final architectures have almost the same flops as the baseline networks. From these redistribution results, we can draw the following conclusions: (1) more computation is allocated in the backbone and the computation on the neck and head is compressed; (2) more capacity is reallocated to the shallow stages due to the specific scale distribution on WIDER FACE; (3) for the high compute regime (e.g. 34 GFlops), the explored structure utilizes the bottleneck residual block and we observe significant depth scaling, instead of width scaling, in the shallow stages. Scaling the width is subject to over-fitting due to the larger increase in parameters (Bello et al., 2021). By contrast, scaling the depth, especially in the earlier layers, introduces fewer parameters compared to scaling the width; (4) for the mobile regime (0.5 GFlops), allocating the limited capacity to the deep stage (e.g. C5), where the most discriminative features are captured, benefits the detection of small faces on the shallow stages through the top-down pathway of the neck.
4.4 ACCURACY AND EFFICIENCY COMPARISONS ON WIDER FACE
As shown in Tab. 2 and Tab. 3, we compared the proposed SCRFD with other state-of-the-art face detection algorithms (e.g. DSFD (Li et al., 2019), RetinaFace (Deng et al., 2020b), BFBox (Liu & Tang, 2020), HAMBox (Liu et al., 2020) and TinaFace (Zhu et al., 2020)) as well as light-weight face methods (e.g. Faceboxes (Zhang et al., 2017a), libfacedetection (Feng et al., 2021) and LFFD (He et al., 2019b)). Overall, all of the proposed SCRFD models provide considerable improvements compared to the hand-crafted baseline models (e.g. ResNet-2.5GF and MobileNet-0.5GF), by optimizing the network structure as well as the scale augmentation, across a wide range of compute regimes.
When we fix the testing scale at 640 as in Tab. 2, the proposed SCRFD-34GF outperforms all these state-of-the-art methods on the three subsets, especially for the hard track, which contains a large number of tiny faces. More specifically, SCRFD-34GF surpasses TinaFace by 4.78% while being more than 3× faster on GPUs. In addition, the computation cost of SCRFD-34GF is only around 20% of TinaFace. As SCRFD-34GF scales the depth in the earlier layers, it also introduces fewer parameters, resulting in a much smaller model size (9.80M ). Compared to the hand-crafted baseline (ResNet-34GF), the proposed computation redistribution and sample redistribution improve the AP by 1.27% and 0.92%, indicating the superiority of SCRFD over manual designs. Compared to the single path one-shot NAS method, SCRFD-34GF outperforms BFBox by 15.81%, while using a more compact model size. As the search space of BFBox is complex, there exists a large number of low-performance architectures. In addition, BFBox only searches the backbone and neck without considering the optimization on the head. For multi-scale testing, SCRFD-34GF slightly outperforms TinaFace but consumes much less computation. For the low-compute regimes in Tab. 3, SCRFD-0.5GF significantly outperforms RetinaFace-MobileNet0.25 by 21.19% on the hard AP, while consuming only 63.34% computation and 45.57% inference time under the VGA resolution. When the evaluation is conducted on the original image, SCRFD-0.5GF surpasses LFFD by 3.6% on the hard AP, while consuming only 5.5% flops.
5 CONCLUSIONS
In this work, we present a sample and computation redistribution paradigm for efficient face detection. Our results show significantly improved accuracy and efficiency trade-off by the proposed SCRFD across a wide range of compute regimes, when compared to the current state-of-the-art.
Acknowledgements. We would like to thank Hui Ni from Tencent for preparing the mobile demo of SCRFD https://github.com/nihui/ncnn-android-scrfd. Stefanos Zafeiriou acknowledges support from the EPSRC Fellowship DEFORM (EP/S010203/1), FACER2VM (EP/N007743/1) and a Google Faculty Fellowship.
A APPENDIX
A.1 TINAFACE REVISITED
Based on RetinaNet (Lin et al., 2017a), TinaFace (Zhu et al., 2020) employs ResNet-50 (He et al., 2016) as backbone, and Feature Pyramid Network (FPN) (Lin et al., 2017a) as neck to construct the feature extractor. For the head design, TinaFace first uses a feature enhancement module on each feature pyramid to learn surrounding context through different receptive fields in the inception block (Szegedy et al., 2015). Then, four consecutive 3 × 3 convolutional layers are appended on each feature pyramid. Focal loss (Lin et al., 2017b) is used for the classification branch, DIoU loss (Zheng et al., 2020) for the box regression branch and cross-entropy loss for the IoU prediction branch.
To detect tiny faces, TinaFace tiles anchors of three different scales over each level of the FPN (i.e. {2^(4/3), 2^(5/3), 2^(6/3)} × {4, 8, 16, 32, 64, 128}, from level P2 to P7). The aspect ratio is set to 1.3. During training, square patches are cropped from the original image and resized to 640 × 640, using a scaling factor randomly sampled from [0.3, 0.45, 0.6, 0.8, 1.0], multiplied by the length of the original image's short edge. During testing, TinaFace employs single-scale testing when the short and long edges of the image do not surpass [1100, 1650]. Otherwise, it employs multi-scale testing with short-edge scaling at [500, 800, 1100, 1400, 1700], shifts along the directions [(0, 0), (0, 1), (1, 0), (1, 1)], and horizontal flip.
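For reference, the resulting anchor sizes can be enumerated with a few lines of Python; for example, the three octave scales applied to the base size 4 of level P2 give anchors of roughly 10.1, 12.7 and 16.0 pixels.

    scales = [2 ** (4 / 3), 2 ** (5 / 3), 2 ** (6 / 3)]   # ~2.52, ~3.17, ~4.00
    base_sizes = [4, 8, 16, 32, 64, 128]                  # levels P2 to P7
    for base in base_sizes:
        print(base, [round(base * s, 1) for s in scales])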
As shown in Fig. 6(a) and Tab. 4, we compare the performance of TinaFace under different testing scales. For the multi-scale testing, TinaFace achieves an impressive AP of 93.4%, which is the current best performance on the WIDER FACE leader-board. For large single-scale testing (1650), the AP slightly drops to 93.0% but the computation significantly decreases to 1021.82 GFlops. On the original scale (1024), the performance of TinaFace is still very high, obtaining an AP of 91.4% with 508.47 GFlops. Moreover, when the testing scale decreases to the VGA level (640), the AP significantly reduces to 81.4%, with the computation further decreasing to 172.95 GFlops.
In Fig. 6(b), we illustrate the computation distribution of TinaFace over the backbone, neck and head components with a testing scale of 640. From the view of the different scales of the feature pyramid, the majority of the computational costs (about 68%) come from stride 4, as the resolution of the feature map is quite large (120 × 160). From the view of the different components of the face detector, most of the computational costs (about 79%) come from the head, since the backbone structure is directly borrowed from the ImageNet classification task (Deng et al., 2009), without any modification.
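These percentages follow from the standard cost of a convolution, H_out × W_out × C_in × C_out × k² multiply–accumulates; the toy estimator below (with illustrative channel numbers) shows why the stride-4 level dominates the cost at VGA resolution.

    def conv_flops(h_out, w_out, c_in, c_out, k=3):
        # Multiply-accumulate count of a single k x k convolution layer.
        return h_out * w_out * c_in * c_out * k * k

    # One 3x3 conv with 256 input/output channels on a 640x480 input:
    print(conv_flops(120, 160, 256, 256) / 1e9)   # stride-4 map (120x160): ~11.3 GMACs
    print(conv_flops(60, 80, 256, 256) / 1e9)     # stride-8 map (60x80):   ~2.8 GMACs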
Even though TinaFace achieves state-of-the-art performance on tiny face detection, the heavy computational cost renders it unsuitable for real-time applications.
A.2 DETAILS OF EVOLUTIONARY BASELINE
To compare the proposed SCRFD with the other network search methods in Tab. 1, we design the evolutionary baseline (Real et al., 2019) as follows:
1. A population of networks P is randomly initialized. We set |P| = 50.
2. Each network architecture from P is trained on the WIDER FACE training data, and then its AP on the WIDER FACE validation set is tested.
3. The top-performing architectures are selected from P as the parent set. To generate child networks C, we employ mutation and crossover policies. Here, we set the number of parents to 10 and |C| = 50.
4. Each network architecture from C is trained on the WIDER FACE training data, and then the APs on the WIDER FACE validation set are calculated.
5. The worst 50 individuals in the combined population P ∪ C are dropped, and the remaining individuals form the new population P.
6. We repeat steps 3, 4 and 5 twenty times, resulting in 1000 network architectures as well as their validation APs. The architecture with the highest AP is selected as the final result.
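A compact Python sketch of this loop is given below; sample_detector, train_and_eval, mutate and crossover are placeholders for the architecture sampling, training/evaluation and variation operators, and are not part of any released code.

    import random

    def evolutionary_search(generations=20, pop_size=50, n_parents=10, n_children=50):
        population = [sample_detector() for _ in range(pop_size)]                  # step 1
        scores = {id(a): train_and_eval(a) for a in population}                    # step 2
        for _ in range(generations):                                               # step 6
            parents = sorted(population, key=lambda a: scores[id(a)],
                             reverse=True)[:n_parents]                             # step 3
            children = [mutate(crossover(*random.sample(parents, 2)))
                        for _ in range(n_children)]
            scores.update({id(c): train_and_eval(c) for c in children})            # step 4
            population = sorted(population + children, key=lambda a: scores[id(a)],
                                reverse=True)[:pop_size]                           # step 5
        return max(population, key=lambda a: scores[id(a)])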
A.3 ALGORITHM OF COMPUTATION REDISTRIBUTION
In Algorithm 1, we show the details of the proposed two-step computation redistribution method.
Algorithm 1: Search algorithm for computation redistribution
Input: constraint of computation cost Y (in GFlops); number of random network architectures N; dataset for training Dtrain and validation Dval; evaluation metric AP.
Output: best architecture A*.

    Initialize the architecture set A = Ø
    while length(A) < N do
        net = RandomSampling({di, wi})        /* di and wi denote block number and channel number, i = 2, 3, 4, 5 */
        if 0.98 * Y ≤ net.Flops ≤ 1.02 * Y then
            A.Append(net)
        end
    end
    ParallelTrain(A, Dtrain)
    APs = Evaluate(A, Dval)
    CR1 = Bootstrap(A, APs)

    Initialize the architecture set A = Ø
    while length(A) < N do
        net = RandomSampling({CR1, n, m, h})   /* n, m and h denote the neck channel number, the head block number and the head channel number */
        if 0.98 * Y ≤ net.Flops ≤ 1.02 * Y then
            A.Append(net)
        end
    end
    ParallelTrain(A, Dtrain)
    APs = Evaluate(A, Dval)
    CR2 = Bootstrap(A, APs)

    Output: best architecture A* = choose_top1(CR2, A)
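Under the same assumptions (stubbed training, evaluation, flop-counting and per-stage bootstrap routines), the control flow of Algorithm 1 can be written as a short Python driver; all helper names below are placeholders.

    def sample_in_regime(sampler, target_gflops, n_models=320):
        models = []
        while len(models) < n_models:
            net = sampler()
            if 0.98 * target_gflops <= count_gflops(net) <= 1.02 * target_gflops:
                models.append(net)
        return models

    def two_step_search(target_gflops):
        # Step 1: backbone-only search with a fixed neck and head (CRFD1).
        models = sample_in_regime(sample_backbone_config, target_gflops)
        aps = [train_and_eval(m) for m in models]
        cr1 = bootstrap_best_ratio_per_stage(models, aps)   # computation ratio per backbone stage
        # Step 2: search neck and head on top of the CRFD1 backbone distribution (CRFD2).
        models = sample_in_regime(lambda: sample_full_config(cr1), target_gflops)
        aps = [train_and_eval(m) for m in models]
        return models[max(range(len(models)), key=lambda i: aps[i])]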
A.4 DETAILED NETWORK CONFIGURATIONS
In Tab. 5, we give the detailed network configurations for baselines and the proposed CRFD across different compute regimes.
A.5 STATISTICS AFTER SAMPLE REDISTRIBUTION
As illustrated in Fig. 7(a), there are more faces below the scale of 32 after the proposed automatic scale augmentation strategy is used. Moreover, even though there will be more extremely tiny faces (e.g. < 4 × 4) under the proposed scale augmentation, these ground-truth faces will be neglected during training due to unsuccessful anchor matching. As shown in Fig. 7(b), positive anchors within one epoch significantly increase at the scale of 16 and 32. With more training samples redistributed to the small scale, the branch to detect tiny faces can be trained more adequately.
A.6 DATASETS
WIDER FACE The WIDER FACE dataset (Yang et al., 2016) consists of 32,203 images and 393,703 face bounding boxes with a high degree of variability in scale, pose, expression, occlusion and illumination. The WIDER FACE dataset is split into training (40%), validation (10%) and testing (50%) subsets by randomly sampling from 61 scene categories. Based on the detection rate of EdgeBox (Zitnick & Dollár, 2014), three levels of difficulty (i.e. Easy, Medium and Hard) are defined by incrementally incorporating hard samples.
AFW The AFW dataset (Zhu & Ramanan, 2012) contains 205 high-resolution images with 473 faces (Mathias et al., 2014) collected from Flickr. Images in this dataset contain cluttered backgrounds with large variations in viewpoint.
PASCAL The PASCAL face dataset (Mathias et al., 2014) is collected from the PASCAL 2012 person layout subset and includes 1,335 labeled faces in 851 images with large facial appearance and pose variations (e.g. large in-plane rotation).
FDDB The FDDB dataset (Jain & Learned-Miller, 2010) is a collection of labeled faces from the Faces in the Wild dataset. It contains a total of 5,171 face annotations on 2,845 images. The dataset incorporates a range of challenges, including difficult pose angles, out-of-focus faces and low resolution.
A.7 CROSS DATASET EVALUATION AND VISUALIZATION
Besides the evaluation on the WIDER FACE (Yang et al., 2016) dataset, we also conduct cross-dataset evaluation and test the proposed SCRFD models on AFW (Zhu & Ramanan, 2012), PASCAL (Mathias et al., 2014) and FDDB (Jain & Learned-Miller, 2010), under the VGA resolution. As shown in Fig. 8, SCRFD-34GF achieves 99.945% AP on AFW, 99.597% AP on PASCAL, and 99.25% AP on FDDB, surpassing BFBox (Liu & Tang, 2020) and HAMBox (Liu et al., 2020). Even though the face scale distributions on these three datasets are different from WIDER FACE, the proposed SCRFD-34GF still obtains state-of-the-art performance across different datasets, showing impressive robustness of the proposed computation and sample redistribution approaches. In addition, SCRFD-2.5GF also obtains impressive performance on different datasets with much lower computation cost (99.821% AP on AFW, 98.911% AP on PASCAL, and 99.02% AP on FDDB).
Fig. 9 shows qualitative results generated by SCRFD-2.5GF. As can be seen, our face detector works very well in both indoor and outdoor crowded scenes under different conditions (e.g. appearance variations from pose, occlusion and illumination). The impressive performance across a wide range of scales indicates that SCRFD-2.5GF has a very high recall and can detect faces accurately even without large-scale testing.
| 1. What is the focus of the paper regarding neural architecture search?
2. What are the strengths of the proposed approach in terms of performance and dataset?
3. Do you have concerns about the search space optimization methodology?
4. How does the reviewer assess the comparison between the proposed approach and other methods such as evolutionary methods?
5. Is there any question regarding the choice of base network and its potential impact on the results? | Summary Of The Paper
The paper uses a kind of Neural Architecture Search for finding the best computation redistribution on the base network, based on TinaFace, and the best sample redistribution to find the best network for face detection. It performs very well on the WIDER FACE dataset, outperforming most of the state-of-the-art approaches.
Review
The result on the WIDER FACE dataset is very good, outperforming most of the state-of-the-art approaches.
The paper relied on a "meticulously designed methodology of search space optimization". Architectures are randomly sampled to find the likely range of the best architecture. In this sense, will an evolutionary approach do the same trick?
Also, this paper is basically a kind of Neural Architecture Search, and I would like to see a comparison with at least an evolutionary method.
The choice of TinaFace as base network is straightforward. However, is it possible that after NAS, a close runner-up can be better than the champion? |
Neural Architecture Search. Given a fixed search space of possible networks, Neural Architecture Search (NAS) automatically finds a good model within the search space. DetNAS (Chen et al., 2019b) adopts the evolution algorithm for the backbone search to boost object detection on COCO (Lin et al., 2014). By contrast, CR-NAS (Liang et al., 2020) reallocates the computation across different stages within the backbone to improve object detection. NAS-FPN (Ghiasi et al., 2019) uses reinforcement learning to search the proper FPN for general object detection. As there is an obvious distribution gap between COCO (Lin et al., 2014) and WIDER FACE (Yang et al., 2016), the experience in the above methods is not directly applicable for face detection but gives us an inspiration that the backbone, neck and head can be optimized to enhance the performance of face detection. Inspired by RegNet (Radosavovic et al., 2020), we optimize the computation distribution on backbone, neck and head based on the statistics from a group of random sampled models. We successfully reduce the search space and find the stable computation distribution under a particular complex regime, which significantly improves the model’s performance.
3 METHODOLOGY
To efficiently and accurately detect small faces from low-resolution images (e.g. VGA 640 × 480), we propose two methodologies that, when combined, outperform the state-of-the-art. In Sec. 3.1, we explore the computation redistribution across different stages of backbone, as well as different components (i.e. backbone, neck and head) of the whole detector, given a pre-defined computation budget. Then, in Sec. 3.2, we investigate the redistribution of positive training samples across different scales of feature maps by searching optimized scale augmentations.
3.1 COMPUTATION REDISTRIBUTION
As illustrated in Fig. 2, we apply our search method on a network consisting of (1) RetinaNet (Lin et al., 2017a), with ResNet (He et al., 2016) as the backbone, (2) Path Aggregation Feature Pyramid Network (PAFPN) (Liu et al., 2018) as the neck, and (3) stacked 3 × 3 convolutional layers for the head. Despite the generally simple structure, the total number of possible network configurations of the search space becomes unwieldy. Therefore, we attempt to simplify the tremendous search space and arrive at a low-dimensional design space, consisting of simple and effective networks.
3.1.1 SEARCH SPACE DESIGN
Inspired by RegNet (Radosavovic et al., 2020), we explore the structures of face detectors, assuming fixed standard network blocks (i.e., basic residual or bottleneck blocks with a fixed bottleneck ratio of 4). In our case, the structure of a face detector includes:
• the backbone stem, three 3× 3 convolutional layers with w1 output channels (He et al., 2019a). • the backbone body, four stages (i.e. C2, C3, C4 and C5) operating at progressively reduced
resolution, with each stage consisting of a sequence of identical blocks. For each stage i, the degrees of freedom include the number of blocks di (i.e. network depth) and the block width wi (i.e. number of channels). • the neck, a multi-scale feature aggregation module by a top-down path and a bottom-up path with n channels (Liu et al., 2018).
• the head, with hi channels of m blocks to predict face scores and regress face boxes. The search space can be initially designed as follows. As the channel number of the stem is equal to the block width of the first residual block in C2, the degree of freedom of the stem w1 can be merged into w2. In addition, we employ a shared head design for three-scale of feature maps and fix the channel number for all 3 × 3 convolutional layers within the heads. Therefore, we reduce the degrees of freedom to three within our neck and head design: (1) output channel number n for neck, (2) output channel number h for head, and (3) the number of 3 × 3 convolutional layers m. We perform uniform sampling of n ≤ 256, h ≤ 256, and m ≤ 6 (both n and h are divisible by 8). The backbone search space has 8 degrees of freedom as there are 4 stages and each stage i has 2 parameters: the number of blocks di and block width wi. Following RegNet (Radosavovic et al., 2020), we perform uniform sampling of di ≤ 24 and wi ≤ 512 (wi is divisible by 8). As state-ofthe-art backbones have increasing widths (Radosavovic et al., 2020), we also constrain the search space, according to the principle of wi+1 ≥ wi.
3.1.2 ESTIMATION METRIC
Based on above simplifications, our search space becomes more compact and efficient. We repeat the random sampling in our search space until we obtain 320 models in our target complexity regime, and train each model on the WIDER FACE (Yang et al., 2016) training set for 80 epochs. Then, we test the Average Precision (AP) of each model on the validation set. Based on these 320 pairs of model statistics (xi, APi), where xi is the computation ratio of a particular component and APi the corresponding performance, we can compute the empirical bootstrap (Efron & Tibshirani, 1994) to estimate the likely range in which the best models fall. More specifically, we repeatedly sample with replacement 25% of the pairs for 103 times and select the pair with maximum AP in each sampling. Afterwards, we compute the 95% confidence interval for the maximum value and the median gives the most likely best computation ratio.
0 0.2 0.4 0.6 0.8 1 Computation Ratio (Backbone)
60
62
64
66
68
70
m A
P o
n H
ar d
V al
( %
)
(a) Backbone ∼ (67%, 88%)
0 0.2 0.4 0.6 0.8 1 Computation Ratio (Neck)
60
62
64
66
68
70
m A
P o
n H
ar d
V al
( %
)
(b) Neck ∼ (1%, 7%)
0 0.2 0.4 0.6 0.8 1 Computation Ratio (Head)
60
62
64
66
68
70
m A
P o
n H
ar d
V al
( %
)
(c) Head ∼ (10%, 26%)
3.1.3 TWO-STEP SEARCH
To further decrease the complexity of search space, we divide our network structure search into the following two steps: (1) CRFD1: search the computational distribution for the backbone only, while fixing the settings of the neck and head to the default configuration, and (2) CRFD2: search the computational distribution over the whole face detector (i.e. backbone, neck and head), with the computational distribution within the backbone, following the optimized CRFD1. By optimizing in both manners, we achieve the final optimized network design for the computation-constrained face detection. In the example below, we constrain CRFD to 2.5 GFlops (CRFD-2.5GF), in order to illustrate our two-step searching strategy.
Computation redistribution on backbone. For CRFD1-2.5GF, we fix the output channel of the neck at 32 and use two stacked 3 × 3 convolutions with 96 output channels. As the neck and head configurations do not change in the whole search process of CRFD1, we can easily find the best computation distribution of the backbone. As described in Fig. 3, we show the distribution of 320 model APs (on the WIDER FACE hard validation set) versus the computation ratio over each component (i.e. stem, C2, C3, C4 and C5) of backbone. After applying an empirical bootstrap (Efron & Tibshirani, 1994), a clear trend emerges, showing that the backbone computation is reallocated to the shallow stages (i.e. C2 and C3).
Computation redistribution on backbone, neck and head. In this step, we only keep the randomly generated network configurations whose backbone settings follow the computation distribution from CRFD1 as shown in Fig. 3. In this case, there are another three degrees of freedom (i.e. output channel number n for neck, output channel number h for head, and the number m of 3 × 3 convolutional layers in head). We repeat the random sampling in our search space, until we obtain 320 qualifying models in our target complexity regime (i.e. 2.5 GFlops). As evident in Fig. 4, most
of the computation is allocated in the backbone, with the head following and the neck having the lowest computation ratio. Fig. 4(d) also depicts the comparison between the hand-crafted model architecture and the computation redistributed network, under the constraint of 2.5 GFlops.
3.2 SAMPLE REDISTRIBUTION
As face detection features large scale variations (from several pixels to thousand pixels), there exist two widely used scale augmentation strategies, random square crop (Zhang et al., 2017b; Deng et al., 2020b; Zhu et al., 2020) and data anchor sampling (Tang et al., 2018). In the random square crop strategy, square patches are cropped from the original image with a random size between [0.3, 1] of the short edge and then resized into 640 × 640 to generate larger training faces. By contrast, data anchor sampling strategy aims to generate more small scale faces by down-sampling the original image, bringing a large amount of padded area. Even though both random square crop and data anchor sampling can achieve promising results on the WIDER FACE dataset, the scale augmentation parameters are manually designed for all different network structures. Therefore, the training sample distribution on the feature pyramids can be sub-optimal for a particular network structure.
To handle extreme scale variations in face detection, we also design a search-able zoom-in and zoom-out space, specified by the scale si and probability pi. The scale si represents the zooming ratio, sampled from a discrete set S = {smin, smin + 0.1, · · · , smax − 0.1, smax}. For a particular training image in each iteration, square patches are cropped from the original images with a zooming ratio si of the short edge of the original images. If the square patch is larger than the original image, average RGB values will fill the missing pixels. To shrink the scale search space, we employ a binary probability set pi ∈ {0, 1}. Under this setting, the probability-based scale search is simplified into a discrete scale sampling from a fixed set. As the interval of the discrete scale set is only 0.1, adjacent scales will have the probability of 1.0 to approximate a higher probability around a particular scale. In this paper, we employ random search under the estimation metric of AP on WIDER FACE to construct the best scale augmentation set. More specifically, we set smin = 0.1 and smax = 3.0. Then, we randomly select 8 to 20 discrete scale values to construct each scale augmentation set and train CRFD models under 320 different scale augmentation sets. Finally, the scale augmentation set with the highest detection performance is selected for optimized scale augmentation.
4 EXPERIMENTS
4.1 IMPLEMENTATION DETAILS
Training. For the scale augmentation, square patches are cropped from the original images with a random size from a pre-defined scale set, and then these patches are resized to 640 × 640 for training. Besides scale augmentation, the training data are also augmented by color distortion and random horizontal flipping, with a probability of 0.5. For the anchor setting, we tile anchors of {16, 32}, {64, 128}, and {256, 512} on the feature maps of stride 8, 16, and 32, respectively. The anchor ratio is set as 1.0. In this paper, we employ Adaptive Training Sample Selection (ATSS)
(Zhang et al., 2020b) for positive anchor matching. In the detection head, weight sharing and Group Normalization (Wu & He, 2018) are used. The losses of classification and regression branches are Generalized Focal Loss (GFL) (Li et al., 2020) and DIoU loss (Zheng et al., 2020), respectively.
Our experiments are implemented in PyTorch, based on the open-source MMDetection (Chen et al., 2019a). We adopt the SGD optimizer (momentum 0.9, weight decay 5e-4) with a batch size of 8×8 and train on eight Tesla V100. The learning rate is linearly warmed up to 0.015 within the first 3 epochs. During network search, the learning rate is multiplied by 0.1 at the 55-th, and 68-th epochs. The learning process terminates on the 80-th epoch. For training of both baselines and searched configurations, the learning rate decays by a factor of 10 at the 440-th and 544-th epochs, and the learning process terminates at the 640-th epoch. All the models are trained from scratch without any pre-training.
Testing. For fair comparisons with other methods, we employ three testing strategies, including single-scale VGA resolution (640 × 480), single-scale original resolution, and multi-scale testing. The results of DSFD (Li et al., 2019), RetinaFace (Deng et al., 2020b), TinaFace (Zhu et al., 2020), Faceboxes (Zhang et al., 2017a), libfacedetection (Feng et al., 2021) and LFFD (He et al., 2019b) are reported by testing the released models, while the HAMBox (Liu et al., 2020) and BFBox (Liu & Tang, 2020) models are shared from the author.
4.2 ABLATION STUDY
In Tab. 1, we present the performance of models on the WIDER FACE dataset by gradually including the proposed computation and sample redistribution methods. Our manually-designed baseline model, ResNet-2.5GF, gets APs of 91.87%, 89.49%, and 67.32% under three validation scenarios.
Computation redistribution. After separately employing the proposed computation redistribution on the backbone and the whole detector, the AP on the hard set improves to 69.78% and 70.98%. This indicates that (1) the network structure directly inherited from the classification task is suboptimal for the face detection task, and (2) joint computation reallocation on the backbone, neck and head outperforms computation optimization applied only on the backbone. Furthermore, the proposed two-step computation redistribution strategy achieves AP of 71.37%, surpassing one-step computation reallocation on the whole detector by 0.39%. As we shrink the whole search space by the proposed two-step strategy and our random model sampling number is fixed at 320, the twostep method is possible to find better network configurations from the large search space. In Tab. 1, we also compare our method with the single path one-shot NAS method (BFBox (Liu & Tang, 2020)) and the evolutionary search method (Appendix A.2), under the constraint of 2.5 GFlops. BFBox aims to design a face-appropriate search space by combing some excellent block designs, such as bottleneck block, densenet block and shufflenet block. However, such a combination generates a complex and redundant search space, which inevitably involves a vast body of low-performance candidate architectures. The evolutionary approach iteratively adopts mutations and crossover to gradually generate better architecture candidates from the randomly initialized search space, which also contains a large number of under-performing architectures. By contrast, CRFD-2.5GF utilizes an empirical bootstrap to estimate the optimized computation distribution of the best-performing architecture candidates, which directly eliminates the low-quality architectures from the initialized search
space. Therefore, CRFD-2.5GF can obviously outperform the BFBox and evolutionary method by 1.96% and 1.75% on the hard track.
Sample redistribution. For scale augmentation, we first manually extend the default scale set {0.3, 0.45, 0.6, 0.8, 1.0} by adding larger scales {1.2, 1.4, 1.6, 1.8, 2.0}. By adding this hand-crafted sample redistribution, the hard set APs significantly increase by 7.15% for the baseline and 6.5% for the proposed CRFD, indicating the benefit from allocating more training samples on the feature map of stride 8. By employing the optimized scale augmentation from searching, the hard set AP further increases by 0.48% for the proposed CRFD. For SCRFD-2.5GF, the best scale augmentation set searched is {0.5, 0.7, 0.8, 1.0, 1.1, 1.2, 1.4, 1.5, 1.8, 2.0, 2.3, 2.6}. As we can see from these discrete scales, faces around the original scale are preferred for training, along with an appropriate probability and ratio of zooming-out.
4.3 COMPUTATION REDISTRIBUTION ACROSS DIFFERENT COMPUTE REGIMES
Besides the complexity constraint of 2.5 GFlops, we also utilize the same two-step computation redistribution method to explore the network structure optimization for higher compute regimes (e.g. 10 GFlops and 34 GFlops) and lower compute regimes (e.g. 0.5 GFlops and 1.0 GFlops). In Fig. 5, we show the computation redistribution and the optimized network structures under different computation constraints.
Our final architectures have almost the same flops as the baseline networks. From these redistribution results, we can draw the following conclusions: (1) more computation is allocated in the backbone and the computation on the neck and head is compressed; (2) more capacity is reallocated in shallow stages due to the specific scale distribution on WIDER FACE; (3) for the high compute regime (e.g. 34 GFlops), the explored structure utilizes the bottleneck residual block and we observe significant depth scaling, instead of width scaling in shallow stages. Scaling the width is subject to over-fitting due to the larger increase in parameters (Bello et al., 2021). By contrast, scaling the depth, especially in the earlier layers, introduces fewer parameters compared to scaling the width; (4) for the mobile regime (0.5 GFlops), allocating the limited capacity in the deep stage (e.g. C5) for the discriminative features captured in the deep stage, can benefit the shallow small face detection by the top-down neck pathway.
4.4 ACCURACY AND EFFICIENCY COMPARISONS ON WIDER FACE
As shown in Tab. 2 and Tab. 3, we compared the proposed SCRFD with other state-of-the-art face detection algorithms (e.g. DSFD (Li et al., 2019), RetinaFace (Deng et al., 2020b), BFBox (Liu & Tang, 2020), HAMBox (Liu et al., 2020) and TinaFace (Zhu et al., 2020)) as well as light-weight face methods (e.g. Faceboxes (Zhang et al., 2017a), libfacedetection (Feng et al., 2021) and LFFD (He et al., 2019b)). Overall, all of the proposed SCRFD models provide considerable improvements compared to the hand-crafted baseline models (e.g. ResNet-2.5GF and MobileNet-0.5GF), by optimizing the network structure as well as the scale augmentation, across a wide range of compute regimes.
When we fix the testing scale at 640 as in Tab. 2, the proposed SCRFD-34GF outperforms all these state-of-the-art methods on the three subsets, especially for the hard track, which contains a large number of tiny faces. More specifically, SCRFD-34GF surpasses TinaFace by 4.78% while being more than 3× faster on GPUs. In addition, the computation cost of SCRFD-34GF is only around 20% of TinaFace. As SCRFD-34GF scales the depth in the earlier layers, it also introduces fewer parameters, resulting in a much smaller model size (9.80M ). Compared to the hand-crafted baseline (ResNet-34GF), the proposed computation redistribution and sample redistribution improve the AP by 1.27% and 0.92%, indicating the superiority of SCRFD over manual designs. Compared to the single path one-shot NAS method, SCRFD-34GF outperforms BFBox by 15.81%, while using a more compact model size. As the search space of BFBox is complex, there exists a large number of low-performance architectures. In addition, BFBox only searches the backbone and neck without considering the optimization on the head. For multi-scale testing, SCRFD-34GF slightly outperforms TinaFace but consumes much less computation. For the low-compute regimes in Tab. 3, SCRFD-0.5GF significantly outperforms RetinaFace-MobileNet0.25 by 21.19% on the hard AP, while consuming only 63.34% computation and 45.57% inference time under the VGA resolution. When the evaluation is conducted on the original image, SCRFD-0.5GF surpasses LFFD by 3.6% on the hard AP, while consuming only 5.5% flops.
5 CONCLUSIONS
In this work, we present a sample and computation redistribution paradigm for efficient face detection. Our results show significantly improved accuracy and efficiency trade-off by the proposed SCRFD across a wide range of compute regimes, when compared to the current state-of-the-art.
Acknowledgements. We would like to thank Hui Ni from Tencent for preparing the mobile demo of SCRFD https://github.com/nihui/ncnn-android-scrfd. Stefanos Zafeiriou acknowledges support from the EPSRC Fellowship DEFORM (EP/S010203/1), FACER2VM (EP/N007743/1) and a Google Faculty Fellowship.
A APPENDIX
A.1 TINAFACE REVISITED
Based on RetinaNet (Lin et al., 2017a), TinaFace (Zhu et al., 2020) employs ResNet-50 (He et al., 2016) as backbone, and Feature Pyramid Network (FPN) (Lin et al., 2017a) as neck to construct the feature extractor. For the head design, TinaFace first uses a feature enhancement module on each feature pyramid to learn surrounding context through different receptive fields in the inception block (Szegedy et al., 2015). Then, four consecutive 3 × 3 convolutional layers are appended on each feature pyramid. Focal loss (Lin et al., 2017b) is used for the classification branch, DIoU loss (Zheng et al., 2020) for the box regression branch and cross-entropy loss for the IoU prediction branch.
To detect tiny faces, TinaFace tiles anchors of three different scales, over each level of the FPN (i.e. {24/3, 25/3, 26/3} × {4, 8, 16, 32, 64, 128}, from level P2 to P7). The aspect ratio is set as 1.3. During training, square patches are cropped from the original image and resized to 640× 640, using a scaling factor randomly sampled from [0.3, 0.45, 0.6, 0.8, 1.0], multiplied by the length of the original image’s short edge. During testing, TinaFace employs single scale testing, when the short and long edges of the image do not surpass [1100, 1650]. Otherwise, it employs with short edge scaling at [500, 800, 1100, 1400, 1700], shift with directions [(0, 0), (0, 1), (1, 0), (1, 1)] and horizontal flip.
As shown in Fig. 6(a) and Tab. 4, we compare the performance of TinaFace under different testing scales. For the multi-scale testing, TinaFace achieves an impressive AP of 93.4%, which is the current best performance on the WIDER FACE leader-board. For large single-scale testing (1650), the AP slightly drops at 93.0% but the computation significantly decreases to 1021.82 GFlops. On the original scale (1024), the performance of TinaFace is still very high, obtaining an AP of 91.4% with 508.47 GFlops. Moreover, when the testing scale decreases to VGA level (640), the AP significantly reduces to 81.4%, with the computation further decreasing at 172.95 GFlops.
In Fig. 6(b), we illustrate the computation distribution of TinaFace over the backbone, neck and head components with a testing scale of 640. From the view of the different scales of the feature pyramid, the majority of the computational cost (about 68%) comes from stride 4, as the resolution of the feature map is quite large (120 × 160). From the view of the different components of the face detector, most of the computational cost (about 79%) comes from the head, since the backbone structure is directly borrowed from the ImageNet classification task (Deng et al., 2009), without any modification.
Even though TinaFace achieves state-of-the-art performance on tiny face detection, the heavy computational cost renders it unsuitable for real-time applications.
A.2 DETAILS OF EVOLUTIONARY BASELINE
To compare the proposed SCRFD with the other network search methods in Tab. 1, we design the evolutionary baseline (Real et al., 2019) as follows:
1. A population of networks P is randomly initialized. We set |P| = 50.
2. Each network architecture from P is trained on the WIDER FACE training data, and the APs on the WIDER FACE validation set are evaluated.
3. Architectures with the top performance are selected from P as parents P. To generate child networks C, we employ the mutation and crossover policies. Here, we set |P| = 10 and |C| = 50.
4. Each network architecture from C is trained on the WIDER FACE training data, and the APs on the WIDER FACE validation set are calculated.
5. The worst 50 individuals from the merged populations P ∪ C are dropped, which gives the new evolutionary population P.
6. We repeat steps 3, 4 and 5 for 20 times, resulting in 1000 network architectures as well as their validation APs. The architecture with the highest AP is selected as the final result (a minimal sketch of this loop is given below).
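The following Python sketch makes the above loop concrete. It is illustrative only: sample_arch, mutate, crossover and train_and_eval_ap are placeholders for implementation details that are not specified here.

```python
import random

def evolutionary_search(sample_arch, mutate, crossover, train_and_eval_ap,
                        pop_size=50, num_parents=10, num_children=50, generations=20):
    """Sketch of the evolutionary baseline; the helper functions are placeholders."""
    # Steps 1-2: random population, trained and scored by WIDER FACE validation AP.
    population = [(a, train_and_eval_ap(a)) for a in (sample_arch() for _ in range(pop_size))]
    evaluated = list(population)
    for _ in range(generations):  # step 6: repeat 20 times
        # Step 3: top-performing architectures become parents; children via mutation/crossover.
        parents = [a for a, _ in sorted(population, key=lambda x: x[1], reverse=True)[:num_parents]]
        children = [mutate(crossover(*random.sample(parents, 2))) for _ in range(num_children)]
        # Step 4: train and score the children.
        children = [(c, train_and_eval_ap(c)) for c in children]
        evaluated.extend(children)
        # Step 5: drop the worst individuals from the merged population.
        population = sorted(population + children, key=lambda x: x[1], reverse=True)[:pop_size]
    # The architecture with the highest validation AP is the final result.
    return max(evaluated, key=lambda x: x[1])[0]
```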
A.3 ALGORITHM OF COMPUTATION REDISTRIBUTION
In Algorithm 1, we show the details of the proposed two-step computation redistribution method.
Algorithm 1: Search algorithm for computation redistribution
Input: Constraint on the computation cost Y (in GFlops); number of random network architectures N; training set Dtrain and validation set Dval; evaluation metric AP.
Output: Best architecture A∗.
  /* Step 1: search over the backbone configuration {di, wi}. */
  Initialize the architecture set A = Ø
  while length(A) < N do
    net = RandomSampling({di, wi})   /* di and wi denote block number and channel number, i = 2, 3, 4, 5. */
    if 0.98 ∗ Y ≤ net.Flops ≤ 1.02 ∗ Y then A.Append(net)
  end
  ParallelTrain(A, Dtrain); APs = Evaluate(A, Dval); CR1 = Bootstrap(A, APs)
  /* Step 2: search over the whole detector under the backbone ratio CR1. */
  Initialize the architecture set A = Ø
  while length(A) < N do
    net = RandomSampling({CR1, n, m, h})   /* n, m and h denote channel number in the neck, block number and channel number in the head. */
    if 0.98 ∗ Y ≤ net.Flops ≤ 1.02 ∗ Y then A.Append(net)
  end
  ParallelTrain(A, Dtrain); APs = Evaluate(A, Dval); CR2 = Bootstrap(A, APs)
Output: Best architecture A∗ = choose_top1(CR2, A).
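The inner sampling loop of Algorithm 1 can be sketched as follows. This is not the authors' implementation; random_config and compute_flops are placeholders for the configuration sampler and the FLOPs counter.

```python
def sample_candidates(random_config, compute_flops, target_gflops, n=320):
    """Collect n random architectures whose cost lies within 2% of the target
    (either step of Algorithm 1, depending on what random_config draws)."""
    candidates = []
    while len(candidates) < n:
        cfg = random_config()            # e.g. draws {d_i, w_i} or {CR1, n, m, h}
        flops = compute_flops(cfg)
        if 0.98 * target_gflops <= flops <= 1.02 * target_gflops:
            candidates.append(cfg)
    return candidates
```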
A.4 DETAILED NETWORK CONFIGURATIONS
In Tab. 5, we give the detailed network configurations for baselines and the proposed CRFD across different compute regimes.
A.5 STATISTICS AFTER SAMPLE REDISTRIBUTION
As illustrated in Fig. 7(a), there are more faces below the scale of 32 after the proposed automatic scale augmentation strategy is used. Moreover, even though there will be more extremely tiny faces (e.g. < 4 × 4) under the proposed scale augmentation, these ground-truth faces will be neglected during training due to unsuccessful anchor matching. As shown in Fig. 7(b), positive anchors within one epoch significantly increase at the scale of 16 and 32. With more training samples redistributed to the small scale, the branch to detect tiny faces can be trained more adequately.
A.6 DATASETS
WIDER FACE The WIDER FACE dataset (Yang et al., 2016) consists of 32,203 images and 393,703 face bounding boxes with a high degree of variability in scale, pose, expression, occlusion and illumination. The WIDER FACE dataset is split into training (40%), validation (10%) and testing (50%) subsets by randomly sampling from 61 scene categories. Based on the detection rate of EdgeBox (Zitnick & Dollár, 2014), three levels of difficulty (i.e. Easy, Medium and Hard) are defined by incrementally incorporating hard samples.
AFW The AFW dataset (Zhu & Ramanan, 2012) contains 205 high-resolution images with 473 faces (Mathias et al., 2014) collected from Flickr. Images in this dataset contain cluttered backgrounds with large variations in viewpoint.
PASCAL The PASCAL face dataset (Mathias et al., 2014) is collected from the PASCAL 2012 person layout subset and includes 1,335 labeled faces in 851 images with large facial appearance and pose variations (e.g. large in-plane rotation).
FDDB The FDDB dataset (Jain & Learned-Miller, 2010) is a collection of labeled faces from the Faces in the Wild dataset. It contains a total of 5,171 face annotations on 2,845 images. The dataset incorporates a range of challenges, including difficult pose angles, out-of-focus faces and low resolution.
A.7 CROSS DATASET EVALUATION AND VISUALIZATION
Besides the evaluation on the WIDER FACE (Yang et al., 2016) data set, we also conduct cross dataset evaluation and test the proposed SCRFD models on AFW (Zhu & Ramanan, 2012), PASCAL (Mathias et al., 2014) and FDDB (Jain & Learned-Miller, 2010), under the VGA resolution. As shown in Fig 8, SCRFD-34GF achieves 99.945% AP on AFW, 99.597% AP on PASCAL, and 99.25% on FDDB, surpassing BFBox (Liu & Tang, 2020) and HAMBox (Liu et al., 2020). Even though the face scale distributions on these three datasets are different from WIDER FACE, the proposed SCRFD-34GF still obtains state-of-the-art performance across different datasets, showing impressive robustness of the proposed computation and sample redistribution approaches. In addition, SCRFD-2.5GF also obtains impressive performance on different datasets with much lower computation cost (99.821% AP on AFW, 98.911% AP on PASCAL, and 99.02% AP on FDDB).
Fig. 9 shows qualitative results generated by SCRFD-2.5GF. As can be seen, our face detector works very well in both indoor and outdoor crowded scenes under different conditions (e.g. appearance variations from pose, occlusion and illumination). The impressive performance across a wide range of scales indicate that SCRFD-2.5GF has a very high recall and can detect faces accurately even without large scale testing. | 1. What are the main contributions and strengths of the paper regarding face detection?
2. What are the weaknesses or limitations of the paper, particularly regarding the presentation and comparison with other works?
3. Do you have any questions about the proposed method, such as its applicability to other applications or the effectiveness of the scale search strategy?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Review | Summary Of The Paper
This paper presents a face detection method that aims to deal with two challenges in unconstrained face detection: high computation cost and detecting small faces. Specifically, the authors adopted a network structure search method along with a two-step searching strategy for computation redistribution between the backbone, neck and head of the network. Sample redistribution between the scales is achieved using a searchable zoom-in and zoom-out space for face scale augmentation. Experiments were conducted on a benchmark dataset: WIDER FACE.
Review
Strength:
• A two-stage searching strategy is helpful to find the optimal structure in the large search space.
• The proposed computation redistribution method is general and can be adopted by other applications.
• The scale search strategy is helpful to handle scale variations.
• The proposed method yields detection performance comparable to the SOTA face detectors with lower computation cost.
Weakness:
• It lacks a figure to illustrate the overview of the whole framework.
• An algorithm would be helpful to describe the proposed computation redistribution method.
• It needs more details on sample redistribution. It is not clear how to “employ random search under the estimation metric of AP … across different computation regimes”. In addition to the purpose of shrinking the scale search space, does the binary probability have other advantages over the other probability models?
• Cross-dataset evaluation was conducted, and the overall APs were reported for the AFW, Pascal, and FDDB datasets, respectively. However, it is not sufficient to claim SCRFD-2.5GF obtains SOTA performance without a comparison with the SOTA. For example, BFBox has a better performance (AP=99.43) on the Pascal dataset; HAMBox has a better performance on all these three datasets: AP=99.9 on AFW, AP=99.5 on Pascal, and AP=0.991 on FDDB. The authors discussed both BFBox and HAMBox in related work, but only compared with HAMBox on WIDER FACE. In addition, Figure 5 is shown without an explanation in the text.
ICLR | Title
Mixture Representation Learning with Coupled Autoencoders
Abstract
Latent representations help unravel complex phenomena. While continuous latent variables can be efficiently inferred, fitting mixed discrete-continuous models remains challenging despite recent progress, especially when the discrete factor dimensionality is large. A pressing application for such mixture representations is the analysis of single-cell omic datasets to understand neuronal diversity and its molecular underpinnings. Here, we propose an unsupervised variational framework using multiple interacting networks called cpl-mixVAE that significantly outperforms state-of-the-art in high-dimensional discrete settings. cpl-mixVAE introduces a consensus constraint on discrete factors of variability across the networks, which regularizes the mixture representations at the time of training. We justify the use of this framework with theoretical results and validate it with experiments on benchmark datasets. We demonstrate that our approach discovers interpretable discrete and continuous variables describing neuronal identity in two single-cell RNA sequencing datasets, each profiling over a hundred cortical neuron types.
1 INTRODUCTION
Fitting mixed discrete-continuous models arises in many contexts. While continuous latent variables can be efficiently inferred with variational and adversarial formulations, inference of continuous and discrete factors in generalized mixture models remains challenging despite recent progress (Jang et al., 2016; Chen et al., 2016; Dupont, 2018; Jeong & Song, 2019). A pressing domain of application for such models is quantifying factors of biological variability in single-cell omic studies. The high-throughput and high-dimensional datasets produced by these studies document a previously unappreciated diversity of gene expression. In neuroscience, this poses the identification of cell types and cell states as a key research area to understand how neuronal circuits function (Bargmann et al., 2014), where the notions of cell types and states can be considered as biological interpretations of discrete and continuous variability. While marker gene based studies suggest the existence of more than 100 neuronal cell types in just a single brain region, there is no agreement on such categorization and interpretation of the remaining continuous variability (Seung & Sümbül, 2014; Zeng & Sanes, 2017; Tasic et al., 2018). Moreover, existing unsupervised, joint continuous-discrete learning methods are tailored for problems with relatively few and equally-abundant discrete components, and accurate inference remains out of reach for these applications.
Deep generative models have previously been applied to single-cell datasets, where the focus is on the cluster identity and it is typically inferred by post-hoc analysis of a continuous factor (Lopez et al., 2018). Deep Gaussian mixture models (Dilokthanakul et al., 2016; Johnson et al., 2016; Jiang et al., 2017) also focus on the identification of categories and do not take interpretability of the remaining continuous variability into account. To address the need for joint inference of interpretable discrete and continuous factors, various adversarial and variational methods have been proposed. While existing adversarial generative models, e.g. InfoGAN (Chen et al., 2016), are susceptible to stability issues (Higgins et al., 2017; Kim & Mnih, 2018), variational autoencoders (VAEs) (Kingma & Welling, 2013) emerge as efficient and more stable alternatives (Tschannen et al., 2018; Zhang et al., 2018; Dupont, 2018; Jeong & Song, 2019). VAE-based approaches approximate the mixture model by assuming a family of distributions qφ and select the member closest to the true model p. Popular choices in VAE implementations include (1) using KL divergence to compute discrepancy between qφ and p, and (2) using a multivariate Gaussian mixture distribution with uniformly distributed discrete and isotropic Gaussian distributed continuous priors. However, such choices may lead to
underestimating the posterior variance (Minka et al., 2005; Blei et al., 2017). Solutions to resolve this issue are mainly applicable in low-dimensional spaces or for continuous factors alone (Deasy et al., 2020; Kingma et al., 2016; Ranganath et al., 2016; Quiroz et al., 2018).
Inspired by collective decision making, we introduce a variational framework using multiple autoencoding arms to jointly infer interpretable finite discrete (categorical) and continuous factors in the presence of high-dimensional discrete space. Coupled-autoencoders have been previously studied in the context of multi-modal recordings, where each arm learns only a continuous latent representation for one of the data modalities (Feng et al., 2014; Gala et al., 2019; Lee & Pavlovic, 2020). Here, we develop a novel pairwise-coupled autoencoder framework for a single data modality. The proposed framework imposes a consensus constraint on the categorical posterior at the time of training and allows dependencies between continuous and categorical factors. We define the consensus constraint based on the Aitchison geometry in the probability simplex, which avoids the mode collapse problem. We show that the coupled multi-arm architecture enhances accuracy, robustness, and interpretability of the inferred factors without requiring any priors on the relative abundances of categories. Finally, on datasets profiling different cortical regions in the mammalian brain, we show that our method can be used to discover neuronal types as discrete categories and type-specific genes regulating the continuous within-type variability, such as metabolic state or disease state.
Related work. There is an extensive body of research on clustering in mixture models (Dilokthanakul et al., 2016; Jiang et al., 2017; Tian et al., 2017; Guo et al., 2016; Locatello et al., 2018b). The idea of improving the clustering performance through seeking a consensus and co-training and ensembling across multiple observations has been explored in both unsupervised (Monti et al., 2003; Kumar & Daumé, 2011) and semi-supervised contexts (Blum & Mitchell, 1998). However, these methods do not consider the underlying continuous variabilities across observations. Moreover, unlike ensemble methods, which pool the results of different trained workers, autoencoding arms seek a consensus at the time of learning in our framework.
The proposed framework does not need any supervision since the individual arms provide a form of prior or weak supervision for each other. In this regard, our paper is related to a body of work that attempts to improve representation learning by using semi-supervised or group-based settings (Bouchacourt et al., 2017; Hosoya, 2019; Nemeth, 2020). Bouchacourt et al. (2017) demonstrated a multi-level variational autoencoder (MLVAE) as a semi-supervised VAE by revealing that observations within groups share the same type. Hosoya (2019) and Nemeth (2020) attempted to improve MLVAE by imposing a weaker condition to the grouped data. In recent studies (Shu et al., 2019; Locatello et al., 2020), a weakly supervised variational setting has been proposed for disentangled representation learning by providing pairs of observations that share at least one underlying factor. These studies rely on learning latent variables in continuous spaces, and have been applied only to image datasets with low-dimensional latent representations.
Recent advances in structured variational methods, such as imposing a prior (Ranganath et al., 2016) or spatio-temporal dependencies (Quiroz et al., 2018) on the latent distribution parameters, allow for scaling to larger dimensions. However, these solutions are not directly applicable to the discrete space, which will be addressed in our A-arm VAE framework.
2 SINGLE MIXTURE VAE FRAMEWORK
For an observation x ∈ R^D, a VAE learns a generative model p_θ(x|z) and a variational distribution q_φ(z|x), where z ∈ R^M is a latent variable with a parameterized distribution p(z) and M ≪ D (Kingma & Welling, 2013). Disentangling different sources of variability into different dimensions of z enables an interpretable selection of latent factors (Higgins et al., 2017; Locatello et al., 2018a). However, the interplay between continuous and discrete variabilities present in many real-world datasets is often overlooked by existing methods. This problem can be addressed within the VAE framework in an unsupervised fashion by introducing a categorical latent variable c denoting the class label, alongside the continuous latent variable s. We refer to the continuous variable s as the state or style variable interchangeably. Assuming s and c are independent random variables, the evidence lower bound (ELBO) (Blei et al., 2017) for a single mixture VAE with the distributions parameterized by θ and φ is given by,
L(φ, θ) = E_{q_φ(s,c|x)}[log p_θ(x|s, c)] − D_{KL}(q_φ(s|x) ‖ p(s)) − D_{KL}(q_φ(c|x) ‖ p(c)).   (1)
Maximizing ELBO in Eq. 1 imposes characteristics on q(s|x) and q(c|x) that can result in underestimation of posterior probabilities such as the mode collapse problem, where the network ignores a
subset of latent variables (Minka et al., 2005; Blei et al., 2017). Recently, VAE-based solutions were proposed by imposing a uniform structure on p(c): akin to β-VAE (Higgins et al., 2017; Burgess et al., 2018), JointVAE (Dupont, 2018) modifies the ELBO by assigning a pair of controlled information capacities for each variational factor, i.e. Cs ∈ R|s| and Cc ∈ R|c|. The main drawback of JointVAE is that its performance is tied to heuristic tuning of |s|× |c| capacities over training iterations so that it is vulnerable to mode collapse in high-dimensional settings. Another recent VAE-based mixture model solution, CascadeVAE (Jeong & Song, 2019), maximizes the ELBO through a semi-gradient-based algorithm by iterating over two separate optimizations for the continuous and categorical variables. While the separation of the optimization steps avoids the mode collapse problem, this separation is valid only when the categorical variable is uniformly distributed. Therefore, its performance strongly depends on the clusters having similar abundances in the dataset. Thus, earlier solutions fall short of learning interpretable mixture representations with high-dimensional discrete variables in real-world applications.
In addition to the issues discussed above, the performance and interpretability of those approaches are further limited by the common assumption that the continuous variable representing the style of the data is independent of the categorical variable. In practice, style often depends on the class label. For instance, even for the well-studied MNIST dataset, the histograms of common digit styles, e.g. “width”, markedly vary for different digits (Supplementary Section I). Moreover, further analysis of the identified continuous factor in the earlier approaches reveals that the independence assumption among q(s|x) and q(c|x) can be significantly violated (see Supplementary Sections H and I).
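To fix ideas, the three terms of the single mixture VAE objective in Eq. 1 can be written in a few lines. This is a minimal PyTorch-style sketch, not the implementation used in the paper: it assumes a unit-variance Gaussian decoder, a standard normal prior on s and a uniform prior on c, and the tensor names are illustrative.

```python
import math
import torch
import torch.nn.functional as F

def single_mixture_elbo(x, x_recon, s_mu, s_logvar, c_logits):
    """Monte-Carlo estimate of the three terms of Eq. 1 (to be maximized)."""
    batch = x.size(0)
    # E_q[log p(x|s,c)] for a unit-variance Gaussian decoder reduces to a negative squared error.
    recon = -F.mse_loss(x_recon, x, reduction="sum") / batch
    # Closed-form KL(q(s|x) || N(0, I)) for a diagonal Gaussian posterior.
    kl_s = -0.5 * torch.sum(1 + s_logvar - s_mu.pow(2) - s_logvar.exp()) / batch
    # KL(q(c|x) || Uniform(K)) = -H(q(c|x)) + log K.
    log_q_c = torch.log_softmax(c_logits, dim=-1)
    kl_c = (log_q_c.exp() * log_q_c).sum(dim=-1).mean() + math.log(c_logits.size(-1))
    return recon - kl_s - kl_c
```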
3 COUPLED MIXTURE VAE FRAMEWORK
The key intuition behind multi-arm networks is cooperation to improve posterior estimation. While the context is different, the popular phrase “wisdom of the crowd” (Surowiecki, 2005) can nevertheless be revealing: when a crowd (multiple arms) needs to make a decision, multiple estimates can increase the expected probability of a correct choice.
3.1 A-ARM VAE FRAMEWORK
We define the A-arm VAE as an A-tuple of independent and architecturally identical autoencoding arms, where the a-th arm parameterizes a mixture model distribution (Fig. 1a). In this framework, individual arms receive a collection of non-identical copies, {xa,xb, . . .} of the given sample, x, belonging to the same category. While each arm has its own mixture representation with potentially non-identical parameters, all arms cooperate to learn q(ca|xa), where ca = cb = · · · , via a cost function at the time of training. Accordingly, a crowd of VAEs with A arms can be formulated as a collection of constrained variational objectives as follows.
max  L_{s_1|c_1}(φ_1, θ_1) + · · · + L_{s_A|c_A}(φ_A, θ_A)   s.t.  c_1 = · · · = c_A   (2)
where L_{s_a|c_a}(φ_a, θ_a) is the variational loss for arm a,
L_{s_a|c_a}(φ_a, θ_a) = E_{q(s_a,c_a|x_a)}[log p(x_a|s_a, c_a)] − E_{q(c_a|x_a)}[D_{KL}(q(s_a|c_a, x_a) ‖ p(s_a|c_a))] − E_{q(s_a|c_a,x_a)}[D_{KL}(q(c_a|x_a) ‖ p(c_a))].   (3)
In Eq. 3, the variational loss for each arm is defined according to the graphical model in Fig. 1b, which is built upon the traditional ELBO in Eq. 1 by conditioning the continuous state on the categorical variable (derivation in Supplementary Section B). Therefore, learning an interpretable decomposition of the data relies on accurate assignment (inference) of the categorical latent factor. Propositions 1 and 2 below show that the shared categorical assignment inferred from q(c|x1, · · · ,xA), under the
c = c1 = · · · = cA constraint of the multi-arm framework improves the accuracy of the categorical assignment on expectation. Proposition 1. Consider the problem of mixture representation learning in a multi-arm VAE framework. For independent samples from category m, i.e. xi ∼ p(x|m),
E_{q(x|m)}[log q(c = m|{x_i}_{1:A})] > E_{q(x|m)}[log q(c = m|{x_i}_{1:B})],  s.t.  c = c_1 = · · · = c_A on the left-hand side and c = c_1 = · · · = c_B on the right-hand side,   (4)
if q(m|xi) < 1 and A > B ≥ 1 denote the number of arms. (Proof in Supplementary Section A)
Thus, having more arms increases the expected log posterior for the true categorical latent variable unless it is already at its maximum. Proposition 2. In the A-arm VAE framework, there exists an A that guarantees a true categorical assignment on expectation. That is,
m = arg max_c  E_{q(x|m)}[log q(c|{x_i}_{1:A})],   s.t.  c = c_1 = · · · = c_A.   (5)
(Proof in Supplementary Section A) Accordingly, the consensus constraint is sufficient to enhance inference for mixture representations in the A-arm VAE framework. Our theoretical results show that the required number of arms satisfying Eq. 5 is a function of the categorical distribution and the likelihood (Eq. 15, Supplementary Section A). In the particular case of uniformly distributed categories, one pair of coupled arms is enough to satisfy Eq. 5 (see Corollary 1, Supplementary Section A).
We emphasize that the proposed framework does not require any weak supervision as in (Bouchacourt et al., 2017). Instead, it relies on representations that are invariant under non-identical copies of observations. Moreover, unlike (Bouchacourt et al., 2017; Shu et al., 2019; Locatello et al., 2020), the multi-arm framework is not restricted to the continuous space.
Arms observe non-identical copies of samples. In the A-arm VAE framework, arms receive non-identical observations that share the discrete variational factor. To achieve this in a fully unsupervised setting, we use type-preserving data augmentation that generates independent and identically distributed copies of data while preserving its categorical identity. For image datasets, conventional transformations such as rotation, scaling, or translation can serve as type-preserving augmentations. However, for non-image datasets, e.g. single-cell data, we seek a generative model that learns transformations representing within-class variability in an unsupervised manner. To this end, inspired by DAGAN (Antoniou et al., 2017) and VAE-GAN (Larsen et al., 2016), we develop a generative model to provide collections of observations for our multi-arm framework (Supplementary Section F). The proposed generative model learns to generate augmented samples in the vicinity of given samples in the latent space, without knowing their types (Eq. 70). In Supplementary Section A, Remark 2, we further discuss an under-exploration scenario in data augmentation, in which the augmented samples are not independently distributed and are concentrated around the given sample.
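For image data, the type-preserving copies handed to the arms can be produced with off-the-shelf stochastic transforms, as in the sketch below; the transform ranges are illustrative, and for scRNA-seq data the copies instead come from the learned augmenter of Supplementary Section F rather than hand-crafted transforms.

```python
import torchvision.transforms as T

# Two independent draws from one stochastic, label-preserving pipeline give the
# arms non-identical copies x_a, x_b that share the categorical identity.
augment = T.Compose([
    T.RandomAffine(degrees=15, translate=(0.08, 0.08), scale=(0.9, 1.1)),
    T.ToTensor(),
])

def make_arm_inputs(pil_image, n_arms=2):
    return [augment(pil_image) for _ in range(n_arms)]
```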
3.2 CPL-MIXVAE: PAIRWISE COUPLING IN A-ARM VAE
In the A-arm VAE framework, the mixture representation is obtained through the optimization in Eq. 2. Not only is it challenging to solve the maximization in Eq. 2 due to the equality constraint, but the objective remains a function of p(c) which is unknown, and typically non-uniform. To overcome this, we use an equivalent formulation for Eq. 2 by applying the pairwise coupling paradigm as follows (details of derivation in Supplementary Section C):
max  Σ_{a=1}^{A} (A − 1) ( E_{q(s_a,c_a|x_a)}[log p(x_a|s_a, c_a)] − E_{q(c_a|x_a)}[D_{KL}(q(s_a|c_a, x_a) ‖ p(s_a|c_a))] ) − Σ_{a<b} E_{q(s_a|c_a,x_a)} E_{q(s_b|c_b,x_b)}[D_{KL}(q(c_a|x_a) q(c_b|x_b) ‖ p(c_a, c_b))]
s.t.  c_a = c_b  ∀ a, b ∈ [1, A], a < b   (6)
We relax the optimization in Eq. 6 into an unconstrained problem by marginalizing the joint distribution over a mismatch measure between categorical variables (see Supplementary Section D):
max  Σ_{a=1}^{A} (A − 1) ( E_{q(s_a,c_a|x_a)}[log p(x_a|s_a, c_a)] − E_{q(c_a|x_a)}[D_{KL}(q(s_a|c_a, x_a) ‖ p(s_a|c_a))] ) + Σ_{a<b} ( H(c_a|x_a) + H(c_b|x_b) − λ E_{q(c_a,c_b|x_a,x_b)}[d²(c_a, c_b)] )   (7)
In Eq. 7, in addition to entropy-based confidence penalties known as mode collapse regularizers (Pereyra et al., 2017), the distance measure d(ca, cb) encourages a consensus on the categorical assignment controlled by λ ≥ 0, the coupling hyperparameter. We refer to the model in Eq. 7 as cpl-mixVAE (Fig. 1a). In cpl-mixVAE, VAE arms try to achieve identical categorical assignments while independently learning their own style variables. In experiments, we set λ = 1 universally. While the bottleneck architecture already encourages interpretable continuous variables, this formulation can be easily extended to include an additional hyperparameter to promote disentanglement of continuous variables as in β-VAE (Higgins et al., 2017). Additional analyses to assess the sensitivity of the cpl-mixVAE’s performance to its coupling factor can be found in the Supplementary Section G.
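The pairwise terms added in Eq. 7, beyond the per-arm variational terms, can be sketched as follows. This is illustrative rather than the paper's code: sq_dist stands for any squared distance on the probability simplex (defined below in Secs. 3.2-3.3), and the expectation is approximated by the mini-batch mean.

```python
import torch

def coupling_terms(c_logits_a, c_logits_b, sq_dist, lam=1.0):
    """Confidence penalties plus the consensus penalty for one pair of arms (Eq. 7)."""
    log_q_a = torch.log_softmax(c_logits_a, dim=-1)
    log_q_b = torch.log_softmax(c_logits_b, dim=-1)
    q_a, q_b = log_q_a.exp(), log_q_b.exp()
    ent_a = -(q_a * log_q_a).sum(dim=-1).mean()   # H(c_a | x_a)
    ent_b = -(q_b * log_q_b).sum(dim=-1).mean()   # H(c_b | x_b)
    consensus = sq_dist(q_a, q_b).mean()          # E[d^2(c_a, c_b)]
    return ent_a + ent_b - lam * consensus        # added to the per-arm terms of Eq. 7
```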
It may be instructive to cast Eq. 7 in an equivalent constrained optimization form.
Remark 1. The A-arm VAE framework is a collection of constrained variational models as follows:
max  Σ_{a=1}^{A} E_{q(s_a,c_a|x_a)}[log p(x_a|s_a, c_a)] − E_{q(c_a|x_a)}[D_{KL}(q(s_a|c_a, x_a) ‖ p(s_a|c_a))] + H(c_a|x_a)
s.t.  E_{q(c_a|x_a)}[d²(c_a, c_b)] < ε   (8)
where ε denotes the strength of the consensus constraint. Here, c_b indicates the category assigned by any one of the arms, b ∈ {1, . . . , A}, imposing structure on the discrete variable to approximate its prior distribution.
Distance between categorical variables. d(c_a, c_b) denotes the distance between a pair of |c|-dimensional unordered categorical variables, which are associated with probability vectors with non-negative entries and a sum-to-one constraint that form a K-dimensional simplex, where K = |c|. In the real space, a typical choice to compute the distance between two vectors is Euclidean geometry. However, this geometry is not suitable for probability vectors. Here, we utilize Aitchison geometry (Aitchison, 1982; Egozcue et al., 2003), which defines a vector space on the simplex. Accordingly, the distance in the simplex, i.e. d_{S^K}(c_a, c_b), is defined as d_{S^K}(c_a, c_b) = ‖clr(c_a) − clr(c_b)‖_2, ∀ c_a, c_b ∈ S^K, where clr(·) denotes the isometric centered log-ratio transformation in the simplex. This categorical distance satisfies the conditions of a mathematical metric according to Aitchison geometry.
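The clr transform and the resulting squared Aitchison distance are short to implement; the following is a minimal sketch in which the eps smoothing for numerical stability is an assumption, not part of the definition.

```python
import torch

def clr(p, eps=1e-8):
    """Centered log-ratio transform of points on the simplex (eps added for stability)."""
    logp = torch.log(p + eps)
    return logp - logp.mean(dim=-1, keepdim=True)

def aitchison_sq_dist(p, q):
    """Squared Aitchison distance: ||clr(p) - clr(q)||_2^2 along the last dimension."""
    return ((clr(p) - clr(q)) ** 2).sum(dim=-1)
```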
3.3 SEEKING CONSENSUS IN THE SIMPLEX
An instance of the mode collapse problem (Lucas et al., 2019) manifests itself in the minimization of d_{S^K}(c_a, c_b) (Eq. 7): its trivial local optima encourage the network to abuse the discrete latent factor by ignoring many of the available categories. In the extreme case, the representations can collapse onto a single category; c_a = c_b = c_0. In this scenario, the continuous variable is compelled to act as a primary latent factor, while the model fails to deliver an interpretable mixture representation despite achieving an overall low loss value. To avoid such undesirable local equilibria while training, we add perturbations to the categorical representation of each arm. If posterior probabilities in the simplex have small dispersion, the perturbed distance calculation overstates the discrepancies. Thus, instead of minimizing d²_{S^K}(c_a, c_b), we minimize a perturbed distance
d²_σ(c_a, c_b) = Σ_k ( σ_{a_k}^{-1} log c_{a_k} − σ_{b_k}^{-1} log c_{b_k} )²,
which corresponds to the distance between additively perturbed c_a and c_b vectors in Aitchison geometry. Here, σ²_{a_k} and σ²_{b_k} indicate the mini-batch variances of the k-th category, for arms a and b. We next show that the perturbed distance d_σ(·) is bounded by d_{S^K}(·) and non-negative values ρ_u, ρ_l:
Proposition 3. Suppose c_a, c_b ∈ S^K, where S^K is a simplex of K > 0 parts. If d_{S^K}(c_a, c_b) denotes the distance in Aitchison geometry and d²_σ(c_a, c_b) = Σ_k ( σ_{a_k}^{-1} log c_{a_k} − σ_{b_k}^{-1} log c_{b_k} )² denotes a perturbed distance, then
d²_{S^K}(c_a, c_b) − ρ_l ≤ d²_σ(c_a, c_b) ≤ d²_{S^K}(c_a, c_b) + ρ_u
where ρ_u, ρ_l ≥ 0, ρ_u = K(τ²_{σ_u} + τ²_c) + 2Δ_σ τ_c, ρ_l = Δ²_σ/K − K τ²_{σ_l}, τ_c = max_k {log c_{a_k} − log c_{b_k}}, τ_{σ_u} = max_k {g_k}, τ_{σ_l} = max_k {−g_k}, Δ_σ = Σ_k g_k, and g_k = (σ_{a_k}^{-1} − 1) log c_{a_k} − (σ_{b_k}^{-1} − 1) log c_{b_k}. (Proof in Supplementary Section E)
Thus, when ca and cb are similar and their spread is not small, dσ(ca, cb) closely approximates dSK (ca, cb). Otherwise, it diverges from dSK (·) to avoid mode collapse.
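A sketch of the perturbed distance is given below. Note the hedges: σ_{a_k} and σ_{b_k} are read here as mini-batch standard deviations of the k-th category probability of each arm, and this reading of the per-category scales (as well as the eps smoothing) is an assumption of the sketch rather than a statement of the paper's exact implementation.

```python
import torch

def perturbed_sq_dist(p, q, eps=1e-8):
    """Sketch of the perturbed squared distance d_sigma^2 of Section 3.3.

    p, q: (batch, K) categorical posteriors of arms a and b.
    """
    sig_p = p.std(dim=0, keepdim=True) + eps   # per-category mini-batch spread, arm a
    sig_q = q.std(dim=0, keepdim=True) + eps   # per-category mini-batch spread, arm b
    diff = torch.log(p + eps) / sig_p - torch.log(q + eps) / sig_q
    return (diff ** 2).sum(dim=-1)
```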
4 EXPERIMENTS
We used four datasets: dSprites, MNIST, and two single-cell RNA sequencing (scRNA-seq) datasets; Smart-seq ALM-VISp (Tasic et al., 2018) and 10X MOp (Yao et al., 2021). Although dSprites and MNIST datasets do not require high-dimensional settings for mixture representation, to facilitate comparisons of cpl-mixVAE with earlier methods, first we report the results for these benchmark datasets. We trained three unsupervised VAE-based methods for mixture modeling: JointVAE (Dupont, 2018), CascadeVAE (Jeong & Song, 2019), and ours (cpl-mixVAE). For MNIST, we additionally trained the popular InfoGAN (Chen et al., 2016) as the most comparable GAN-based model. To show the interpretability of the mixture representations, (i) for the discrete latent factor, we report the accuracy (ACC) of categorical assignments and the DKL(q(c)‖p(c))), (ii) for the continuous variable, we perform latent traversal analysis by fixing the discrete factor and changing the continuous variable according to p(s|c,x). We calculated the accuracy by using minimum weight matching (Kuhn, 1955) to match the categorical variables obtained by cpl-mixVAE with the available cluster labels for each dataset. Additionally, we report the computational efficiency (number of iterations per second) to compare the training complexity of the multi-arm framework against earlier methods (Table 1). All reported numbers for cpl-mixVAE models are average accuracies calculated across arms. In VAE-based models, to sample from q(ca|xa), we use the Gumbel-softmax distribution (Jang et al., 2016; Maddison et al., 2014). In cpl-mixVAE, each arm received an augmented copy of the original input generated by the deep generative augmenter (Supplementary Section F) during training. Details of the network architectures and training settings can be found in Supplementary Section L.
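The matching-based accuracy used above is standard; a minimal sketch with SciPy's Hungarian solver is shown below (array names are illustrative, and the reference labels are the published cluster labels of each dataset).

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def matched_accuracy(pred, true, n_classes):
    """Categorical-assignment accuracy via minimum-weight matching (Kuhn, 1955)."""
    cost = np.zeros((n_classes, n_classes))
    for p, t in zip(pred, true):
        cost[p, t] -= 1              # more co-occurrences -> lower matching cost
    rows, cols = linear_sum_assignment(cost)
    mapping = dict(zip(rows, cols))
    return float(np.mean([mapping[p] == t for p, t in zip(pred, true)]))
```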
4.1 BENCHMARK DATASETS
dSprites. dSprites is procedurally generated from discrete (3 shapes) and continuous (6 style factors: scale, rotation, and position) latent factors. Based on the uniform distribution of classes, we used a 2-arm cpl-mixVAE with |c| = 3 and |s| = 6 to learn interpretable representations. Results in Table 1 show that our method outperforms the other methods in terms of categorical assignment accuracy. In addition to demonstrating the traversal results (Fig. 2, bottom row), we report disentanglement scores (DS in Table 1). Even though the continuous factors do not depend on the discrete factors in this synthetic dataset, we did not change the architecture and expected the network to infer this independence. For a fair comparison, we used the same disentanglement metric implemented for CascadeVAE (Jeong & Song, 2019).
MNIST. Similarly, due to the uniform distribution of digit labels in MNIST, we again used a 2-arm cpl-mixVAE model. Following the convention (Dupont, 2018; Jeong & Song, 2019; Bouchacourt
et al., 2017), each arm of cpl-mixVAE uses a 10-dimensional categorical variable representing digits (type), and a 10-dimensional continuous random variable representing the writing style (state). Table 1 displays the accuracy of the categorical assignment and the discrepancy between q(c) and p(c) for InfoGAN, two 1-arm VAE methods (JointVAE and CascadeVAE), and cpl-mixVAE with 2 arms. Additionally, to isolate the impact of data augmentation in training, we trained JointVAE† and CascadeVAE† where the models were trained with the same augmented copies of the original MNIST dataset as cpl-mixVAE. The results in Table 1 suggest that data augmentation by itself does not enhance the performance. Fig. 2 (top row) illustrates the continuous latent traversals for four dimensions of the state variable inferred by cpl-mixVAE, where each row corresponds to a different dimension of the categorical variable, and the state variable monotonically changes across columns. Both results in Table 1 and Fig. 2 show that cpl-mixVAE achieved an interpretable mixture representation with the highest categorical assignment accuracy.
Summary. cpl-mixVAE improves the discrete density approximation and infers better mixture representations. It outperforms earlier methods, without using extraneous optimization or heuristic channel capacities. Beyond performance and robustness, its computational cost is also comparable to that of the baselines.
4.2 SINGLE-CELL RNA SEQUENCING DATA
In these datasets, the observations are individual cells and each observation consists of the expression of thousands of genes. Here, we used two scRNA-seq datasets: (i) Smart-seq ALM-VISp (Tasic et al., 2018) and (ii) 10X MOp (Yao et al., 2021). The Smart-seq dataset includes transcriptomic profiles of more than 10,000 genes for ∼22,000 cells from the mouse anterior lateral motor cortex (ALM) and the primary visual cortex (VISp). The 10X MOp dataset profiles ∼123,000 cells in the mouse primary motor cortex (MOp) with the droplet-based 10X Genomics Chromium platform. 10X-based data often display more gene dropouts, especially for genes with lower expression levels. scRNA-seq datasets are significantly more complex than typical benchmark datasets due to (i) the large number of cell types (discrete variable), and (ii) class imbalance; in the 10X MOp dataset, for instance, the most- and the least-abundant cell types include 17,000 and 20 samples, respectively. Moreover, whether the observed diversity corresponds to discrete variability or a continuum is an ongoing debate in neuroscience (Scala et al., 2020). While using genes that are differentially expressed in subsets of cells, known as marker genes (MGs) (Trapnell, 2015), is a common approach to define cell types, the identified genes rarely obey the idealized MG definition in practice. Here, we focus on neuronal cells and use a subset of the 5,000 highest-variance genes. The original MG-based studies for each dataset suggested 115 (Smart-seq data) (Tasic et al., 2018) and 140 (10X-based data) (Yao et al., 2021) discrete neuronal types.
Neuron type identification. Based on the suggested taxonomies in (Tasic et al., 2018; Yao et al., 2021), for the Smart-seq ALM-VISp data, we used 115- and 2-dimensional discrete and continuous variables, and for the 10X MOp data, we used 140- and 2-dimensional discrete and continuous latent variables. We compared the suggested cell types in (Tasic et al., 2018; Yao et al., 2021) with the discrete representations that are inferred from VAE models. Table 1 and Fig. 3(a-b) demonstrate the
performance of a 2-arm cpl-mixVAE model against JointVAE and CascadeVAE. In Fig. 3a.1 and 3b.1, we observe that for both datasets, JointVAE succeeds in identifying (sub)classes of neurons, e.g. the excitatory class or the Sst subclass, but not neuronal types at the leaf nodes. On the other hand, CascadeVAE learns an almost uniform distribution over all types despite a sizeable difference between the relative abundances of neuronal types (Fig. 3a.2 and 3b.2). Our results in Fig. 3(a-b) clearly show that cpl-mixVAE outperforms JointVAE and CascadeVAE in identifying meaningful known cell types. The confusion matrices in Fig. 3a.3 and 3b.3 demonstrate that even the inaccurate categorical assignments of cpl-mixVAE are still close to the matrix diagonals, suggesting a small cophenetic distance. That is, those cells are still assigned to nearby cell types in the dendrogram.
Using A > 2. Unlike the discussed benchmark datasets, the neuronal types are not uniformly distributed. Accordingly, we also investigated the accuracy improvement for categorical assignment when more than two arms are used. Fig. 3c illustrates the accuracy improvement with respect to a single autoencoder model, i.e. JointVAE, in agreement with our theoretical findings.
Identifying genes regulating cell activity. To examine the role of the continuous latent variable, we applied a similar traversal analysis to that used for the benchmark datasets. For a given cell sample and its discrete type, we changed each dimension of the continuous variable using the conditional distribution, and inspected gene expression changes caused by continuous variable alterations. Fig. 4 shows the results of the continuous traversal study for JointVAE and cpl-mixVAE, for two excitatory neurons belonging to the “L5 NP” (cell type (I)) and “L6 CT” (cell type (II)) sub-classes in ALM and MOp regions. Note that here, JointVAE is equivalent to a 1-arm VAE, with the exception of the type dependence of the state variable. Since CascadeVAE did not learn meaningful clustering of cells, even at the subclass level, we did not consider it for the continuous factor analysis. In each sub-figure, the latent traversal is color-mapped to normalized reconstructed expression values, where the y-axis corresponds to one dimension of the continuous variable, and the x-axis corresponds to three gene subsets, namely (i) MGs for the two excitatory types, (ii) immediate early genes (IEGs), and (iii) housekeeping gene (HKG) subgroups (Hrvatin et al., 2018; Tarasenko et al., 2017). For cpl-mixVAE (Fig. 4b), the normalized expression of the reported MGs as indicators for excitatory cell types (discrete factors) is unaffected by changes of identified continuous variables. In contrast, for JointVAE (Fig. 4a), we observed that the normalized expression of some MGs (5 out of 10) are changed due to the continuous factor traversal. Additionally, we found that the expression changes inferred by cpl-mixVAE for IEGs and HKGs are essentially monotonically linked to the continuous variable, confirming that the expression of IEGs and HKGs depends strongly on the cell activity variations under different metabolic and environmental conditions. Conversely, JointVAE
fails to reveal such activity-regulated monotonicity for IEGs and HKGs. Furthermore, our results for cpl-mixVAE reveal that the expression of activity-regulated genes depends on the cell type, i.e. IEGs and HKGs respond differently to activation depending on their cell types (compare rows I and II in Fig. 4b). However, in Fig. 4a, since the baseline JointVAE does not take into account the dependency of discrete and continuous factors, it fails to reveal the dependence of activity-regulated expression to the cell type, and therefore produces identical expressions for both types (I) and (II). These findings are consistent over multiple randomly initialized runs (Supplementary Section K.1). See Supplementary Section K.2 for more results on other cell types and gene subsets.
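The traversal procedure used above is simple to sketch: the type is held fixed while one continuous dimension is swept and the decoded expression is inspected. In the sketch below, decoder, c_onehot and s_ref stand in for a trained arm's decoder, a fixed one-hot type and a reference state vector; they are placeholders, not the paper's API.

```python
import torch

def traverse_state(decoder, c_onehot, s_ref, dim, values):
    """Decode while sweeping one continuous (state) dimension with the type held fixed."""
    rows = []
    for v in values:
        s = s_ref.clone()
        s[dim] = v
        rows.append(decoder(s.unsqueeze(0), c_onehot.unsqueeze(0)))
    # Each decoded row can then be compared across marker, immediate early and
    # housekeeping genes, as in Fig. 4.
    return torch.cat(rows, dim=0)
```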
Summary. The cpl-mixVAE model successfully identified the majority of known excitatory and inhibitory neurons in multiple cortical regions. Our findings suggest that cpl-mixVAE, by acknowledging the dependencies of continuous and categorical factors, captures relevant and interpretable continuous variability that can provide insight when deciphering the molecular mechanisms shaping the landscape of biological states, e.g. due to metabolism or disease.
4.3 ABLATION STUDIES
To elucidate the success of the A-arm VAE framework in mixture modeling, we investigate the categorical assignment performance under different training settings. Since CascadeVAE does not learn the categorical factors by variational inference, here we mainly study JointVAE (as a 1-arm VAE) and cpl-mixVAE (as a 2-arm VAE). In Section 4.1, we show that data augmentation by itself does not enhance the categorical assignment (JointVAE†). To understand whether architectural differences put JointVAE at a disadvantage, we trained JointVAE‡ (Table S1), which uses the same architecture as the one used in cpl-mixVAE. JointVAE‡ uses the same learning procedure as JointVAE, but its convolutional layers are replaced by fully-connected layers (see Supplementary Sections J and L for details). The result for JointVAE‡ suggests that the superiority of cpl-mixVAE is not due to the network architecture either. We also examined the performance changes of the proposed 2-arm cpl-mixVAE under three different settings: (i) cpl-mixVAE∗, where coupled networks are not independent and network parameters are shared; (ii) cpl-mixVAEa, where only affine transformations are used for data augmentation; and (iii) cpl-mixVAE (s ⊥ c), where the state variable is independent of the discrete variable (Table S1). Our results show that the proposed cpl-mixVAE obtained the best categorical assignments across all training settings. We also examined the accuracy of categorical assignments for the cpl-mixVAE model under different dimensions of the discrete latent variable, for both the MNIST and scRNA-seq datasets (see Supplementary Section J). We experimentally observe that while JointVAE suffers from sensitivity to empirical choices of |c|, cpl-mixVAE is more robust in encoding the discrete variability, without suffering from mode collapse (Fig. S9 and Fig. S10).
5 CONCLUSION
We have proposed cpl-mixVAE as a multi-arm framework to apply the power of collective decision making in unsupervised joint representation learning of discrete and continuous factors, scalable to the high-dimensional discrete space. This framework utilizes multiple pairwise-coupled autoencoding arms with a shared categorical variable, while independently learning the continuous variables. Our experimental results for all datasets support the theoretical findings, and show that cpl-mixVAE outperforms comparable models. Importantly, for the challenging scRNA-seq datasets, we showed that the proposed framework identifies biologically interpretable cell types and differentiates between type-dependent and activity-regulated genes. | 1. What is the focus of the paper in terms of generative modeling?
2. Can you describe the novel variant of coupled variational autoencoders introduced in the paper?
3. What is the key novelty of the paper regarding the likelihood decoder in the autoencoder?
4. How does the consensus constraint work in avoiding the mode collapse problem?
5. What are the strengths of the paper, particularly in its technical formulation and experimental results? | Summary Of The Paper
Review | Summary Of The Paper
The paper considers the problem of generative modeling for mixed discrete-continuous data. To this end, it introduces a novel variant of coupled variational autoencoders. Specifically, they reformulate the coupled variational autoencoder paradigm in such a way that, instead of each "arm" of the autoencoder modeling a different modality, the arms model different chunks of the same modality (which comprises a high-dimensional discrete part, in addition to a continuous part).
The key novelty of the paper is the idea of postulating a finite mixture model as the likelihood (decoder) pertaining to each arm of the autoencoder, and stipulating that all arms essentially infer similar categorical posteriors of mixture component assignment for the different chunks pertaining to the same observation. This consensus constraint is enforced on the grounds of the Aitchison geometry in the probability simplex, which avoids the mode collapse problem.
Review
Strengths: The idea is quite novel and smart. The technical formulation is appropriate, and the constraint enforcing mixture component assignment consensus is sound. The experimental results are convincing, as they include a number of benchmarks, including some challenging ones, comparison to some SOTA methods, and an ablation study that offers good insights into how and why the method works.
Weaknesses: I did not find any notable weakness in the paper.
ICLR | Title
Mixture Representation Learning with Coupled Autoencoders
Abstract
Latent representations help unravel complex phenomena. While continuous latent variables can be efficiently inferred, fitting mixed discrete-continuous models remains challenging despite recent progress, especially when the discrete factor dimensionality is large. A pressing application for such mixture representations is the analysis of single-cell omic datasets to understand neuronal diversity and its molecular underpinnings. Here, we propose an unsupervised variational framework using multiple interacting networks called cpl-mixVAE that significantly outperforms state-of-the-art in high-dimensional discrete settings. cpl-mixVAE introduces a consensus constraint on discrete factors of variability across the networks, which regularizes the mixture representations at the time of training. We justify the use of this framework with theoretical results and validate it with experiments on benchmark datasets. We demonstrate that our approach discovers interpretable discrete and continuous variables describing neuronal identity in two single-cell RNA sequencing datasets, each profiling over a hundred cortical neuron types.
1 INTRODUCTION
Fitting mixed discrete-continuous models arises in many contexts. While continuous latent variables can be efficiently inferred with variational and adversarial formulations, inference of continuous and discrete factors in generalized mixture models remains challenging despite recent progress (Jang et al., 2016; Chen et al., 2016; Dupont, 2018; Jeong & Song, 2019). A pressing domain of application for such models is quantifying factors of biological variability in single-cell omic studies. The high-throughput and high-dimensional datasets produced by these studies document a previously unappreciated diversity of gene expression. In neuroscience, this poses the identification of cell types and cell states as a key research area to understand how neuronal circuits function (Bargmann et al., 2014), where the notions of cell types and states can be considered as biological interpretations of discrete and continuous variability. While marker gene based studies suggest the existence of more than 100 neuronal cell types in just a single brain region, there is no agreement on such categorization and interpretation of the remaining continuous variability (Seung & Sümbül, 2014; Zeng & Sanes, 2017; Tasic et al., 2018). Moreover, existing unsupervised, joint continuous-discrete learning methods are tailored for problems with relatively few and equally-abundant discrete components, and accurate inference remains out of reach for these applications.
Deep generative models have previously been applied to single-cell datasets, where the focus is on the cluster identity and it is typically inferred by post-hoc analysis of a continuous factor (Lopez et al., 2018). Deep Gaussian mixture models (Dilokthanakul et al., 2016; Johnson et al., 2016; Jiang et al., 2017) also focus on the identification of categories and do not take interpretability of the remaining continuous variability into account. To address the need for joint inference of interpretable discrete and continuous factors, various adversarial and variational methods have been proposed. While existing adversarial generative models, e.g. InfoGAN (Chen et al., 2016), are susceptible to stability issues (Higgins et al., 2017; Kim & Mnih, 2018), variational autoencoders (VAEs) (Kingma & Welling, 2013) emerge as efficient and more stable alternatives (Tschannen et al., 2018; Zhang et al., 2018; Dupont, 2018; Jeong & Song, 2019). VAE-based approaches approximate the mixture model by assuming a family of distributions qφ and select the member closest to the true model p. Popular choices in VAE implementations include (1) using KL divergence to compute discrepancy between qφ and p, and (2) using a multivariate Gaussian mixture distribution with uniformly distributed discrete and isotropic Gaussian distributed continuous priors. However, such choices may lead to
underestimating the posterior variance (Minka et al., 2005; Blei et al., 2017). Solutions to resolve this issue are mainly applicable in low-dimensional spaces or for continuous factors alone (Deasy et al., 2020; Kingma et al., 2016; Ranganath et al., 2016; Quiroz et al., 2018).
Inspired by collective decision making, we introduce a variational framework using multiple autoencoding arms to jointly infer interpretable finite discrete (categorical) and continuous factors in the presence of high-dimensional discrete space. Coupled-autoencoders have been previously studied in the context of multi-modal recordings, where each arm learns only a continuous latent representation for one of the data modalities (Feng et al., 2014; Gala et al., 2019; Lee & Pavlovic, 2020). Here, we develop a novel pairwise-coupled autoencoder framework for a single data modality. The proposed framework imposes a consensus constraint on the categorical posterior at the time of training and allows dependencies between continuous and categorical factors. We define the consensus constraint based on the Aitchison geometry in the probability simplex, which avoids the mode collapse problem. We show that the coupled multi-arm architecture enhances accuracy, robustness, and interpretability of the inferred factors without requiring any priors on the relative abundances of categories. Finally, on datasets profiling different cortical regions in the mammalian brain, we show that our method can be used to discover neuronal types as discrete categories and type-specific genes regulating the continuous within-type variability, such as metabolic state or disease state.
Related work. There is an extensive body of research on clustering in mixture models (Dilokthanakul et al., 2016; Jiang et al., 2017; Tian et al., 2017; Guo et al., 2016; Locatello et al., 2018b). The idea of improving the clustering performance through seeking a consensus and co-training and ensembling across multiple observations has been explored in both unsupervised (Monti et al., 2003; Kumar & Daumé, 2011) and semi-supervised contexts (Blum & Mitchell, 1998). However, these methods do not consider the underlying continuous variabilities across observations. Moreover, unlike ensemble methods, which pool the results of different trained workers, autoencoding arms seek a consensus at the time of learning in our framework.
The proposed framework does not need any supervision since the individual arms provide a form of prior or weak supervision for each other. In this regard, our paper is related to a body of work that attempts to improve representation learning by using semi-supervised or group-based settings (Bouchacourt et al., 2017; Hosoya, 2019; Nemeth, 2020). Bouchacourt et al. (2017) demonstrated a multi-level variational autoencoder (MLVAE) as a semi-supervised VAE by revealing that observations within groups share the same type. Hosoya (2019) and Nemeth (2020) attempted to improve MLVAE by imposing a weaker condition to the grouped data. In recent studies (Shu et al., 2019; Locatello et al., 2020), a weakly supervised variational setting has been proposed for disentangled representation learning by providing pairs of observations that share at least one underlying factor. These studies rely on learning latent variables in continuous spaces, and have been applied only to image datasets with low-dimensional latent representations.
Recent advances in structured variational methods, such as imposing a prior (Ranganath et al., 2016) or spatio-temporal dependencies (Quiroz et al., 2018) on the latent distribution parameters, allow for scaling to larger dimensions. However, these solutions are not directly applicable to the discrete space, which will be addressed in our A-arm VAE framework.
2 SINGLE MIXTURE VAE FRAMEWORK
For an observation x ∈ R^D, a VAE learns a generative model p_θ(x|z) and a variational distribution q_φ(z|x), where z ∈ R^M is a latent variable with a parameterized distribution p(z) and M ≪ D (Kingma & Welling, 2013). Disentangling different sources of variability into different dimensions of z enables an interpretable selection of latent factors (Higgins et al., 2017; Locatello et al., 2018a). However, the interplay between continuous and discrete variabilities present in many real-world datasets is often overlooked by existing methods. This problem can be addressed within the VAE framework in an unsupervised fashion by introducing a categorical latent variable c denoting the class label, alongside the continuous latent variable s. We refer to the continuous variable s as the state or style variable interchangeably. Assuming s and c are independent random variables, the evidence lower bound (ELBO) (Blei et al., 2017) for a single mixture VAE with the distributions parameterized by θ and φ is given by,
L(φ, θ) = E_{q_φ(s,c|x)}[log p_θ(x|s, c)] − D_{KL}(q_φ(s|x) ‖ p(s)) − D_{KL}(q_φ(c|x) ‖ p(c)).   (1)
Maximizing ELBO in Eq. 1 imposes characteristics on q(s|x) and q(c|x) that can result in underestimation of posterior probabilities such as the mode collapse problem, where the network ignores a
subset of latent variables (Minka et al., 2005; Blei et al., 2017). Recently, VAE-based solutions were proposed by imposing a uniform structure on p(c): akin to β-VAE (Higgins et al., 2017; Burgess et al., 2018), JointVAE (Dupont, 2018) modifies the ELBO by assigning a pair of controlled information capacities for each variational factor, i.e. Cs ∈ R|s| and Cc ∈ R|c|. The main drawback of JointVAE is that its performance is tied to heuristic tuning of |s|× |c| capacities over training iterations so that it is vulnerable to mode collapse in high-dimensional settings. Another recent VAE-based mixture model solution, CascadeVAE (Jeong & Song, 2019), maximizes the ELBO through a semi-gradient-based algorithm by iterating over two separate optimizations for the continuous and categorical variables. While the separation of the optimization steps avoids the mode collapse problem, this separation is valid only when the categorical variable is uniformly distributed. Therefore, its performance strongly depends on the clusters having similar abundances in the dataset. Thus, earlier solutions fall short of learning interpretable mixture representations with high-dimensional discrete variables in real-world applications.
In addition to the issues discussed above, the performance and interpretability of those approaches are further limited by the common assumption that the continuous variable representing the style of the data is independent of the categorical variable. In practice, style often depends on the class label. For instance, even for the well-studied MNIST dataset, the histograms of common digit styles, e.g. “width”, markedly vary for different digits (Supplementary Section I). Moreover, further analysis of the identified continuous factor in the earlier approaches reveals that the independence assumption among q(s|x) and q(c|x) can be significantly violated (see Supplementary Sections H and I).
3 COUPLED MIXTURE VAE FRAMEWORK
The key intuition behind multi-arm networks is cooperation to improve posterior estimation. While the context is different, the popular phrase “wisdom of the crowd” (Surowiecki, 2005) can nevertheless be revealing: when a crowd (multiple arms) needs to make a decision, multiple estimates can increase the expected probability of a correct choice.
3.1 A-ARM VAE FRAMEWORK
We define the A-arm VAE as an A-tuple of independent and architecturally identical autoencoding arms, where the a-th arm parameterizes a mixture model distribution (Fig. 1a). In this framework, individual arms receive a collection of non-identical copies, {xa,xb, . . .} of the given sample, x, belonging to the same category. While each arm has its own mixture representation with potentially non-identical parameters, all arms cooperate to learn q(ca|xa), where ca = cb = · · · , via a cost function at the time of training. Accordingly, a crowd of VAEs with A arms can be formulated as a collection of constrained variational objectives as follows.
max  L_{s_1|c_1}(φ_1, θ_1) + · · · + L_{s_A|c_A}(φ_A, θ_A)   s.t.   c_1 = · · · = c_A   (2)
where L_{s_a|c_a}(φ_a, θ_a) is the variational loss for arm a,
L_{s_a|c_a}(φ_a, θ_a) = E_{q(s_a,c_a|x_a)}[log p(x_a|s_a, c_a)] − E_{q(c_a|x_a)}[D_KL(q(s_a|c_a, x_a) ‖ p(s_a|c_a))] − E_{q(s_a|c_a,x_a)}[D_KL(q(c_a|x_a) ‖ p(c_a))].   (3)
In Eq. 3, the variational loss for each arm is defined according to the graphical model in Fig. 1b, which is built upon the traditional ELBO in Eq. 1 by conditioning the continuous state on the categorical variable (derivation in Supplementary Section B). Therefore, learning an interpretable decomposition of the data relies on accurate assignment (inference) of the categorical latent factor. Propositions 1 and 2 below show that the shared categorical assignment inferred from q(c|x_1, · · · , x_A), under the
c = c_1 = · · · = c_A constraint of the multi-arm framework, improves the accuracy of the categorical assignment on expectation.
Proposition 1. Consider the problem of mixture representation learning in a multi-arm VAE framework. For independent samples from category m, i.e. x_i ∼ p(x|m),
E_{q(x|m)}[log q(c = m | {x_i}_{1:A})]  >  E_{q(x|m)}[log q(c = m | {x_i}_{1:B})],   (4)
where the left-hand posterior is computed s.t. c = c_1 = · · · = c_A and the right-hand one s.t. c = c_1 = · · · = c_B,
if q(m|x_i) < 1 and A > B ≥ 1 denote the numbers of arms. (Proof in Supplementary Section A)
Thus, having more arms increases the expected log posterior for the true categorical latent variable unless it is already at its maximum. Proposition 2. In the A-arm VAE framework, there exists an A that guarantees a true categorical assignment on expectation. That is,
m = arg max_c  E_{q(x|m)}[log q(c | {x_i}_{1:A})],   s.t.   c = c_1 = · · · = c_A.   (5)
(Proof in Supplementary Section A) Accordingly, the consensus constraint is sufficient to enhance inference for mixture representations in the A-arm VAE framework. Our theoretical results show that the required number of arms satisfying Eq. 5 is a function of the categorical distribution and the likelihood (Eq. 15, Supplementary Section A). In the particular case of uniformly distributed categories, one pair of coupled arms is enough to satisfy Eq. 5 (see Corollary 1, Supplementary Section A).
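Propositions 1 and 2 can also be checked numerically on a toy example. The sketch below is not the formal proof (see Supplementary Section A); it draws noisy per-arm posteriors for samples from a fixed category, fuses them under the consensus constraint by multiplying the per-arm likelihood ratios, and shows the expected log posterior of the true category growing with the number of arms A. The three-category likelihood model is an arbitrary assumption made only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
K, true_c = 3, 0
prior = np.full(K, 1.0 / K)

def per_arm_posterior():
    # A noisy posterior that favours the true category on average (toy model).
    logits = rng.normal(loc=np.where(np.arange(K) == true_c, 1.0, 0.0), scale=1.0)
    p = np.exp(logits - logits.max())
    return p / p.sum()

def fused_log_posterior(A):
    # q(c | x_1..x_A) ∝ p(c) * prod_a [q(c | x_a) / p(c)] for independent copies.
    log_q = np.log(prior).copy()
    for _ in range(A):
        log_q += np.log(per_arm_posterior()) - np.log(prior)
    log_q -= log_q.max()
    log_q -= np.log(np.exp(log_q).sum())
    return log_q[true_c]

for A in (1, 2, 4, 8):
    print(A, round(np.mean([fused_log_posterior(A) for _ in range(5000)]), 3))
```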
We emphasize that the proposed framework does not require any weak supervision as in (Bouchacourt et al., 2017). Instead, it relies on representations that are invariant under non-identical copies of observations. Moreover, unlike (Bouchacourt et al., 2017; Shu et al., 2019; Locatello et al., 2020), the multi-arm framework is not restricted to the continuous space.
Arms observe non-identical copies of samples. In the A-arm VAE framework, arms receive non-identical observations that share the discrete variational factor. To achieve this in a fully unsupervised setting, we use type-preserving data augmentation that generates independent and identically distributed copies of data while preserving its categorical identity. For image datasets, conventional transformations such as rotation, scaling, or translation can serve as type-preserving augmentations. However, for non-image datasets, e.g. single-cell data, we seek a generative model that learns transformations representing within-class variability in an unsupervised manner. To this end, inspired by DAGAN (Antoniou et al., 2017) and VAE-GAN (Larsen et al., 2016), we develop a generative model to provide collections of observations for our multi-arm framework (Supplementary Section F). The proposed generative model learns to generate augmented samples in the vicinity of given samples in the latent space, without knowing their types (Eq. 70). In Supplementary Section A, Remark 2, we further discuss an under-exploration scenario in data augmentation, in which the augmented samples are not independently distributed and are concentrated around the given sample.
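For image data, the type-preserving augmentation described above can be as simple as random affine transforms; the sketch below uses torchvision as one convenient way to produce two non-identical copies of the same image for a pair of arms. The transform strengths are illustrative choices, and the paper's learned deep generative augmenter for scRNA-seq data (Supplementary Section F) is not reproduced here.

```python
import torch
from torchvision import transforms

# Illustrative augmentation strengths; the paper does not prescribe these values.
augment = transforms.RandomAffine(degrees=15, translate=(0.1, 0.1), scale=(0.9, 1.1))

def two_arm_copies(x):
    """Return two independently augmented copies sharing the categorical identity."""
    return augment(x), augment(x)

x = torch.rand(1, 28, 28)          # dummy MNIST-sized image tensor (C, H, W)
x_a, x_b = two_arm_copies(x)
```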
3.2 CPL-MIXVAE: PAIRWISE COUPLING IN A-ARM VAE
In the A-arm VAE framework, the mixture representation is obtained through the optimization in Eq. 2. Not only is it challenging to solve the maximization in Eq. 2 due to the equality constraint, but the objective also remains a function of p(c), which is unknown and typically non-uniform. To overcome this, we use an equivalent formulation of Eq. 2 by applying the pairwise coupling paradigm as follows (details of the derivation in Supplementary Section C):
max  ∑_{a=1}^{A} (A − 1) ( E_{q(s_a,c_a|x_a)}[log p(x_a|s_a, c_a)] − E_{q(c_a|x_a)}[D_KL(q(s_a|c_a, x_a) ‖ p(s_a|c_a))] )
     − ∑_{a<b} E_{q(s_a|c_a,x_a)} E_{q(s_b|c_b,x_b)}[D_KL(q(c_a|x_a) q(c_b|x_b) ‖ p(c_a, c_b))]
s.t.  c_a = c_b  ∀ a, b ∈ [1, A], a < b   (6)
We relax the optimization in Eq. 6 into an unconstrained problem by marginalizing the joint distribution over a mismatch measure between categorical variables (see Supplementary Section D):
max  ∑_{a=1}^{A} (A − 1) ( E_{q(s_a,c_a|x_a)}[log p(x_a|s_a, c_a)] − E_{q(c_a|x_a)}[D_KL(q(s_a|c_a, x_a) ‖ p(s_a|c_a))] )
     + ∑_{a<b} ( H(c_a|x_a) + H(c_b|x_b) − λ E_{q(c_a,c_b|x_a,x_b)}[d²(c_a, c_b)] )   (7)
In Eq. 7, in addition to entropy-based confidence penalties known as mode collapse regularizers (Pereyra et al., 2017), the distance measure d(ca, cb) encourages a consensus on the categorical assignment controlled by λ ≥ 0, the coupling hyperparameter. We refer to the model in Eq. 7 as cpl-mixVAE (Fig. 1a). In cpl-mixVAE, VAE arms try to achieve identical categorical assignments while independently learning their own style variables. In experiments, we set λ = 1 universally. While the bottleneck architecture already encourages interpretable continuous variables, this formulation can be easily extended to include an additional hyperparameter to promote disentanglement of continuous variables as in β-VAE (Higgins et al., 2017). Additional analyses to assess the sensitivity of the cpl-mixVAE’s performance to its coupling factor can be found in the Supplementary Section G.
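A minimal two-arm sketch of the objective in Eq. 7, written as a loss to minimize (i.e. the negative of Eq. 7), may help fix ideas. The per-arm reconstruction and KL terms are passed in as precomputed tensors, and simplex_distance is a placeholder for the Aitchison-geometry distance defined below; none of this is the authors' released code.

```python
import torch

def coupled_loss(recon_a, kl_s_a, cat_probs_a,
                 recon_b, kl_s_b, cat_probs_b,
                 simplex_distance, lam=1.0, A=2):
    """Negative of Eq. 7 for a single pair of arms (a, b).

    recon_*     : E_q[log p(x|s,c)] for each arm (scalar tensors)
    kl_s_*      : E_{q(c|x)}[KL(q(s|c,x) || p(s|c))] for each arm
    cat_probs_* : q(c|x) as probability vectors on the simplex
    """
    eps = 1e-8

    def entropy(p):
        return -(p * (p + eps).log()).sum()

    per_arm = (A - 1) * ((recon_a - kl_s_a) + (recon_b - kl_s_b))
    consensus = (entropy(cat_probs_a) + entropy(cat_probs_b)
                 - lam * simplex_distance(cat_probs_a, cat_probs_b) ** 2)
    return -(per_arm + consensus)
```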
It may be instructive to cast Eq. 7 in an equivalent constrained optimization form.
Remark 1. The A-arm VAE framework is a collection of constrained variational models as follows:
max  ∑_{a=1}^{A} E_{q(s_a,c_a|x_a)}[log p(x_a|s_a, c_a)] − E_{q(c_a|x_a)}[D_KL(q(s_a|c_a, x_a) ‖ p(s_a|c_a))] + H(c_a|x_a)
s.t.  E_{q(c_a|x_a)}[d²(c_a, c_b)] < ε   (8)
where ε denotes the strength of the consensus constraint. Here, c_b indicates the category assigned by any one of the arms, b ∈ {1, . . . , A}, imposing structure on the discrete variable to approximate its prior distribution.
Distance between categorical variables. d(c_a, c_b) denotes the distance between a pair of |c|-dimensional un-ordered categorical variables, which are associated with probability vectors with non-negative entries and a sum-to-one constraint that form a K-dimensional simplex, where K = |c|. In the real space, a typical choice to compute the distance between two vectors is Euclidean geometry. However, this geometry is not suitable for probability vectors. Here, we utilize Aitchison geometry (Aitchison, 1982; Egozcue et al., 2003), which defines a vector space on the simplex. Accordingly, the distance in the simplex, i.e. d_{S^K}(c_a, c_b), is defined as d_{S^K}(c_a, c_b) = ‖clr(c_a) − clr(c_b)‖_2, ∀ c_a, c_b ∈ S^K, where clr(·) denotes the isometric centered-log-ratio transformation in the simplex. This categorical distance satisfies the conditions of a mathematical metric according to Aitchison geometry.
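The centered log-ratio map and the resulting simplex distance are short to write down; the numpy sketch below adds a small epsilon for numerical safety, which is an implementation choice rather than part of the definition.

```python
import numpy as np

def clr(p, eps=1e-8):
    """Centered log-ratio transform of a probability vector on the simplex."""
    logp = np.log(p + eps)
    return logp - logp.mean()

def aitchison_distance(p, q):
    """d_{S^K}(p, q) = ||clr(p) - clr(q)||_2 (Aitchison, 1982)."""
    return np.linalg.norm(clr(p) - clr(q))

p = np.array([0.7, 0.2, 0.1])
q = np.array([0.6, 0.3, 0.1])
print(aitchison_distance(p, q))
```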
3.3 SEEKING CONSENSUS IN THE SIMPLEX
An instance of the mode collapse problem (Lucas et al., 2019) manifests itself in the minimization of d_{S^K}(c_a, c_b) (Eq. 7): its trivial local optima encourage the network to abuse the discrete latent factor by ignoring many of the available categories. In the extreme case, the representations can collapse onto a single category; c_a = c_b = c_0. In this scenario, the continuous variable is compelled to act as the primary latent factor, and the model fails to deliver an interpretable mixture representation despite achieving an overall low loss value. To avoid such undesirable local equilibria during training, we add perturbations to the categorical representation of each arm. If the posterior probabilities in the simplex have small dispersion, the perturbed distance calculation overstates the discrepancies. Thus, instead of minimizing d²_{S^K}(c_a, c_b), we minimize a perturbed distance
d²_σ(c_a, c_b) = ∑_k (σ_ak^{−1} log c_ak − σ_bk^{−1} log c_bk)², which corresponds to the distance between additively perturbed c_a and c_b vectors in Aitchison geometry. Here, σ²_ak and σ²_bk indicate the mini-batch variances of the k-th category, for arms a and b. We next show that the perturbed distance d_σ(·) is bounded by d_{S^K}(·) and non-negative values ρ_u, ρ_l:
Proposition 3. Suppose c_a, c_b ∈ S^K, where S^K is a simplex of K > 0 parts. If d_{S^K}(c_a, c_b) denotes the distance in Aitchison geometry and d²_σ(c_a, c_b) = ∑_k (σ_ak^{−1} log c_ak − σ_bk^{−1} log c_bk)² denotes a perturbed distance, then
d²_{S^K}(c_a, c_b) − ρ_l ≤ d²_σ(c_a, c_b) ≤ d²_{S^K}(c_a, c_b) + ρ_u
where ρ_u, ρ_l ≥ 0, ρ_u = K(τ²_σu + τ²_c) + 2Δ_σ τ_c, ρ_l = Δ²_σ/K − K τ²_σl, τ_c = max_k {log c_ak − log c_bk}, τ_σu = max_k {g_k}, τ_σl = max_k {−g_k}, Δ_σ = ∑_k g_k, and g_k = (σ_ak^{−1} − 1) log c_ak − (σ_bk^{−1} − 1) log c_bk. (Proof in Supplementary Section E)
Thus, when c_a and c_b are similar and their spread is not small, d_σ(c_a, c_b) closely approximates d_{S^K}(c_a, c_b). Otherwise, it diverges from d_{S^K}(·) to avoid mode collapse.
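Directly from its definition, the perturbed distance can be sketched as follows; here the per-category mini-batch variances are assumed to be computed over each arm's batch of posterior vectors, and the epsilon terms are illustrative numerical safeguards rather than part of the paper's formulation.

```python
import numpy as np

def perturbed_distance(c_a, c_b, var_a, var_b, eps=1e-8):
    """d_sigma from Section 3.3: sqrt( sum_k (log c_ak / s_ak - log c_bk / s_bk)^2 )."""
    term = (np.log(c_a + eps) / np.sqrt(var_a + eps)
            - np.log(c_b + eps) / np.sqrt(var_b + eps))
    return np.sqrt(np.sum(term ** 2))

# Mini-batch posteriors and per-category variances (illustrative values).
batch_a = np.array([[0.7, 0.2, 0.1], [0.6, 0.3, 0.1]])
batch_b = np.array([[0.65, 0.25, 0.10], [0.55, 0.30, 0.15]])
var_a, var_b = batch_a.var(axis=0), batch_b.var(axis=0)
print(perturbed_distance(batch_a[0], batch_b[0], var_a, var_b))
```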
4 EXPERIMENTS
We used four datasets: dSprites, MNIST, and two single-cell RNA sequencing (scRNA-seq) datasets; Smart-seq ALM-VISp (Tasic et al., 2018) and 10X MOp (Yao et al., 2021). Although the dSprites and MNIST datasets do not require high-dimensional settings for mixture representation, to facilitate comparisons of cpl-mixVAE with earlier methods, we first report the results for these benchmark datasets. We trained three unsupervised VAE-based methods for mixture modeling: JointVAE (Dupont, 2018), CascadeVAE (Jeong & Song, 2019), and ours (cpl-mixVAE). For MNIST, we additionally trained the popular InfoGAN (Chen et al., 2016) as the most comparable GAN-based model. To show the interpretability of the mixture representations, (i) for the discrete latent factor, we report the accuracy (ACC) of categorical assignments and D_KL(q(c) ‖ p(c)); (ii) for the continuous variable, we perform latent traversal analysis by fixing the discrete factor and changing the continuous variable according to p(s|c, x). We calculated the accuracy by using minimum weight matching (Kuhn, 1955) to match the categorical variables obtained by cpl-mixVAE with the available cluster labels for each dataset. Additionally, we report the computational efficiency (number of iterations per second) to compare the training complexity of the multi-arm framework against earlier methods (Table 1). All reported numbers for cpl-mixVAE models are average accuracies calculated across arms. In VAE-based models, to sample from q(c_a|x_a), we use the Gumbel-softmax distribution (Jang et al., 2016; Maddison et al., 2014). In cpl-mixVAE, each arm received an augmented copy of the original input generated by the deep generative augmenter (Supplementary Section F) during training. Details of the network architectures and training settings can be found in Supplementary Section L.
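The categorical-assignment accuracy in Table 1 depends on a minimum-weight matching between inferred categories and reference labels. A standard way to compute it is with the Hungarian solver in scipy, sketched below; this is a common convention for cluster accuracy, not necessarily the authors' exact evaluation script.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def matched_accuracy(pred, true, n_categories):
    """Accuracy after optimally matching predicted categories to reference labels."""
    cost = np.zeros((n_categories, n_categories))
    for p, t in zip(pred, true):
        cost[p, t] -= 1.0                    # reward co-occurrence
    row, col = linear_sum_assignment(cost)   # minimum-weight matching (Kuhn, 1955)
    mapping = dict(zip(row, col))
    return np.mean([mapping[p] == t for p, t in zip(pred, true)])

pred = np.array([0, 0, 1, 1, 2, 2])
true = np.array([2, 2, 0, 0, 1, 1])
print(matched_accuracy(pred, true, 3))       # 1.0 after matching
```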
4.1 BENCHMARK DATASETS
dSprites. dSprites is procedurally generated from discrete (3 shapes) and continuous (6 style factors: scale, rotation, and position) latent factors. Based on the uniform distribution of classes, we used a 2-arm cpl-mixVAE with |c| = 3 and |s| = 6 to learn interpretable representations. Results in Table 1 show that our method outperforms the other methods in terms of categorical assignment accuracy. In addition to demonstrating the traversal results (Fig. 2, bottom row), we report disentanglement scores (DS in Table 1). Even though the continuous factors do not depend on the discrete factors in this synthetic dataset, we did not change the architecture and expected the network to infer this independence. For a fair comparison, we used the same disentanglement metric implemented for CascadeVAE (Jeong & Song, 2019).
MNIST. Similarly, due to the uniform distribution of digit labels in MNIST, we again used a 2-arm cpl-mixVAE model. Following the convention (Dupont, 2018; Jeong & Song, 2019; Bouchacourt
et al., 2017), each arm of cpl-mixVAE uses a 10-dimensional categorical variable representing digits (type), and a 10-dimensional continuous random variable representing the writing style (state). Table 1 displays the accuracy of the categorical assignment and the discrepancy between q(c) and p(c) for InfoGAN, two 1-arm VAE methods (JointVAE and CascadeVAE), and cpl-mixVAE with 2 arms. Additionally, to isolate the impact of data augmentation in training, we trained JointVAE† and CascadeVAE† where the models were trained with the same augmented copies of the original MNIST dataset as cpl-mixVAE. The results in Table 1 suggest that data augmentation by itself does not enhance the performance. Fig. 2 (top row) illustrates the continuous latent traversals for four dimensions of the state variable inferred by cpl-mixVAE, where each row corresponds to a different dimension of the categorical variable, and the state variable monotonically changes across columns. Both results in Table 1 and Fig. 2 show that cpl-mixVAE achieved an interpretable mixture representation with the highest categorical assignment accuracy.
Summary. cpl-mixVAE improves the discrete density approximation and infers better mixture representations. It outperforms earlier methods, without using extraneous optimization or heuristic channel capacities. Beyond performance and robustness, its computational cost is also comparable to that of the baselines.
4.2 SINGLE-CELL RNA SEQUENCING DATA
In these datasets the observations are individual cells and each observation consists of the expression of thousands of genes. Here, we used two scRNA-seq datasets: (i) Smart-seq ALM-VISp (Tasic et al., 2018) and (ii) 10X MOp (Yao et al., 2021). The Smart-seq dataset includes transcriptomic profiles of more than 10,000 genes for ∼22,000 cells from the mouse anterior lateral motor cortex (ALM) and the primary visual cortex (VISp). The 10X MOp dataset profiles ∼123,000 cells in the mouse primary motor cortex (MOp) with the droplet-based 10X Genomics Chromium platform. 10X-based data often display more gene dropouts, especially for genes with lower expression levels. scRNA-seq datasets are significantly more complex than typical benchmark datasets due to (i) the large number of cell types (discrete variable), and (ii) class imbalance; in the 10X MOp dataset, for instance, the most- and the least-abundant cell types include 17,000 and 20 samples, respectively. Moreover, whether the observed diversity corresponds to discrete variability or a continuum is an ongoing debate in neuroscience (Scala et al., 2020). While using genes that are differentially expressed in subsets of cells, known as marker genes (MGs) (Trapnell, 2015), is a common approach to define cell types, the identified genes rarely obey the idealized MG definition in practice. Here, we focus on neuronal cells and use a subset of the 5,000 highest-variance genes. The original MG-based studies for each dataset suggested 115 (Smart-seq data) (Tasic et al., 2018) and 140 (10X-based data) (Yao et al., 2021) discrete neuronal types.
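The gene-filtering step above (keeping the 5,000 highest-variance genes among neuronal cells) is simple to sketch; the log1p normalization used here is an assumption for illustration, since the exact preprocessing pipeline is described in the supplementary material.

```python
import numpy as np

def top_variance_genes(counts, n_genes=5000):
    """counts: cells x genes matrix; returns column indices of the retained genes."""
    x = np.log1p(counts)                        # assumed normalization
    order = np.argsort(x.var(axis=0))[::-1]     # genes sorted by decreasing variance
    return np.sort(order[:n_genes])

counts = np.random.poisson(1.0, size=(200, 10000)).astype(float)
keep = top_variance_genes(counts)
filtered = counts[:, keep]
```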
Neuron type identification. Based on the suggested taxonomies in (Tasic et al., 2018; Yao et al., 2021), for the Smart-seq ALM-VISp data, we used 115- and 2-dimensional discrete and continuous variables, and for the 10X MOp data, we used 140- and 2-dimensional discrete and continuous latent variables. We compared the suggested cell types in (Tasic et al., 2018; Yao et al., 2021) with the discrete representations that are inferred from VAE models. Table 1 and Fig. 3(a-b) demonstrate the
performance of a 2-arm cpl-mixVAE model against JointVAE and CascadeVAE. In Fig. 3a.1 and 3b.1, we observe that for both datasets, JointVAE succeeds in identifying (sub)classes of neurons, e.g. the excitatory class or the Sst subclass, but not neuronal types at leaf nodes. On the other hand, CascadeVAE learns an almost uniform distribution over all types despite a sizeable difference between the relative abundances of neuronal types (Fig. 3a.2 and 3b.2). Our results in Fig. 3(a-b) clearly show that cpl-mixVAE outperforms JointVAE and CascadeVAE in identifying meaningful known cell types. The confusion matrices in Fig. 3a.3 and 3b.3 demonstrate that even the inaccurate categorical assignments of cpl-mixVAE are still close to the matrix diagonals, suggesting a small cophenetic distance. That is, those cells are still assigned to nearby cell types in the dendrogram.
Using A > 2. Unlike the discussed benchmark datasets, the neuronal types are not uniformly distributed. Accordingly, we also investigated the accuracy improvement for categorical assignment when more than two arms are used. Fig. 3c illustrates the accuracy improvement with respect to a single autoencoder model, i.e. JointVAE, in agreement with our theoretical findings.
Identifying genes regulating cell activity. To examine the role of the continuous latent variable, we applied a similar traversal analysis to that used for the benchmark datasets. For a given cell sample and its discrete type, we changed each dimension of the continuous variable using the conditional distribution, and inspected gene expression changes caused by continuous variable alterations. Fig. 4 shows the results of the continuous traversal study for JointVAE and cpl-mixVAE, for two excitatory neurons belonging to the “L5 NP” (cell type (I)) and “L6 CT” (cell type (II)) sub-classes in ALM and MOp regions. Note that here, JointVAE is equivalent to a 1-arm VAE, with the exception of the type dependence of the state variable. Since CascadeVAE did not learn meaningful clustering of cells, even at the subclass level, we did not consider it for the continuous factor analysis. In each sub-figure, the latent traversal is color-mapped to normalized reconstructed expression values, where the y-axis corresponds to one dimension of the continuous variable, and the x-axis corresponds to three gene subsets, namely (i) MGs for the two excitatory types, (ii) immediate early genes (IEGs), and (iii) housekeeping gene (HKG) subgroups (Hrvatin et al., 2018; Tarasenko et al., 2017). For cpl-mixVAE (Fig. 4b), the normalized expression of the reported MGs as indicators for excitatory cell types (discrete factors) is unaffected by changes of identified continuous variables. In contrast, for JointVAE (Fig. 4a), we observed that the normalized expression of some MGs (5 out of 10) are changed due to the continuous factor traversal. Additionally, we found that the expression changes inferred by cpl-mixVAE for IEGs and HKGs are essentially monotonically linked to the continuous variable, confirming that the expression of IEGs and HKGs depends strongly on the cell activity variations under different metabolic and environmental conditions. Conversely, JointVAE
fails to reveal such activity-regulated monotonicity for IEGs and HKGs. Furthermore, our results for cpl-mixVAE reveal that the expression of activity-regulated genes depends on the cell type, i.e. IEGs and HKGs respond differently to activation depending on their cell types (compare rows I and II in Fig. 4b). However, in Fig. 4a, since the baseline JointVAE does not take into account the dependency of discrete and continuous factors, it fails to reveal the dependence of activity-regulated expression on the cell type, and therefore produces identical expressions for both types (I) and (II). These findings are consistent over multiple randomly initialized runs (Supplementary Section K.1). See Supplementary Section K.2 for more results on other cell types and gene subsets.
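The traversal analysis itself reduces to fixing the inferred categorical code, sweeping one continuous dimension, and decoding; a sketch is given below. The decoder is a placeholder for a trained arm's decoder, the ±3 sweep range is an illustrative choice, and the assumption that the decoder takes the concatenation [s; c] is made only for this example.

```python
import torch

def traverse_continuous(decoder, c_onehot, s_mean, dim, steps=7, span=3.0):
    """Decode while sweeping one continuous dimension, holding the category fixed."""
    outputs = []
    for value in torch.linspace(-span, span, steps):
        s = s_mean.clone()
        s[dim] = value
        z = torch.cat([s, c_onehot], dim=-1)   # assumes decoder input is [s; c]
        outputs.append(decoder(z.unsqueeze(0)))
    return torch.cat(outputs, dim=0)
```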
Summary. The cpl-mixVAE model successfully identified the majority of known excitatory and inhibitory neurons in multiple cortical regions. Our findings suggest that cpl-mixVAE, by acknowledging the dependencies of continuous and categorical factors, captures relevant and interpretable continuous variability that can provide insight when deciphering the molecular mechanisms shaping the landscape of biological states, e.g. due to metabolism or disease.
4.3 ABLATION STUDIES
To elucidate the success of the A-arm VAE framework in mixture modeling, we investigate the categorical assignment performance under different training settings. Since CascadeVAE does not learn the categorical factors by variational inference, here we mainly study JointVAE (as a 1-arm VAE) and cpl-mixVAE (as a 2-arm VAE). In Section 4.1, we show that data augmentation by itself does not enhance the categorical assignment (JointVAE†). To understand whether architectural differences put JointVAE at a disadvantage, we trained JointVAE‡ (Table S1), which uses the same architecture as the one used in cpl-mixVAE. JointVAE‡ uses the same learning procedure as JointVAE, but its convolutional layers are replaced by fully-connected layers (see Supplementary Sections J and L for details). The result for JointVAE‡ suggests that the superiority of cpl-mixVAE is not due to the network architecture either. We also examined the performance changes of the proposed 2-arm cpl-mixVAE under three different settings: (i) cpl-mixVAE∗, where coupled networks are not independent and network parameters are shared; (ii) cpl-mixVAEa, where only affine transformations are used for data augmentation; and (iii) cpl-mixVAE (s ⊥ c), where the state variable is independent of the discrete variable (Table S1). Our results show that the proposed cpl-mixVAE obtained the best categorical assignments across all training settings. We also examined the accuracy of categorical assignments for the cpl-mixVAE model, under different dimensions of the discrete latent variable, for both MNIST and scRNA-seq datasets (see Supplementary Section J). We experimentally observe that while JointVAE suffers from sensitivity to empirical choices of |c|, cpl-mixVAE is more robust in encoding the discrete variability, without suffering from mode collapse (Fig. S9 and Fig. S10).
5 CONCLUSION
We have proposed cpl-mixVAE as a multi-arm framework to apply the power of collective decision making in unsupervised joint representation learning of discrete and continuous factors, scalable to the high-dimensional discrete space. This framework utilizes multiple pairwise-coupled autoencoding arms with a shared categorical variable, while independently learning the continuous variables. Our experimental results for all datasets support the theoretical findings, and show that cpl-mixVAE outperforms comparable models. Importantly, for the challenging scRNA-seq datasets, we showed that the proposed framework identifies biologically interpretable cell types and differentiates between type-dependent and activity-regulated genes. | 1. What is the focus and contribution of the paper regarding latent variable models?
2. What are the strengths of the proposed approach, particularly in its application to RNA-seq datasets?
3. What are the weaknesses of the paper, especially regarding its motivations, descriptions, and experimentation?
4. Do you have any concerns about the model's applicability and generalizability to real-world datasets?
5. How do you assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Review | Summary Of The Paper
The paper proposes to learn a latent variable model with mixed discrete and continuous latent variables. Specifically, the framework utilizes multiple pairwise-coupled autoencoding arms to learn a shared categorical variable. Experiments on dSprites, MNIST, and RNA-seq datasets verify the effectiveness of the model.
Review
Post-rebuttal: The authors' feedback clarifies some of my concerns, but I'm still not so convinced by the experiments. The authors argue the model is not suitable for SVHN/CIFAR-10 due to their complexity, and this makes it unclear whether the model is applicable/generalizable to real-world datasets that are either large-scale or present extremely challenging backgrounds. Achieving SOTA accuracy is definitely not necessary, but I would assume the proposed model has competitive results; however, the preliminary results shown in the appendix seem quite weak. Therefore, I still tend to keep my original rating.
For the paper strengths:
-- Latent variable models with mixed representations are important and worth exploring.
-- Not familiar with the bio-literature, but the experiments and applications on RNA-data look interesting.
For weaknesses:
-- The motivations and descriptions of the model need to be improved. Some places are unclear and confusing. For example, the paper emphasizes that the work is fully unsupervised and proposes the type-preserving data augmentation. However, since each arm should receive samples that share the same underlying categorical factor, does this requirement impose some weak-supervision signal? I.e., we have to ensure that each arm observes samples from the same category, but in a fully unsupervised setting, we don't know that. Could the authors further elaborate on this point?
-- The experiments seem insufficient. (a) Since more arms potentially mean more encoder/decoder pairs (i.e., more parameters involved), it is better to also include a model complexity comparison with the baselines. (b) The scalability of the model is not clear since the experiments are only done on synthetic/gray-scale datasets; what about SVHN or CIFAR-10? For these slightly more complicated benchmarks, how does the model perform? How should the number of arms be selected? Is the model capable of dealing with large-scale datasets?
ICLR | Title
Mixture Representation Learning with Coupled Autoencoders
Abstract
Latent representations help unravel complex phenomena. While continuous latent variables can be efficiently inferred, fitting mixed discrete-continuous models remains challenging despite recent progress, especially when the discrete factor dimensionality is large. A pressing application for such mixture representations is the analysis of single-cell omic datasets to understand neuronal diversity and its molecular underpinnings. Here, we propose an unsupervised variational framework using multiple interacting networks called cpl-mixVAE that significantly outperforms state-of-the-art in high-dimensional discrete settings. cpl-mixVAE introduces a consensus constraint on discrete factors of variability across the networks, which regularizes the mixture representations at the time of training. We justify the use of this framework with theoretical results and validate it with experiments on benchmark datasets. We demonstrate that our approach discovers interpretable discrete and continuous variables describing neuronal identity in two single-cell RNA sequencing datasets, each profiling over a hundred cortical neuron types.
1 INTRODUCTION
Fitting mixed discrete-continuous models arises in many contexts. While continuous latent variables can be efficiently inferred with variational and adversarial formulations, inference of continuous and discrete factors in generalized mixture models remains challenging despite recent progress (Jang et al., 2016; Chen et al., 2016; Dupont, 2018; Jeong & Song, 2019). A pressing domain of application for such models is quantifying factors of biological variability in single-cell omic studies. The high-throughput and high-dimensional datasets produced by these studies document a previously unappreciated diversity of gene expression. In neuroscience, this poses the identification of cell types and cell states as a key research area to understand how neuronal circuits function (Bargmann et al., 2014), where the notions of cell types and states can be considered as biological interpretations of discrete and continuous variability. While marker gene based studies suggest the existence of more than 100 neuronal cell types in just a single brain region, there is no agreement on such categorization and interpretation of the remaining continuous variability (Seung & Sümbül, 2014; Zeng & Sanes, 2017; Tasic et al., 2018). Moreover, existing unsupervised, joint continuous-discrete learning methods are tailored for problems with relatively few and equally-abundant discrete components, and accurate inference remains out of reach for these applications.
Deep generative models have previously been applied to single-cell datasets, where the focus is on the cluster identity and it is typically inferred by post-hoc analysis of a continuous factor (Lopez et al., 2018). Deep Gaussian mixture models (Dilokthanakul et al., 2016; Johnson et al., 2016; Jiang et al., 2017) also focus on the identification of categories and do not take interpretability of the remaining continuous variability into account. To address the need for joint inference of interpretable discrete and continuous factors, various adversarial and variational methods have been proposed. While existing adversarial generative models, e.g. InfoGAN (Chen et al., 2016), are susceptible to stability issues (Higgins et al., 2017; Kim & Mnih, 2018), variational autoencoders (VAEs) (Kingma & Welling, 2013) emerge as efficient and more stable alternatives (Tschannen et al., 2018; Zhang et al., 2018; Dupont, 2018; Jeong & Song, 2019). VAE-based approaches approximate the mixture model by assuming a family of distributions qφ and select the member closest to the true model p. Popular choices in VAE implementations include (1) using KL divergence to compute discrepancy between qφ and p, and (2) using a multivariate Gaussian mixture distribution with uniformly distributed discrete and isotropic Gaussian distributed continuous priors. However, such choices may lead to
underestimating the posterior variance (Minka et al., 2005; Blei et al., 2017). Solutions to resolve this issue are mainly applicable in low-dimensional spaces or for continuous factors alone (Deasy et al., 2020; Kingma et al., 2016; Ranganath et al., 2016; Quiroz et al., 2018).
Inspired by collective decision making, we introduce a variational framework using multiple autoencoding arms to jointly infer interpretable finite discrete (categorical) and continuous factors in the presence of high-dimensional discrete space. Coupled-autoencoders have been previously studied in the context of multi-modal recordings, where each arm learns only a continuous latent representation for one of the data modalities (Feng et al., 2014; Gala et al., 2019; Lee & Pavlovic, 2020). Here, we develop a novel pairwise-coupled autoencoder framework for a single data modality. The proposed framework imposes a consensus constraint on the categorical posterior at the time of training and allows dependencies between continuous and categorical factors. We define the consensus constraint based on the Aitchison geometry in the probability simplex, which avoids the mode collapse problem. We show that the coupled multi-arm architecture enhances accuracy, robustness, and interpretability of the inferred factors without requiring any priors on the relative abundances of categories. Finally, on datasets profiling different cortical regions in the mammalian brain, we show that our method can be used to discover neuronal types as discrete categories and type-specific genes regulating the continuous within-type variability, such as metabolic state or disease state.
Related work. There is an extensive body of research on clustering in mixture models (Dilokthanakul et al., 2016; Jiang et al., 2017; Tian et al., 2017; Guo et al., 2016; Locatello et al., 2018b). The idea of improving the clustering performance through seeking a consensus and co-training and ensembling across multiple observations has been explored in both unsupervised (Monti et al., 2003; Kumar & Daumé, 2011) and semi-supervised contexts (Blum & Mitchell, 1998). However, these methods do not consider the underlying continuous variabilities across observations. Moreover, unlike ensemble methods, which pool the results of different trained workers, autoencoding arms seek a consensus at the time of learning in our framework.
The proposed framework does not need any supervision since the individual arms provide a form of prior or weak supervision for each other. In this regard, our paper is related to a body of work that attempts to improve representation learning by using semi-supervised or group-based settings (Bouchacourt et al., 2017; Hosoya, 2019; Nemeth, 2020). Bouchacourt et al. (2017) demonstrated a multi-level variational autoencoder (MLVAE) as a semi-supervised VAE by revealing that observations within groups share the same type. Hosoya (2019) and Nemeth (2020) attempted to improve MLVAE by imposing a weaker condition to the grouped data. In recent studies (Shu et al., 2019; Locatello et al., 2020), a weakly supervised variational setting has been proposed for disentangled representation learning by providing pairs of observations that share at least one underlying factor. These studies rely on learning latent variables in continuous spaces, and have been applied only to image datasets with low-dimensional latent representations.
Recent advances in structured variational methods, such as imposing a prior (Ranganath et al., 2016) or spatio-temporal dependencies (Quiroz et al., 2018) on the latent distribution parameters, allow for scaling to larger dimensions. However, these solutions are not directly applicable to the discrete space, which will be addressed in our A-arm VAE framework.
2 SINGLE MIXTURE VAE FRAMEWORK
For an observation x ∈ RD, a VAE learns a generative model pθ (x|z) and a variational distribution qφ (z|x), where z ∈ RM is a latent variable with a parameterized distribution p(z) and M D (Kingma & Welling, 2013). Disentangling different sources of variability into different dimensions of z enables an interpretable selection of latent factors (Higgins et al., 2017; Locatello et al., 2018a). However, the interplay between continuous and discrete variabilities present in many real-world datasets is often overlooked by existing methods. This problem can be addressed within the VAE framework in an unsupervised fashion by introducing a categorical latent variable c denoting the class label, alongside the continuous latent variable s. We refer to the continuous variable s as the state or style variable interchangeably. Assuming s and c are independent random variables, the evidence lower bound (ELBO) (Blei et al., 2017) for a single mixture VAE with the distributions parameterized by θ and φ is given by,
L(φ,θ) = Eqφ(s,c|x) [log pθ(x|s, c)] − DKL (qφ(s|x)‖p(s)) − DKL (qφ(c|x)‖p(c)) . (1) Maximizing ELBO in Eq. 1 imposes characteristics on q(s|x) and q(c|x) that can result in underestimation of posterior probabilities such as the mode collapse problem, where the network ignores a
subset of latent variables (Minka et al., 2005; Blei et al., 2017). Recently, VAE-based solutions were proposed by imposing a uniform structure on p(c): akin to β-VAE (Higgins et al., 2017; Burgess et al., 2018), JointVAE (Dupont, 2018) modifies the ELBO by assigning a pair of controlled information capacities for each variational factor, i.e. Cs ∈ R|s| and Cc ∈ R|c|. The main drawback of JointVAE is that its performance is tied to heuristic tuning of |s|× |c| capacities over training iterations so that it is vulnerable to mode collapse in high-dimensional settings. Another recent VAE-based mixture model solution, CascadeVAE (Jeong & Song, 2019), maximizes the ELBO through a semi-gradient-based algorithm by iterating over two separate optimizations for the continuous and categorical variables. While the separation of the optimization steps avoids the mode collapse problem, this separation is valid only when the categorical variable is uniformly distributed. Therefore, its performance strongly depends on the clusters having similar abundances in the dataset. Thus, earlier solutions fall short of learning interpretable mixture representations with high-dimensional discrete variables in real-world applications.
In addition to the issues discussed above, the performance and interpretability of those approaches are further limited by the common assumption that the continuous variable representing the style of the data is independent of the categorical variable. In practice, style often depends on the class label. For instance, even for the well-studied MNIST dataset, the histograms of common digit styles, e.g. “width”, markedly vary for different digits (Supplementary Section I). Moreover, further analysis of the identified continuous factor in the earlier approaches reveals that the independence assumption among q(s|x) and q(c|x) can be significantly violated (see Supplementary Sections H and I).
3 COUPLED MIXTURE VAE FRAMEWORK
The key intuition behind multi-arm networks is cooperation to improve posterior estimation. While the context is different, the popular phrase “wisdom of the crowd” (Surowiecki, 2005) can nevertheless be revealing: when a crowd (multiple arms) needs to make a decision, multiple estimates can increase the expected probability of a correct choice.
3.1 A-ARM VAE FRAMEWORK
We define the A-arm VAE as an A-tuple of independent and architecturally identical autoencoding arms, where the a-th arm parameterizes a mixture model distribution (Fig. 1a). In this framework, individual arms receive a collection of non-identical copies, {xa,xb, . . .} of the given sample, x, belonging to the same category. While each arm has its own mixture representation with potentially non-identical parameters, all arms cooperate to learn q(ca|xa), where ca = cb = · · · , via a cost function at the time of training. Accordingly, a crowd of VAEs with A arms can be formulated as a collection of constrained variational objectives as follows.
max Ls1|c1(φ1,θ1) + · · ·+ LsA|cA(φA,θA) s.t. c1 = · · · = cA
(2)
where Lsa|ca(φa,θa) is the variational loss for arm a,
Lsa|ca(φa,θa) = Eq(sa,ca|xa) [log p(xa|sa, ca)]− Eq(ca|xa) [DKL (q(sa|ca,xa)‖p(sa|ca))] − Eq(sa|ca,xa) [DKL (q(ca|xa)‖p(ca))] . (3)
In Eq. 3, the variational loss for each arm is defined according to the graphical model in Fig. 1b, which is built upon the traditional ELBO in Eq. 1 by conditioning the continuous state on the categorical variable (derivation in Supplementary Section B). Therefore, learning an interpretable decomposition of the data relies on accurate assignment (inference) of the categorical latent factor. Propositions 1 and 2 below show that the shared categorical assignment inferred from q(c|x1, · · · ,xA), under the
c = c1 = · · · = cA constraint of the multi-arm framework improves the accuracy of the categorical assignment on expectation. Proposition 1. Consider the problem of mixture representation learning in a multi-arm VAE framework. For independent samples from category m, i.e. xi ∼ p(x|m),
Eq(x|m) [log q(c = m|{xi}1:A)] > Eq(x|m) [log q(c = m|{xi}1:B)] s.t. c = c1 = · · · = cA s.t. c = c1 = · · · = cB (4)
if q(m|xi) < 1 and A > B ≥ 1 denote the number of arms. (Proof in Supplementary Section A)
Thus, having more arms increases the expected log posterior for the true categorical latent variable unless it is already at its maximum. Proposition 2. In the A-arm VAE framework, there exists an A that guarantees a true categorical assignment on expectation. That is,
m = arg max c
Eq(x|m) [log q(c|{xi}1:A)] , s.t. c = c1 = · · · = cA . (5)
(Proof in Supplementary Section A) Accordingly, the consensus constraint is sufficient to enhance inference for mixture representations in the A-arm VAE framework. Our theoretical results show that the required number of arms satisfying Eq. 5 is a function of the categorical distribution and the likelihood (Eq. 15, Supplementary Section A). In the particular case of uniformly distributed categories, one pair of coupled arms is enough to satisfy Eq. 5 (see Corollary 1, Supplementary Section A).
We emphasize that the proposed framework does not require any weak supervision as in (Bouchacourt et al., 2017). Instead, it relies on representations that are invariant under non-identical copies of observations. Moreover, unlike (Bouchacourt et al., 2017; Shu et al., 2019; Locatello et al., 2020), the multi-arm framework is not restricted to the continuous space.
Arms observe non-identical copies of samples. In the A-arm VAE framework, arms receive non-identical observations that share the discrete variational factor. To achieve this in a fully unsupervised setting, we use type-preserving data augmentation that generates independent and identically distributed copies of data while preserving its categorical identity. For image datasets, conventional transformations such as rotation, scaling, or translation can serve as type-preserving augmentations. However, for non-image datasets, e.g. single-cell data, we seek a generative model that learns transformations representing within-class variability in an unsupervised manner. To this end, inspired by DAGAN (Antoniou et al., 2017) and VAE-GAN (Larsen et al., 2016), we develop a generative model to provide collections of observations for our multi-arm framework (Supplementary Section F). The proposed generative model learns to generate augmented samples in the vicinity of given samples in the latent space, without knowing their types (Eq. 70). In Supplementary Section A, Remark 2, we further discuss an under-exploration scenario in data augmentation, in which the augmented samples are not independently distributed and are concentrated around the given sample.
3.2 CPL-MIXVAE: PAIRWISE COUPLING IN A-ARM VAE
In the A-arm VAE framework, the mixture representation is obtained through the optimization in Eq. 2. Not only is it challenging to solve the maximization in Eq. 2 due to the equality constraint, but the objective remains a function of p(c) which is unknown, and typically non-uniform. To overcome this, we use an equivalent formulation for Eq. 2 by applying the pairwise coupling paradigm as follows (details of derivation in Supplementary Section C):
max
A∑ a=1 (A− 1) ( Eq(sa,ca|xa) [log p(xa|sa, ca)]− Eq(ca|xa) [DKL (q(sa|ca,xa)‖p(sa|ca))] ) −∑
a<b
Eq(sa|ca,xa)Eq(sb|cb,xb) [DKL (q(ca|xa)q(cb|xb)‖p(ca, cb))]
s.t. ca = cb ∀a, b ∈ [1, A], a < b (6) We relax the optimization in Eq. 6 into an unconstrained problem by marginalizing the joint distribution over a mismatch measure between categorical variables (see Supplementary Section D):
max A∑ a=1 (A− 1) ( Eq(sa,ca|xa) [log p(xa|sa, ca)]− Eq(ca|xa) [DKL (q(sa|ca,xa)‖p(sa|ca))] ) +∑
a<b
H(ca|xa) +H(cb|xb)− λEq(ca,cb|xa,xb) [ d2(ca, cb) ] (7)
In Eq. 7, in addition to entropy-based confidence penalties known as mode collapse regularizers (Pereyra et al., 2017), the distance measure d(ca, cb) encourages a consensus on the categorical assignment controlled by λ ≥ 0, the coupling hyperparameter. We refer to the model in Eq. 7 as cpl-mixVAE (Fig. 1a). In cpl-mixVAE, VAE arms try to achieve identical categorical assignments while independently learning their own style variables. In experiments, we set λ = 1 universally. While the bottleneck architecture already encourages interpretable continuous variables, this formulation can be easily extended to include an additional hyperparameter to promote disentanglement of continuous variables as in β-VAE (Higgins et al., 2017). Additional analyses to assess the sensitivity of the cpl-mixVAE’s performance to its coupling factor can be found in the Supplementary Section G.
It may be instructive to cast Eq. 7 in an equivalent constrained optimization form.
Remark 1. The A-arm VAE framework is a collection of constrained variational models as follows:
max A∑ a=1 Eq(sa,ca|xa) [log p(xa|sa, ca)]− Eq(ca|xa) [DKL (q(sa|ca,xa)‖p(sa|ca))] +H(ca|xa)
s.t. Eq(ca|xa) [ d2(ca, cb) ] < (8)
where denotes the strength of the consensus constraint. Here, cb indicates the assigned category by any one of the arms, b ∈ {1, . . . , A}, imposing structure on the discrete variable to approximate its prior distribution.
Distance between categorical variables. d(ca, cb) denotes the distance between a pair of |c|dimensional un-ordered categorical variables, which are associated with probability vectors with non-negative entries and sum-to-one constraint that form a K-dimensional simplex, where K = |c|. In the real space, a typical choice to compute the distance between two vectors is using Euclidean geometry. However, this geometry is not suitable for probability vectors. Here, we utilize Aitchison geometry (Aitchison, 1982; Egozcue et al., 2003), which defines a vector space on the simplex. Accordingly, the distance in the simplex, i.e. dSK (ca, cb) is defined as dSK (ca, cb) = ‖clr(ca)− clr(cb)‖2, ∀ca, cb ∈ SK , where clr(·) denotes the isometric centered-log-ratio transformation in the simplex. This categorical distance satisfies the conditions of a mathematical metric according to Aitchison geometry.
3.3 SEEKING CONSENSUS IN THE SIMPLEX
An instance of the mode collapse problem (Lucas et al., 2019) manifests itself in the minimization of dSK (ca, cb) (Eq. 7): its trivial local optima encourages the network to abuse the discrete latent factor by ignoring many of the available categories. In the extreme case, the representations can collapse onto a single category; ca = cb = c0. In this scenario, the continuous variable is compelled to act as a primary latent factor, while the model fails to deliver an interpretable mixture representation despite achieving an overall low loss value. To avoid such undesirable local equilibria while training, we add perturbations to the categorical representation of each arm. If posterior probabilities in the simplex have small dispersion, the perturbed distance calculation overstates the discrepancies. Thus, instead of minimizing d2SK (ca, cb), we minimize a perturbed distance
d2σ(ca, cb) = ∑ k ( σ−1ak log cak − σ −1 bk log cbk )2
, which corresponds to the distance between additively perturbed ca and cb vectors in Aitchison geometry. Here, σ2ak and σ 2 bk
indicate the mini-batch variances of the k-th category, for arms a and b. We next show that the perturbed distance dσ(·) is bounded by dSK (·) and non-negative values ρu, ρl: Proposition 3. Suppose ca, cb ∈ SK , where SK is a simplex ofK > 0 parts. If dSK (ca, cb) denotes the distance in Aitchison geometry and d2σ(ca, cb) = ∑ k ( σ−1ak log cak − σ −1 bk log cbk )2
denotes a perturbed distance, then
d2SK (ca, cb)− ρl ≤ d 2 σ (ca, cb) ≤ d2SK (ca, cb) + ρu
where ρu, ρl ≥ 0, ρu = K ( τ2σu + τ 2 c ) + 2∆στc, ρl = ∆2σ K −Kτ2σl , τc = maxk {log cak − log cbk}, τσu = max k {gk}, τσl = max k {−gk}, ∆σ = ∑ k gk, and gk = (σ−1ak − 1) log cak − (σ −1 bk − 1) log cbk . (Proof in Supplementary Section E)
Thus, when ca and cb are similar and their spread is not small, dσ(ca, cb) closely approximates dSK (ca, cb). Otherwise, it diverges from dSK (·) to avoid mode collapse.
4 EXPERIMENTS
We used four datasets: dSprites, MNIST, and two single-cell RNA sequencing (scRNA-seq) datasets; Smart-seq ALM-VISp (Tasic et al., 2018) and 10X MOp (Yao et al., 2021). Although dSprites and MNIST datasets do not require high-dimensional settings for mixture representation, to facilitate comparisons of cpl-mixVAE with earlier methods, first we report the results for these benchmark datasets. We trained three unsupervised VAE-based methods for mixture modeling: JointVAE (Dupont, 2018), CascadeVAE (Jeong & Song, 2019), and ours (cpl-mixVAE). For MNIST, we additionally trained the popular InfoGAN (Chen et al., 2016) as the most comparable GAN-based model. To show the interpretability of the mixture representations, (i) for the discrete latent factor, we report the accuracy (ACC) of categorical assignments and the DKL(q(c)‖p(c))), (ii) for the continuous variable, we perform latent traversal analysis by fixing the discrete factor and changing the continuous variable according to p(s|c,x). We calculated the accuracy by using minimum weight matching (Kuhn, 1955) to match the categorical variables obtained by cpl-mixVAE with the available cluster labels for each dataset. Additionally, we report the computational efficiency (number of iterations per second) to compare the training complexity of the multi-arm framework against earlier methods (Table 1). All reported numbers for cpl-mixVAE models are average accuracies calculated across arms. In VAE-based models, to sample from q(ca|xa), we use the Gumbel-softmax distribution (Jang et al., 2016; Maddison et al., 2014). In cpl-mixVAE, each arm received an augmented copy of the original input generated by the deep generative augmenter (Supplementary Section F) during training. Details of the network architectures and training settings can be found in Supplementary Section L.
4.1 BENCHMARK DATASETS
dSprites. dSprites is procedurally generated from discrete (3 shapes) and continuous (6 style factors: scale, rotation, and position) latent factors. Based on the uniform distribution of classes, we used a 2-arm cpl-mixVAE with |c| = 3 and |s| = 6 to learn interpretable representations. Results in Table 1 show that our method outperforms the other methods in terms of categorical assignment accuracy. In addition to demonstrating the traversal results (Fig. 2, bottom row), we report disentanglement scores (DS in Table 1). Even though the continuous factors do not depend on the discrete factors in this synthetic dataset, we did not change the architecture and expected the network to infer this independence. For a fair comparison, we used the same disentanglement metric implemented for CascadeVAE (Jeong & Song, 2019).
MNIST. Similarly, due to the uniform distribution of digit labels in MNIST, we again used a 2-arm cpl-mixVAE model. Following the convention (Dupont, 2018; Jeong & Song, 2019; Bouchacourt
et al., 2017), each arm of cpl-mixVAE uses a 10-dimensional categorical variable representing digits (type), and a 10-dimensional continuous random variable representing the writing style (state). Table 1 displays the accuracy of the categorical assignment and the discrepancy between q(c) and p(c) for InfoGAN, two 1-arm VAE methods (JointVAE and CascadeVAE), and cpl-mixVAE with 2 arms. Additionally, to isolate the impact of data augmentation in training, we trained JointVAE† and CascadeVAE† where the models were trained with the same augmented copies of the original MNIST dataset as cpl-mixVAE. The results in Table 1 suggest that data augmentation by itself does not enhance the performance. Fig. 2 (top row) illustrates the continuous latent traversals for four dimensions of the state variable inferred by cpl-mixVAE, where each row corresponds to a different dimension of the categorical variable, and the state variable monotonically changes across columns. Both results in Table 1 and Fig. 2 show that cpl-mixVAE achieved an interpretable mixture representation with the highest categorical assignment accuracy.
Summary. cpl-mixVAE improves the discrete density approximation and infers better mixture representations. It outperforms earlier methods, without using extraneous optimization or heuristic channel capacities. Beyond performance and robustness, its computational cost is also comparable to that of the baselines.
4.2 SINGLE-CELL RNA SEQUENCING DATA
In this dataset the observations are individual cells and each observation consists of expressions of thousands of genes. Here, we used two scRNA-seq datasets: (i) Smart-seq ALM-VISp (Tasic et al., 2018) and (ii) 10X MOp (Yao et al., 2021). The Smart-seq dataset includes transcriptomic profiles of more than 10, 000 genes for ∼ 22, 000 cells from the mouse anterior lateral motor cortex (ALM) and the primary visual cortex (VIPs). The 10X MOp dataset profiles ∼ 123000 cells in the mouse primary motor cortex (MOp) with the droplet-based 10X Genomics Chromium platform. 10X-based data often display more gene dropouts, especially for genes with lower expression levels. scRNA-seq datasets are significantly more complex than typical benchmark datasets due to (i) large number of cell types (discrete variable), and (ii) class imbalance; in the 10X MOp dataset, for instance, the mostand the least-abundant cell types include 17, 000 and 20 samples, respectively. Moreover, whether the observed diversity corresponds to discrete variability or a continuum is an ongoing debate in neuroscience (Scala et al., 2020). While using genes that are differentially expressed in subsets of cells, known as marker genes (MGs) (Trapnell, 2015) is a common approach to define cell types, the identified genes rarely obey the idealized MG definition in practice. Here, we focus on neuronal cells and use a subset of 5, 000 highest variance genes. The original MG-based studies for each dataset suggested 115 (Smart-seq data) (Tasic et al., 2018) and 140 (10X-based data) (Yao et al., 2021) discrete neuronal types.
Neuron type identification. Based on the suggested taxonomies in (Tasic et al., 2018; Yao et al., 2021), for the Smart-seq ALM-VISp data, we used 115- and 2-dimensional discrete and continuous variables, and for the 10X MOp data, we used 140- and 2-dimensional discrete and continuous latent variables. We compared the suggested cell types in (Tasic et al., 2018; Yao et al., 2021) with the discrete representations that are inferred from VAE models. Table 1 and Fig. 3(a-b) demonstrate the
performance of a 2-arm cpl-mixVAE model against JointVAE and CascadeVAE. In Fig. 3a.1 and 3b.1, we observe that for both datasets, JointVAE succeeds in identifying (sub)classes of neurons, e.g. excitatory class, or Sst subclass, but not neuronal types at leaf nodes. On the other hand, CascadeVAE learns an almost uniform distribution over all types despite a sizeable difference between the relative abundances of neuronal types (Fig. 3a.2 and 3b.2). Our results in Fig. 3(a-b) clearly show that cplmixVAE outperforms JointVAE and CascadeVAE in identifying meaningful known cell types. The confusion matrices in Fig. 3a.3 and 3b.3 demonstrate that even the inaccurate categorical assignments of cpl-mixVAE are still close to the matrix diagonals, suggesting a small cophenetic distance. That is, those cells are still assigned to nearby cell types in the dendrogram.
Using A > 2. Unlike the discussed benchmark datasets, the neuronal types are not uniformly distributed. Accordingly, we also investigated the accuracy improvement for categorical assignment when more than two arms are used. Fig. 3c illustrates the accuracy improvement with respect to a single autoencoder model, i.e. JointVAE, in agreement with our theoretical findings.
Identifying genes regulating cell activity. To examine the role of the continuous latent variable, we applied a similar traversal analysis to that used for the benchmark datasets. For a given cell sample and its discrete type, we changed each dimension of the continuous variable using the conditional distribution, and inspected gene expression changes caused by continuous variable alterations. Fig. 4 shows the results of the continuous traversal study for JointVAE and cpl-mixVAE, for two excitatory neurons belonging to the “L5 NP” (cell type (I)) and “L6 CT” (cell type (II)) sub-classes in ALM and MOp regions. Note that here, JointVAE is equivalent to a 1-arm VAE, with the exception of the type dependence of the state variable. Since CascadeVAE did not learn meaningful clustering of cells, even at the subclass level, we did not consider it for the continuous factor analysis. In each sub-figure, the latent traversal is color-mapped to normalized reconstructed expression values, where the y-axis corresponds to one dimension of the continuous variable, and the x-axis corresponds to three gene subsets, namely (i) MGs for the two excitatory types, (ii) immediate early genes (IEGs), and (iii) housekeeping gene (HKG) subgroups (Hrvatin et al., 2018; Tarasenko et al., 2017). For cpl-mixVAE (Fig. 4b), the normalized expression of the reported MGs as indicators for excitatory cell types (discrete factors) is unaffected by changes of identified continuous variables. In contrast, for JointVAE (Fig. 4a), we observed that the normalized expression of some MGs (5 out of 10) are changed due to the continuous factor traversal. Additionally, we found that the expression changes inferred by cpl-mixVAE for IEGs and HKGs are essentially monotonically linked to the continuous variable, confirming that the expression of IEGs and HKGs depends strongly on the cell activity variations under different metabolic and environmental conditions. Conversely, JointVAE
fails to reveal such activity-regulated monotonicity for IEGs and HKGs. Furthermore, our results for cpl-mixVAE reveal that the expression of activity-regulated genes depends on the cell type, i.e. IEGs and HKGs respond differently to activation depending on their cell types (compare rows I and II in Fig. 4b). However, in Fig. 4a, since the baseline JointVAE does not take into account the dependency of discrete and continuous factors, it fails to reveal the dependence of activity-regulated expression to the cell type, and therefore produces identical expressions for both types (I) and (II). These findings are consistent over multiple randomly initialized runs (Supplementary Section K.1). See Supplementary Section K.2 for more results on other cell types and gene subsets.
Summary. The cpl-mixVAE model successfully identified the majority of known excitatory and inhibitory neurons in multiple cortical regions. Our findings suggest that cpl-mixVAE, by acknowledging the dependencies of continuous and categorical factors, captures relevant and interpretable continuous variability that can provide insight when deciphering the molecular mechanisms shaping the landscape of biological states, e.g. due to metabolism or disease.
4.3 ABLATION STUDIES
To elucidate the success of the A-arm VAE framework in mixture modeling, we investigate the categorical assignment performance under different training settings. Since CascadeVAE does not learn the categorical factors by variational inference, here we mainly study JointVAE (as a 1-arm VAE) and cpl-mixVAE (as a 2-arm VAE). In Section 4.1, we show that data augmentation by itself does not enhance the categorical assignment (JointVAE†). To understand whether architectural differences put JointVAE at a disadvantage, we trained JointVAE‡ (Table S1), which uses the same architecture as the one used in cpl-mixVAE. JointVAE‡ uses the same learning procedure as JointVAE, but its convolutional layers are replaced by fully-connected layers (see Supplementary Section J and L for details). The result for JointVAE‡ suggests that the superiority of cpl-mixVAE is not due to the network architecture either. We also examined the performance changes of the proposed 2-arm cpl-mixVAE under three different settings: (i) cpl-mixVAE∗, where coupled networks are not independent and network parameters are shared; (ii) cpl-mixVAEa, where only affine transformations are used for data augmentation; and (iii) cpl-mixVAE(s 6 | c), where the state variable is independent of the discrete variable (Table S1). Our results show that the proposed cpl-mixVAE obtained the best categorical assignments across all training settings. We also examined the accuracy of categorical assignments for the cpl-mixVAE model, under different dimensions of discrete latent variable, for both MNIST and scRNA-seq datasets (see Supplementary Section J). We experimentally observe that while JointVAE suffers from sensitivity to empirical choices of |c|, cpl-mixVAE is more robust in encoding the discrete variability, without suffering from mode collapse (Fig. S9 and Fig. S10).
5 CONCLUSION
We have proposed cpl-mixVAE as a multi-arm framework to apply the power of collective decision making in unsupervised joint representation learning of discrete and continuous factors, scalable to the high-dimensional discrete space. This framework utilizes multiple pairwise-coupled autoencoding arms with a shared categorical variable, while independently learning the continuous variables. Our experimental results for all datasets support the theoretical findings, and show that cpl-mixVAE outperforms comparable models. Importantly, for the challenging scRNA-seq datasets, we showed that the proposed framework identifies biologically interpretable cell types and differentiates between type-dependent and activity-regulated genes. | 1. What is the main contribution of the paper, and how does it improve clustering models?
2. How does the proposed approach differ from supervised methods, and what are the benefits of using this approach over full supervised approaches?
3. How can the method be used in an unsupervised fashion, and what are the implications for model selection?
4. What are the limitations of the theoretical analysis, particularly regarding the requirement of knowing the true number of clusters and the interpretation of the expected log posterior?
5. How does the proposed distance measure in (6) relate to the Aitchison distance used later, and what is its purpose in the context of the paper?
6. Can you provide additional explanation and justification for the ablation study, specifically regarding the impact of the novel distance? | Summary Of The Paper
Review | Summary Of The Paper
This manuscript proposes cpl-mixVAE, which is a novel VAE formulation that attempts to improve clustering models by adapting ideas from consensus clustering. The paper proposes a new model structure and a novel distance to help encourage improved model representations.
Review
Update: I have adjusted my score upwards. The authors have addressed some of my concerns. I still believe that the interplay between the VAE-GAN and the clustering method is not explored well enough, and that this complex interplay can drastically impact the results of the approach.
I have issues with the experiments, justifications, and theory in this manuscript.
First, my major issue with this manuscript is that the proposed approach is not truly an unsupervised method, as is claimed. It uses some supervised information in order to make its decisions. To see this, consider the usage of “type-preserving data augmentation that generates independent and identically distributed copies of data while preserving its categorical identity.” The generation process maintains the label; hence, you are telling the neural network that 2 samples from the same class belong in the same cluster. Thus, because the data augmentation system is using full data distributions, despite not explicitly using the label, I would argue strongly that supervised information is leaking into your system. The reason that this data augmentation does not help JointVAE† like it helps cpl-mixVAE is that in the JointVAE the system is not told that these should exactly match, so the supervised information does not leak into the JointVAE system.
As such, if you are going to use existing clusters, I need to understand the benefits of using this approach rather than a fully supervised approach before any consideration of acceptance. This is especially true as the quantitative metrics are all supervised evaluation criteria. Since your method is getting some supervised information, it is unsurprising that it does better. At the same time, it does much worse than basic supervised methods, including logistic regression on MNIST.
If you are going to push this as an unsupervised method, you should explain how it could be used in an unsupervised fashion. For example, how well does it represent the data in an unsupervised fashion. Can you use it for model selection?
Theory issues:
Both Proposition 1 and Proposition 2 require that you know the true m to estimate m. This is not unsupervised theory, and it is misleading to call it that. Second, the interpretation that the expected log posterior increases with the number of arms is misleading. This method artificially inflates the posterior and isn't adding information to the system: if you are given a new sample where you don't know the cluster, it does not improve the correct assignment rate in the Bayes-optimal case. This theoretical section needs an increased discussion about these issues.
Minor complaints:
It is confusing to introduce c and s as independent in (1) and then immediately couple them in (3) without discussion. Please refine this transition.
The expectation in the last term of (3) is taken over the wrong distribution.
It is unclear how the distance between the distributions on c in (6) is related to the Aitchison distance used later. Please provide the relationship.
Additional explanation of 3.3 is necessary. It seems like the primary benefit is to push it away from trivial solutions, and the authors should expand on how they do that.
In the ablation study, I would like to understand the impact of the novel distance better. |
ICLR | Title
Mixture Representation Learning with Coupled Autoencoders
Abstract
Latent representations help unravel complex phenomena. While continuous latent variables can be efficiently inferred, fitting mixed discrete-continuous models remains challenging despite recent progress, especially when the discrete factor dimensionality is large. A pressing application for such mixture representations is the analysis of single-cell omic datasets to understand neuronal diversity and its molecular underpinnings. Here, we propose an unsupervised variational framework using multiple interacting networks called cpl-mixVAE that significantly outperforms state-of-the-art in high-dimensional discrete settings. cpl-mixVAE introduces a consensus constraint on discrete factors of variability across the networks, which regularizes the mixture representations at the time of training. We justify the use of this framework with theoretical results and validate it with experiments on benchmark datasets. We demonstrate that our approach discovers interpretable discrete and continuous variables describing neuronal identity in two single-cell RNA sequencing datasets, each profiling over a hundred cortical neuron types.
1 INTRODUCTION
Fitting mixed discrete-continuous models arises in many contexts. While continuous latent variables can be efficiently inferred with variational and adversarial formulations, inference of continuous and discrete factors in generalized mixture models remains challenging despite recent progress (Jang et al., 2016; Chen et al., 2016; Dupont, 2018; Jeong & Song, 2019). A pressing domain of application for such models is quantifying factors of biological variability in single-cell omic studies. The high-throughput and high-dimensional datasets produced by these studies document a previously unappreciated diversity of gene expression. In neuroscience, this poses the identification of cell types and cell states as a key research area to understand how neuronal circuits function (Bargmann et al., 2014), where the notions of cell types and states can be considered as biological interpretations of discrete and continuous variability. While marker gene based studies suggest the existence of more than 100 neuronal cell types in just a single brain region, there is no agreement on such categorization and interpretation of the remaining continuous variability (Seung & Sümbül, 2014; Zeng & Sanes, 2017; Tasic et al., 2018). Moreover, existing unsupervised, joint continuous-discrete learning methods are tailored for problems with relatively few and equally-abundant discrete components, and accurate inference remains out of reach for these applications.
Deep generative models have previously been applied to single-cell datasets, where the focus is on the cluster identity and it is typically inferred by post-hoc analysis of a continuous factor (Lopez et al., 2018). Deep Gaussian mixture models (Dilokthanakul et al., 2016; Johnson et al., 2016; Jiang et al., 2017) also focus on the identification of categories and do not take interpretability of the remaining continuous variability into account. To address the need for joint inference of interpretable discrete and continuous factors, various adversarial and variational methods have been proposed. While existing adversarial generative models, e.g. InfoGAN (Chen et al., 2016), are susceptible to stability issues (Higgins et al., 2017; Kim & Mnih, 2018), variational autoencoders (VAEs) (Kingma & Welling, 2013) emerge as efficient and more stable alternatives (Tschannen et al., 2018; Zhang et al., 2018; Dupont, 2018; Jeong & Song, 2019). VAE-based approaches approximate the mixture model by assuming a family of distributions qφ and select the member closest to the true model p. Popular choices in VAE implementations include (1) using KL divergence to compute discrepancy between qφ and p, and (2) using a multivariate Gaussian mixture distribution with uniformly distributed discrete and isotropic Gaussian distributed continuous priors. However, such choices may lead to
underestimating the posterior variance (Minka et al., 2005; Blei et al., 2017). Solutions to resolve this issue are mainly applicable in low-dimensional spaces or for continuous factors alone (Deasy et al., 2020; Kingma et al., 2016; Ranganath et al., 2016; Quiroz et al., 2018).
Inspired by collective decision making, we introduce a variational framework using multiple autoencoding arms to jointly infer interpretable finite discrete (categorical) and continuous factors in the presence of high-dimensional discrete space. Coupled-autoencoders have been previously studied in the context of multi-modal recordings, where each arm learns only a continuous latent representation for one of the data modalities (Feng et al., 2014; Gala et al., 2019; Lee & Pavlovic, 2020). Here, we develop a novel pairwise-coupled autoencoder framework for a single data modality. The proposed framework imposes a consensus constraint on the categorical posterior at the time of training and allows dependencies between continuous and categorical factors. We define the consensus constraint based on the Aitchison geometry in the probability simplex, which avoids the mode collapse problem. We show that the coupled multi-arm architecture enhances accuracy, robustness, and interpretability of the inferred factors without requiring any priors on the relative abundances of categories. Finally, on datasets profiling different cortical regions in the mammalian brain, we show that our method can be used to discover neuronal types as discrete categories and type-specific genes regulating the continuous within-type variability, such as metabolic state or disease state.
Related work. There is an extensive body of research on clustering in mixture models (Dilokthanakul et al., 2016; Jiang et al., 2017; Tian et al., 2017; Guo et al., 2016; Locatello et al., 2018b). The idea of improving the clustering performance through seeking a consensus and co-training and ensembling across multiple observations has been explored in both unsupervised (Monti et al., 2003; Kumar & Daumé, 2011) and semi-supervised contexts (Blum & Mitchell, 1998). However, these methods do not consider the underlying continuous variabilities across observations. Moreover, unlike ensemble methods, which pool the results of different trained workers, autoencoding arms seek a consensus at the time of learning in our framework.
The proposed framework does not need any supervision since the individual arms provide a form of prior or weak supervision for each other. In this regard, our paper is related to a body of work that attempts to improve representation learning by using semi-supervised or group-based settings (Bouchacourt et al., 2017; Hosoya, 2019; Nemeth, 2020). Bouchacourt et al. (2017) demonstrated a multi-level variational autoencoder (MLVAE) as a semi-supervised VAE by revealing that observations within groups share the same type. Hosoya (2019) and Nemeth (2020) attempted to improve MLVAE by imposing a weaker condition to the grouped data. In recent studies (Shu et al., 2019; Locatello et al., 2020), a weakly supervised variational setting has been proposed for disentangled representation learning by providing pairs of observations that share at least one underlying factor. These studies rely on learning latent variables in continuous spaces, and have been applied only to image datasets with low-dimensional latent representations.
Recent advances in structured variational methods, such as imposing a prior (Ranganath et al., 2016) or spatio-temporal dependencies (Quiroz et al., 2018) on the latent distribution parameters, allow for scaling to larger dimensions. However, these solutions are not directly applicable to the discrete space, which will be addressed in our A-arm VAE framework.
2 SINGLE MIXTURE VAE FRAMEWORK
For an observation x ∈ RD, a VAE learns a generative model pθ(x|z) and a variational distribution qφ(z|x), where z ∈ RM is a latent variable with a parameterized distribution p(z) and M ≪ D (Kingma & Welling, 2013). Disentangling different sources of variability into different dimensions of z enables an interpretable selection of latent factors (Higgins et al., 2017; Locatello et al., 2018a). However, the interplay between continuous and discrete variabilities present in many real-world datasets is often overlooked by existing methods. This problem can be addressed within the VAE framework in an unsupervised fashion by introducing a categorical latent variable c denoting the class label, alongside the continuous latent variable s. We refer to the continuous variable s as the state or style variable interchangeably. Assuming s and c are independent random variables, the evidence lower bound (ELBO) (Blei et al., 2017) for a single mixture VAE with the distributions parameterized by θ and φ is given by,
L(φ, θ) = E_{qφ(s,c|x)}[log pθ(x|s, c)] − DKL(qφ(s|x) ‖ p(s)) − DKL(qφ(c|x) ‖ p(c)).    (1)
Maximizing the ELBO in Eq. 1 imposes characteristics on q(s|x) and q(c|x) that can result in underestimation of posterior probabilities such as the mode collapse problem, where the network ignores a
subset of latent variables (Minka et al., 2005; Blei et al., 2017). Recently, VAE-based solutions were proposed by imposing a uniform structure on p(c): akin to β-VAE (Higgins et al., 2017; Burgess et al., 2018), JointVAE (Dupont, 2018) modifies the ELBO by assigning a pair of controlled information capacities for each variational factor, i.e. Cs ∈ R|s| and Cc ∈ R|c|. The main drawback of JointVAE is that its performance is tied to heuristic tuning of |s|× |c| capacities over training iterations so that it is vulnerable to mode collapse in high-dimensional settings. Another recent VAE-based mixture model solution, CascadeVAE (Jeong & Song, 2019), maximizes the ELBO through a semi-gradient-based algorithm by iterating over two separate optimizations for the continuous and categorical variables. While the separation of the optimization steps avoids the mode collapse problem, this separation is valid only when the categorical variable is uniformly distributed. Therefore, its performance strongly depends on the clusters having similar abundances in the dataset. Thus, earlier solutions fall short of learning interpretable mixture representations with high-dimensional discrete variables in real-world applications.
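To make the single-mixture-VAE objective in Eq. 1 concrete, the sketch below assembles its three terms for a Gaussian continuous latent s and a softmax-parameterized categorical posterior with a uniform prior p(c). This is only an illustrative reading of Eq. 1 under assumed conventions (Gaussian decoder, closed-form KL terms, PyTorch-style inputs), not the implementation of any of the methods discussed here.

```python
import math
import torch
import torch.nn.functional as F

def mixture_vae_neg_elbo(x, x_recon, s_mu, s_logvar, c_logits):
    """Negative ELBO of Eq. 1 (illustrative sketch, not the authors' code).

    Assumes q(s|x) = N(s_mu, diag(exp(s_logvar))), a standard normal prior p(s),
    a uniform categorical prior p(c), and a fixed-variance Gaussian decoder so
    that -E_q[log p(x|s,c)] reduces to a squared reconstruction error.
    """
    recon = F.mse_loss(x_recon, x, reduction="sum")

    # Closed-form D_KL(q(s|x) || N(0, I)).
    kl_s = -0.5 * torch.sum(1 + s_logvar - s_mu.pow(2) - s_logvar.exp())

    # D_KL(q(c|x) || Uniform(K)) for a softmax posterior over K categories.
    q_c = F.softmax(c_logits, dim=-1)
    log_q_c = F.log_softmax(c_logits, dim=-1)
    K = c_logits.shape[-1]
    kl_c = torch.sum(q_c * (log_q_c - math.log(1.0 / K)))

    return recon + kl_s + kl_c
```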
In addition to the issues discussed above, the performance and interpretability of those approaches are further limited by the common assumption that the continuous variable representing the style of the data is independent of the categorical variable. In practice, style often depends on the class label. For instance, even for the well-studied MNIST dataset, the histograms of common digit styles, e.g. “width”, markedly vary for different digits (Supplementary Section I). Moreover, further analysis of the identified continuous factor in the earlier approaches reveals that the independence assumption among q(s|x) and q(c|x) can be significantly violated (see Supplementary Sections H and I).
3 COUPLED MIXTURE VAE FRAMEWORK
The key intuition behind multi-arm networks is cooperation to improve posterior estimation. While the context is different, the popular phrase “wisdom of the crowd” (Surowiecki, 2005) can nevertheless be revealing: when a crowd (multiple arms) needs to make a decision, multiple estimates can increase the expected probability of a correct choice.
3.1 A-ARM VAE FRAMEWORK
We define the A-arm VAE as an A-tuple of independent and architecturally identical autoencoding arms, where the a-th arm parameterizes a mixture model distribution (Fig. 1a). In this framework, individual arms receive a collection of non-identical copies, {xa,xb, . . .} of the given sample, x, belonging to the same category. While each arm has its own mixture representation with potentially non-identical parameters, all arms cooperate to learn q(ca|xa), where ca = cb = · · · , via a cost function at the time of training. Accordingly, a crowd of VAEs with A arms can be formulated as a collection of constrained variational objectives as follows.
max  L_{s1|c1}(φ1, θ1) + · · · + L_{sA|cA}(φA, θA)   s.t.   c1 = · · · = cA    (2)
where L_{sa|ca}(φa, θa) is the variational loss for arm a,
L_{sa|ca}(φa, θa) = E_{q(sa,ca|xa)}[log p(xa|sa, ca)] − E_{q(ca|xa)}[DKL(q(sa|ca,xa) ‖ p(sa|ca))] − E_{q(sa|ca,xa)}[DKL(q(ca|xa) ‖ p(ca))].    (3)
In Eq. 3, the variational loss for each arm is defined according to the graphical model in Fig. 1b, which is built upon the traditional ELBO in Eq. 1 by conditioning the continuous state on the categorical variable (derivation in Supplementary Section B). Therefore, learning an interpretable decomposition of the data relies on accurate assignment (inference) of the categorical latent factor. Propositions 1 and 2 below show that the shared categorical assignment inferred from q(c|x1, · · · ,xA), under the
c = c1 = · · · = cA constraint of the multi-arm framework improves the accuracy of the categorical assignment on expectation. Proposition 1. Consider the problem of mixture representation learning in a multi-arm VAE framework. For independent samples from category m, i.e. xi ∼ p(x|m),
E_{q(x|m)}[log q(c = m | {xi}_{1:A})] > E_{q(x|m)}[log q(c = m | {xi}_{1:B})],    (4)
where the left-hand side is taken subject to c = c1 = · · · = cA and the right-hand side subject to c = c1 = · · · = cB,
if q(m|xi) < 1 and A > B ≥ 1 denote the number of arms. (Proof in Supplementary Section A)
Thus, having more arms increases the expected log posterior for the true categorical latent variable unless it is already at its maximum. Proposition 2. In the A-arm VAE framework, there exists an A that guarantees a true categorical assignment on expectation. That is,
m = arg max_c E_{q(x|m)}[log q(c | {xi}_{1:A})],   s.t.   c = c1 = · · · = cA.    (5)
(Proof in Supplementary Section A) Accordingly, the consensus constraint is sufficient to enhance inference for mixture representations in the A-arm VAE framework. Our theoretical results show that the required number of arms satisfying Eq. 5 is a function of the categorical distribution and the likelihood (Eq. 15, Supplementary Section A). In the particular case of uniformly distributed categories, one pair of coupled arms is enough to satisfy Eq. 5 (see Corollary 1, Supplementary Section A).
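As an informal sanity check on the intuition behind Propositions 1 and 2, the toy simulation below estimates the expected log posterior of the true category when A independent views of a sample are pooled into a single posterior. The 1-D Gaussian mixture, the non-uniform prior, and the exact pooling rule are simplifying assumptions made purely for illustration; they are not the setting analyzed in Supplementary Section A.

```python
import numpy as np

rng = np.random.default_rng(0)
mus = np.array([-2.0, 0.0, 2.0])     # toy class means (assumption)
prior = np.array([0.6, 0.3, 0.1])    # non-uniform p(c) (assumption)
sigma = 1.5

def log_posterior(views, prior, mus, sigma):
    """log q(c | x_1, ..., x_A) for A i.i.d. Gaussian views of one sample."""
    loglik = -0.5 * ((views[:, None] - mus[None, :]) / sigma) ** 2
    logp = np.log(prior) + loglik.sum(axis=0)
    return logp - np.logaddexp.reduce(logp)   # normalize in log space

for A in (1, 2, 4, 8):
    vals = []
    for _ in range(20000):
        m = rng.choice(len(prior), p=prior)            # true category
        views = rng.normal(mus[m], sigma, size=A)      # A noisy views
        vals.append(log_posterior(views, prior, mus, sigma)[m])
    print(A, np.mean(vals))   # expected log posterior of the true class grows with A
```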
We emphasize that the proposed framework does not require any weak supervision as in (Bouchacourt et al., 2017). Instead, it relies on representations that are invariant under non-identical copies of observations. Moreover, unlike (Bouchacourt et al., 2017; Shu et al., 2019; Locatello et al., 2020), the multi-arm framework is not restricted to the continuous space.
Arms observe non-identical copies of samples. In the A-arm VAE framework, arms receive non-identical observations that share the discrete variational factor. To achieve this in a fully unsupervised setting, we use type-preserving data augmentation that generates independent and identically distributed copies of data while preserving its categorical identity. For image datasets, conventional transformations such as rotation, scaling, or translation can serve as type-preserving augmentations. However, for non-image datasets, e.g. single-cell data, we seek a generative model that learns transformations representing within-class variability in an unsupervised manner. To this end, inspired by DAGAN (Antoniou et al., 2017) and VAE-GAN (Larsen et al., 2016), we develop a generative model to provide collections of observations for our multi-arm framework (Supplementary Section F). The proposed generative model learns to generate augmented samples in the vicinity of given samples in the latent space, without knowing their types (Eq. 70). In Supplementary Section A, Remark 2, we further discuss an under-exploration scenario in data augmentation, in which the augmented samples are not independently distributed and are concentrated around the given sample.
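The learned augmenter itself is described in Supplementary Section F; as a rough stand-in for the idea of producing non-identical, type-preserving copies, one could jitter a sample in the latent space of a pretrained autoencoder and decode it back, as sketched below. The encoder/decoder modules and the noise scale are placeholders, not the paper's actual VAE-GAN-based augmenter.

```python
import torch

@torch.no_grad()
def type_preserving_copy(x, encoder, decoder, noise_scale=0.1):
    """Return a copy of x perturbed in latent space (illustrative only).

    `encoder` and `decoder` are assumed pretrained modules; a small latent
    jitter keeps the copy in the vicinity of x, which is the property the
    paper's augmenter is designed to provide.
    """
    z = encoder(x)
    z_aug = z + noise_scale * torch.randn_like(z)
    return decoder(z_aug)
```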
3.2 CPL-MIXVAE: PAIRWISE COUPLING IN A-ARM VAE
In the A-arm VAE framework, the mixture representation is obtained through the optimization in Eq. 2. Not only is it challenging to solve the maximization in Eq. 2 due to the equality constraint, but the objective remains a function of p(c) which is unknown, and typically non-uniform. To overcome this, we use an equivalent formulation for Eq. 2 by applying the pairwise coupling paradigm as follows (details of derivation in Supplementary Section C):
max  Σ_{a=1}^{A} (A − 1) ( E_{q(sa,ca|xa)}[log p(xa|sa, ca)] − E_{q(ca|xa)}[DKL(q(sa|ca,xa) ‖ p(sa|ca))] ) − Σ_{a<b} E_{q(sa|ca,xa)} E_{q(sb|cb,xb)}[DKL(q(ca|xa) q(cb|xb) ‖ p(ca, cb))]
s.t. ca = cb  ∀ a, b ∈ [1, A], a < b    (6)
We relax the optimization in Eq. 6 into an unconstrained problem by marginalizing the joint distribution over a mismatch measure between categorical variables (see Supplementary Section D):
max  Σ_{a=1}^{A} (A − 1) ( E_{q(sa,ca|xa)}[log p(xa|sa, ca)] − E_{q(ca|xa)}[DKL(q(sa|ca,xa) ‖ p(sa|ca))] ) + Σ_{a<b} ( H(ca|xa) + H(cb|xb) − λ E_{q(ca,cb|xa,xb)}[d²(ca, cb)] )    (7)
In Eq. 7, in addition to entropy-based confidence penalties known as mode collapse regularizers (Pereyra et al., 2017), the distance measure d(ca, cb) encourages a consensus on the categorical assignment controlled by λ ≥ 0, the coupling hyperparameter. We refer to the model in Eq. 7 as cpl-mixVAE (Fig. 1a). In cpl-mixVAE, VAE arms try to achieve identical categorical assignments while independently learning their own style variables. In experiments, we set λ = 1 universally. While the bottleneck architecture already encourages interpretable continuous variables, this formulation can be easily extended to include an additional hyperparameter to promote disentanglement of continuous variables as in β-VAE (Higgins et al., 2017). Additional analyses to assess the sensitivity of the cpl-mixVAE’s performance to its coupling factor can be found in the Supplementary Section G.
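For the special case A = 2, the unconstrained objective in Eq. 7 could be assembled roughly as sketched below, with the mismatch term instantiated by the Aitchison distance on the simplex that is defined in the next paragraphs. The per-arm `elbo_terms` interface (returning the arm's conditional ELBO and its categorical posterior) is an assumed convention, so this is a hedged illustration of the loss structure rather than the authors' training code.

```python
import torch

def aitchison_distance(p, q, eps=1e-8):
    # Distance on the simplex via the centered log-ratio (clr) transform.
    clr_p = torch.log(p + eps) - torch.log(p + eps).mean(dim=-1, keepdim=True)
    clr_q = torch.log(q + eps) - torch.log(q + eps).mean(dim=-1, keepdim=True)
    return torch.linalg.norm(clr_p - clr_q, dim=-1)

def coupled_loss(arm_a, arm_b, x_a, x_b, lam=1.0, eps=1e-8):
    """2-arm sketch of Eq. 7 (negated for minimization), under assumed arm interfaces."""
    elbo_a, qc_a = arm_a.elbo_terms(x_a)   # assumed: scalar ELBO, [batch, K] posterior
    elbo_b, qc_b = arm_b.elbo_terms(x_b)

    ent_a = -(qc_a * torch.log(qc_a + eps)).sum(dim=-1).mean()   # H(c_a | x_a)
    ent_b = -(qc_b * torch.log(qc_b + eps)).sum(dim=-1).mean()
    coupling = (aitchison_distance(qc_a, qc_b) ** 2).mean()

    # Eq. 7 is a maximization; return its negative for gradient descent.
    return -(elbo_a + elbo_b + ent_a + ent_b - lam * coupling)
```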
It may be instructive to cast Eq. 7 in an equivalent constrained optimization form.
Remark 1. The A-arm VAE framework is a collection of constrained variational models as follows:
max  Σ_{a=1}^{A} E_{q(sa,ca|xa)}[log p(xa|sa, ca)] − E_{q(ca|xa)}[DKL(q(sa|ca,xa) ‖ p(sa|ca))] + H(ca|xa)
s.t.  E_{q(ca|xa)}[d²(ca, cb)] < ε    (8)
where ε denotes the strength of the consensus constraint. Here, cb indicates the assigned category by any one of the arms, b ∈ {1, . . . , A}, imposing structure on the discrete variable to approximate its prior distribution.
Distance between categorical variables. d(ca, cb) denotes the distance between a pair of |c|dimensional un-ordered categorical variables, which are associated with probability vectors with non-negative entries and sum-to-one constraint that form a K-dimensional simplex, where K = |c|. In the real space, a typical choice to compute the distance between two vectors is using Euclidean geometry. However, this geometry is not suitable for probability vectors. Here, we utilize Aitchison geometry (Aitchison, 1982; Egozcue et al., 2003), which defines a vector space on the simplex. Accordingly, the distance in the simplex, i.e. dSK (ca, cb) is defined as dSK (ca, cb) = ‖clr(ca)− clr(cb)‖2, ∀ca, cb ∈ SK , where clr(·) denotes the isometric centered-log-ratio transformation in the simplex. This categorical distance satisfies the conditions of a mathematical metric according to Aitchison geometry.
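A direct reading of this definition in NumPy is shown below; the clr transform and the resulting distance follow the formula above, while the small epsilon guarding against zero probabilities is an implementation assumption.

```python
import numpy as np

def clr(p, eps=1e-12):
    """Centered log-ratio transform of a probability vector in the simplex."""
    logp = np.log(p + eps)
    return logp - logp.mean()

def aitchison_distance(p, q):
    """d_SK(p, q) = || clr(p) - clr(q) ||_2."""
    return np.linalg.norm(clr(p) - clr(q))

# Example with two categorical posteriors over K = 3 types.
p = np.array([0.7, 0.2, 0.1])
q = np.array([0.6, 0.3, 0.1])
print(aitchison_distance(p, q))
```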
3.3 SEEKING CONSENSUS IN THE SIMPLEX
An instance of the mode collapse problem (Lucas et al., 2019) manifests itself in the minimization of dSK (ca, cb) (Eq. 7): its trivial local optima encourages the network to abuse the discrete latent factor by ignoring many of the available categories. In the extreme case, the representations can collapse onto a single category; ca = cb = c0. In this scenario, the continuous variable is compelled to act as a primary latent factor, while the model fails to deliver an interpretable mixture representation despite achieving an overall low loss value. To avoid such undesirable local equilibria while training, we add perturbations to the categorical representation of each arm. If posterior probabilities in the simplex have small dispersion, the perturbed distance calculation overstates the discrepancies. Thus, instead of minimizing d2SK (ca, cb), we minimize a perturbed distance
d²_σ(ca, cb) = Σ_k (σ_{ak}^{−1} log c_{ak} − σ_{bk}^{−1} log c_{bk})², which corresponds to the distance between additively perturbed ca and cb vectors in Aitchison geometry. Here, σ²_{ak} and σ²_{bk} indicate the mini-batch variances of the k-th category, for arms a and b. We next show that the perturbed distance d_σ(·) is bounded by d_{SK}(·) and non-negative values ρ_u, ρ_l:
Proposition 3. Suppose ca, cb ∈ SK, where SK is a simplex of K > 0 parts. If d_{SK}(ca, cb) denotes the distance in Aitchison geometry and d²_σ(ca, cb) = Σ_k (σ_{ak}^{−1} log c_{ak} − σ_{bk}^{−1} log c_{bk})² denotes a perturbed distance, then
d²_{SK}(ca, cb) − ρ_l ≤ d²_σ(ca, cb) ≤ d²_{SK}(ca, cb) + ρ_u
where ρ_u, ρ_l ≥ 0, ρ_u = K(τ²_{σu} + τ²_c) + 2Δ_σ τ_c, ρ_l = Δ²_σ/K − K τ²_{σl}, τ_c = max_k {log c_{ak} − log c_{bk}}, τ_{σu} = max_k {g_k}, τ_{σl} = max_k {−g_k}, Δ_σ = Σ_k g_k, and g_k = (σ_{ak}^{−1} − 1) log c_{ak} − (σ_{bk}^{−1} − 1) log c_{bk}. (Proof in Supplementary Section E)
Thus, when ca and cb are similar and their spread is not small, dσ(ca, cb) closely approximates dSK (ca, cb). Otherwise, it diverges from dSK (·) to avoid mode collapse.
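Read literally, the perturbed distance above can be computed from the two arms' mini-batch posteriors as in the snippet below. Treating σ_{ak} as the per-category mini-batch standard deviation (the square root of the stated variance) and adding a small epsilon are implementation assumptions in this sketch.

```python
import torch

def perturbed_distance(c_a, c_b, eps=1e-8):
    """d_sigma(c_a, c_b) for [batch, K] categorical posteriors of two arms."""
    sigma_a = c_a.std(dim=0, keepdim=True) + eps   # per-category spread, arm a
    sigma_b = c_b.std(dim=0, keepdim=True) + eps   # per-category spread, arm b
    diff = torch.log(c_a + eps) / sigma_a - torch.log(c_b + eps) / sigma_b
    return torch.sqrt((diff ** 2).sum(dim=-1))
```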
4 EXPERIMENTS
We used four datasets: dSprites, MNIST, and two single-cell RNA sequencing (scRNA-seq) datasets; Smart-seq ALM-VISp (Tasic et al., 2018) and 10X MOp (Yao et al., 2021). Although dSprites and MNIST datasets do not require high-dimensional settings for mixture representation, to facilitate comparisons of cpl-mixVAE with earlier methods, first we report the results for these benchmark datasets. We trained three unsupervised VAE-based methods for mixture modeling: JointVAE (Dupont, 2018), CascadeVAE (Jeong & Song, 2019), and ours (cpl-mixVAE). For MNIST, we additionally trained the popular InfoGAN (Chen et al., 2016) as the most comparable GAN-based model. To show the interpretability of the mixture representations, (i) for the discrete latent factor, we report the accuracy (ACC) of categorical assignments and the DKL(q(c)‖p(c))), (ii) for the continuous variable, we perform latent traversal analysis by fixing the discrete factor and changing the continuous variable according to p(s|c,x). We calculated the accuracy by using minimum weight matching (Kuhn, 1955) to match the categorical variables obtained by cpl-mixVAE with the available cluster labels for each dataset. Additionally, we report the computational efficiency (number of iterations per second) to compare the training complexity of the multi-arm framework against earlier methods (Table 1). All reported numbers for cpl-mixVAE models are average accuracies calculated across arms. In VAE-based models, to sample from q(ca|xa), we use the Gumbel-softmax distribution (Jang et al., 2016; Maddison et al., 2014). In cpl-mixVAE, each arm received an augmented copy of the original input generated by the deep generative augmenter (Supplementary Section F) during training. Details of the network architectures and training settings can be found in Supplementary Section L.
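The accuracy metric described here depends on matching inferred categories to reference labels by minimum-weight matching; one standard way to compute such a score (a sketch of the evaluation idea, not necessarily the exact code used for Table 1) applies the Hungarian algorithm to the contingency table:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def matched_accuracy(pred, true, n_classes):
    """Accuracy after optimally matching predicted categories to labels."""
    cont = np.zeros((n_classes, n_classes), dtype=np.int64)
    for p, t in zip(pred, true):
        cont[p, t] += 1
    # Maximizing matched counts = minimum-weight matching on the negated table.
    rows, cols = linear_sum_assignment(-cont)
    return cont[rows, cols].sum() / len(true)
```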
4.1 BENCHMARK DATASETS
dSprites. dSprites is procedurally generated from discrete (3 shapes) and continuous (6 style factors: scale, rotation, and position) latent factors. Based on the uniform distribution of classes, we used a 2-arm cpl-mixVAE with |c| = 3 and |s| = 6 to learn interpretable representations. Results in Table 1 show that our method outperforms the other methods in terms of categorical assignment accuracy. In addition to demonstrating the traversal results (Fig. 2, bottom row), we report disentanglement scores (DS in Table 1). Even though the continuous factors do not depend on the discrete factors in this synthetic dataset, we did not change the architecture and expected the network to infer this independence. For a fair comparison, we used the same disentanglement metric implemented for CascadeVAE (Jeong & Song, 2019).
MNIST. Similarly, due to the uniform distribution of digit labels in MNIST, we again used a 2-arm cpl-mixVAE model. Following the convention (Dupont, 2018; Jeong & Song, 2019; Bouchacourt
et al., 2017), each arm of cpl-mixVAE uses a 10-dimensional categorical variable representing digits (type), and a 10-dimensional continuous random variable representing the writing style (state). Table 1 displays the accuracy of the categorical assignment and the discrepancy between q(c) and p(c) for InfoGAN, two 1-arm VAE methods (JointVAE and CascadeVAE), and cpl-mixVAE with 2 arms. Additionally, to isolate the impact of data augmentation in training, we trained JointVAE† and CascadeVAE† where the models were trained with the same augmented copies of the original MNIST dataset as cpl-mixVAE. The results in Table 1 suggest that data augmentation by itself does not enhance the performance. Fig. 2 (top row) illustrates the continuous latent traversals for four dimensions of the state variable inferred by cpl-mixVAE, where each row corresponds to a different dimension of the categorical variable, and the state variable monotonically changes across columns. Both results in Table 1 and Fig. 2 show that cpl-mixVAE achieved an interpretable mixture representation with the highest categorical assignment accuracy.
Summary. cpl-mixVAE improves the discrete density approximation and infers better mixture representations. It outperforms earlier methods, without using extraneous optimization or heuristic channel capacities. Beyond performance and robustness, its computational cost is also comparable to that of the baselines.
4.2 SINGLE-CELL RNA SEQUENCING DATA
In this dataset the observations are individual cells and each observation consists of expressions of thousands of genes. Here, we used two scRNA-seq datasets: (i) Smart-seq ALM-VISp (Tasic et al., 2018) and (ii) 10X MOp (Yao et al., 2021). The Smart-seq dataset includes transcriptomic profiles of more than 10,000 genes for ∼22,000 cells from the mouse anterior lateral motor cortex (ALM) and the primary visual cortex (VISp). The 10X MOp dataset profiles ∼123,000 cells in the mouse primary motor cortex (MOp) with the droplet-based 10X Genomics Chromium platform. 10X-based data often display more gene dropouts, especially for genes with lower expression levels. scRNA-seq datasets are significantly more complex than typical benchmark datasets due to (i) large number of cell types (discrete variable), and (ii) class imbalance; in the 10X MOp dataset, for instance, the most- and the least-abundant cell types include 17,000 and 20 samples, respectively. Moreover, whether the observed diversity corresponds to discrete variability or a continuum is an ongoing debate in neuroscience (Scala et al., 2020). While using genes that are differentially expressed in subsets of cells, known as marker genes (MGs) (Trapnell, 2015) is a common approach to define cell types, the identified genes rarely obey the idealized MG definition in practice. Here, we focus on neuronal cells and use a subset of 5,000 highest variance genes. The original MG-based studies for each dataset suggested 115 (Smart-seq data) (Tasic et al., 2018) and 140 (10X-based data) (Yao et al., 2021) discrete neuronal types.
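Restricting the analysis to the 5,000 highest-variance genes can be expressed in a few lines; the log(1 + x) normalization shown here is a common preprocessing convention assumed for illustration rather than a detail stated in this section.

```python
import numpy as np

def top_variance_genes(counts, n_genes=5000):
    """Keep the n_genes highest-variance genes of a cells-by-genes matrix."""
    expr = np.log1p(counts)            # assumed normalization
    keep = np.argsort(expr.var(axis=0))[::-1][:n_genes]
    return expr[:, keep], keep
```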
Neuron type identification. Based on the suggested taxonomies in (Tasic et al., 2018; Yao et al., 2021), for the Smart-seq ALM-VISp data, we used 115- and 2-dimensional discrete and continuous variables, and for the 10X MOp data, we used 140- and 2-dimensional discrete and continuous latent variables. We compared the suggested cell types in (Tasic et al., 2018; Yao et al., 2021) with the discrete representations that are inferred from VAE models. Table 1 and Fig. 3(a-b) demonstrate the
performance of a 2-arm cpl-mixVAE model against JointVAE and CascadeVAE. In Fig. 3a.1 and 3b.1, we observe that for both datasets, JointVAE succeeds in identifying (sub)classes of neurons, e.g. excitatory class, or Sst subclass, but not neuronal types at leaf nodes. On the other hand, CascadeVAE learns an almost uniform distribution over all types despite a sizeable difference between the relative abundances of neuronal types (Fig. 3a.2 and 3b.2). Our results in Fig. 3(a-b) clearly show that cplmixVAE outperforms JointVAE and CascadeVAE in identifying meaningful known cell types. The confusion matrices in Fig. 3a.3 and 3b.3 demonstrate that even the inaccurate categorical assignments of cpl-mixVAE are still close to the matrix diagonals, suggesting a small cophenetic distance. That is, those cells are still assigned to nearby cell types in the dendrogram.
Using A > 2. Unlike the discussed benchmark datasets, the neuronal types are not uniformly distributed. Accordingly, we also investigated the accuracy improvement for categorical assignment when more than two arms are used. Fig. 3c illustrates the accuracy improvement with respect to a single autoencoder model, i.e. JointVAE, in agreement with our theoretical findings.
Identifying genes regulating cell activity. To examine the role of the continuous latent variable, we applied a similar traversal analysis to that used for the benchmark datasets. For a given cell sample and its discrete type, we changed each dimension of the continuous variable using the conditional distribution, and inspected gene expression changes caused by continuous variable alterations. Fig. 4 shows the results of the continuous traversal study for JointVAE and cpl-mixVAE, for two excitatory neurons belonging to the “L5 NP” (cell type (I)) and “L6 CT” (cell type (II)) sub-classes in ALM and MOp regions. Note that here, JointVAE is equivalent to a 1-arm VAE, with the exception of the type dependence of the state variable. Since CascadeVAE did not learn meaningful clustering of cells, even at the subclass level, we did not consider it for the continuous factor analysis. In each sub-figure, the latent traversal is color-mapped to normalized reconstructed expression values, where the y-axis corresponds to one dimension of the continuous variable, and the x-axis corresponds to three gene subsets, namely (i) MGs for the two excitatory types, (ii) immediate early genes (IEGs), and (iii) housekeeping gene (HKG) subgroups (Hrvatin et al., 2018; Tarasenko et al., 2017). For cpl-mixVAE (Fig. 4b), the normalized expression of the reported MGs as indicators for excitatory cell types (discrete factors) is unaffected by changes of identified continuous variables. In contrast, for JointVAE (Fig. 4a), we observed that the normalized expression of some MGs (5 out of 10) are changed due to the continuous factor traversal. Additionally, we found that the expression changes inferred by cpl-mixVAE for IEGs and HKGs are essentially monotonically linked to the continuous variable, confirming that the expression of IEGs and HKGs depends strongly on the cell activity variations under different metabolic and environmental conditions. Conversely, JointVAE
fails to reveal such activity-regulated monotonicity for IEGs and HKGs. Furthermore, our results for cpl-mixVAE reveal that the expression of activity-regulated genes depends on the cell type, i.e. IEGs and HKGs respond differently to activation depending on their cell types (compare rows I and II in Fig. 4b). However, in Fig. 4a, since the baseline JointVAE does not take into account the dependency of discrete and continuous factors, it fails to reveal the dependence of activity-regulated expression to the cell type, and therefore produces identical expressions for both types (I) and (II). These findings are consistent over multiple randomly initialized runs (Supplementary Section K.1). See Supplementary Section K.2 for more results on other cell types and gene subsets.
Summary. The cpl-mixVAE model successfully identified the majority of known excitatory and inhibitory neurons in multiple cortical regions. Our findings suggest that cpl-mixVAE, by acknowledging the dependencies of continuous and categorical factors, captures relevant and interpretable continuous variability that can provide insight when deciphering the molecular mechanisms shaping the landscape of biological states, e.g. due to metabolism or disease.
4.3 ABLATION STUDIES
To elucidate the success of the A-arm VAE framework in mixture modeling, we investigate the categorical assignment performance under different training settings. Since CascadeVAE does not learn the categorical factors by variational inference, here we mainly study JointVAE (as a 1-arm VAE) and cpl-mixVAE (as a 2-arm VAE). In Section 4.1, we show that data augmentation by itself does not enhance the categorical assignment (JointVAE†). To understand whether architectural differences put JointVAE at a disadvantage, we trained JointVAE‡ (Table S1), which uses the same architecture as the one used in cpl-mixVAE. JointVAE‡ uses the same learning procedure as JointVAE, but its convolutional layers are replaced by fully-connected layers (see Supplementary Section J and L for details). The result for JointVAE‡ suggests that the superiority of cpl-mixVAE is not due to the network architecture either. We also examined the performance changes of the proposed 2-arm cpl-mixVAE under three different settings: (i) cpl-mixVAE∗, where coupled networks are not independent and network parameters are shared; (ii) cpl-mixVAEa, where only affine transformations are used for data augmentation; and (iii) cpl-mixVAE(s 6 | c), where the state variable is independent of the discrete variable (Table S1). Our results show that the proposed cpl-mixVAE obtained the best categorical assignments across all training settings. We also examined the accuracy of categorical assignments for the cpl-mixVAE model, under different dimensions of discrete latent variable, for both MNIST and scRNA-seq datasets (see Supplementary Section J). We experimentally observe that while JointVAE suffers from sensitivity to empirical choices of |c|, cpl-mixVAE is more robust in encoding the discrete variability, without suffering from mode collapse (Fig. S9 and Fig. S10).
5 CONCLUSION
We have proposed cpl-mixVAE as a multi-arm framework to apply the power of collective decision making in unsupervised joint representation learning of discrete and continuous factors, scalable to the high-dimensional discrete space. This framework utilizes multiple pairwise-coupled autoencoding arms with a shared categorical variable, while independently learning the continuous variables. Our experimental results for all datasets support the theoretical findings, and show that cpl-mixVAE outperforms comparable models. Importantly, for the challenging scRNA-seq datasets, we showed that the proposed framework identifies biologically interpretable cell types and differentiates between type-dependent and activity-regulated genes.
2. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
3. What are the limitations regarding the factorization of the generative model, and how does it compare to other approaches?
4. How does the consensus mechanism work, and how effective is it in avoiding mode collapse?
5. What are the differences between the proposed method and other baselines, such as JointVAE and CascadeVAE?
6. How does the paper handle non-uniform class distributions, and how effective is its approach in this regard?
7. What are the reviewer's concerns about the experimental setup and results, including the choice of baselines and the ordering of categories?
8. How does the paper validate its results, especially in terms of evaluating the quality of the unsupervised learning methods used? | Summary Of The Paper
Review | Summary Of The Paper
The paper presents a consensus model over a base-level VAE model that has categorical, c, and continuous, s, latent components. The base-level model is instantiated a number of times (arms) where, given a training instance x, its arm is fed with perturbations x_i of the instance. The different arms are constrained so that they assign the same category to the different x_i perturbations. The main motivation for such a consensus constraint is, if I am not mistaken, that it avoids the mode collapse problem. In addition, according to the paper, the suggested approach can handle situations in which we have very different numbers of training instances in the different categories.
Review
The paper considers the following factorisation of the generative model, p(x, c, s) = p(x|s, c) p(s|c) p(c), in which the continuous components are a function of the categorical, where the categorical variable captures different (discrete) classes within the data and the continuous component explains variations within them.
The paper is mostly easy to read, though there are some points that require clarifications. It also comes with a quite extensive, though not complete, set of experiments and analysis.
My main issue is that the baselines against which the paper compares do not exhibit the generative structure that is given above; both of them, JointVAE and CascadeVAE, assume that the categorical and continuous components are independent. This makes it difficult to assess whether the merits of the paper come from the factorization of the generative model that assumes the dependence of the continuous on the categorical component or whether it comes from the consensus mechanism. Such factorizations have been quite extensively discussed elsewhere, for example in Lavda et al, Data-dependent conditional priors for unsupervised learning of multimodal data, Entropy, 2020, where the authors discuss how such a factorisation actually protects from mode collapse, as well as a number of other issues relevant to the discussion in the current paper, such as how the different categories are used through entropy regularisers that naturally appear within the initial objective. Since the base-level factorisation over which the present paper builds the consensus model is the same, it would have been very easy to include it as a considerably more relevant baseline in order to demonstrate whether the benefits come from the consensus approach, as well as include a discussion of differences from Lavda et al.
One more point that I have missed concerns the claim of the paper that the multitude of arms allows to deal with cases where the class distribution is not uniform; is this something that is demonstrated in Section 3.2? If yes, I did not understand what it is that makes the model robust to non-uniform p(c). Related to that are the suggestions to use 2 arms when the distribution p(c) is uniform.
Details:
In Eq. 3, which gives the variational loss for arm a, the term E_{q(s_a|c_a,x_a)}[D_KL(q(c_a|x_a) || p(c_a))] should probably be simplified to just the KL term D_KL(q(c_a|x_a) || p(c_a)), since the distributions in the latter do not depend on the s_a variable with respect to which the expectation is taken. See for example the respective derivation in Lavda et al., 2020.
In figure (b), the graphical model describes the generative model p(x|s, c) p(s|c) p(c); however, what is given just below seems to be the inference model, even though p is used instead of q.
I have a bit of a problem conceptualising q(c|x_1, . . . , x_A): how is this posterior instantiated concretely, or is it only defined implicitly through Equation 2? After going through the proof of Proposition 1 in the appendix this is clear.
Proposition 1 says that as the number of arms/experts increases, the likelihood of the true category will also increase.
Proposition 1:
What is the difference between x_i ∼ p(x|m) and the q(x|m) that appears in the expectation? p(x|m) never appears in the derivation in the appendix. Is q(x|m) meant to denote that we randomly draw an x, and p(x|m) the noisy versions of it, in which case it would probably be more appropriate to have p(x_i|x)? By the way, this two-level sampling is missing from the derivations, though I am not sure it is needed, but I find confusing the fact that there are two types of rvs here, the original x and its noisy versions x_i, even if I get the point. For example, in Eq. 5 we marginalise over x a quantity that does not contain x but a random variable of x, the x_i.
Appendix, Eq. 5: the denominator has disappeared and reappears again in 6.
Since the method works by operating on perturbed versions of the training instance, one needs to define a perturbation model. The paper proposes a generative model based on GAN-VAE where the main desideratum is to perturb the instances while not altering their latent categorical code, the non-observed class.
When computing the distance over the categorical values c_a, it seems that the paper considers that c_a ∈ S^K, i.e. they belong to the probability simplex. However, c_a is not a probability vector but a sample from such a categorical distribution, which should normally be a one-hot-encoded vector. Which is the case? I.e., are the distances computed over the samples or over the respective probabilities? How relevant is this?
Experiments:
The paper said that when the distribution of classes is uniform we should use a 2-arm model (dSprites and MNIST); how is this motivated? If I understood the theoretical results correctly, these show that when we increase the number of arms we will have a larger log posterior for the true categorical variable, but I am not sure I saw a relation to the uniformity of the categories.
In the real-world experiments (Figure 3), how are the categories (i.e. the columns) ordered? I presume the ordering of the three different algorithms is not really comparable and what really matters is how these align with the real cell types. I am just curious: is the ordering based on the largest values that one gets in the diagonal of the confusion matrix?
In the same figure, it seems that the discrete components that the proposed method uncovers are in agreement with the hierarchical clustering of the cells. So what we have here are two unsupervised methods the results of which are in agreement. I guess that the hierarchical clustering results have been validated by the domain experts in (Tasic et al., 2018; Yao et al., 2021); I am just wondering how to approach such an evaluation where there is no real ground truth. Of course, one could say it is a good thing that two rather different methods agree on how they cluster cells.
The evaluation in figure 4, where a given continuous latent variable varies with the discrete component, makes intuitive sense. The figure shows that the profiles produced by cpl-mixVAE vary smoothly within a given category and look rather different between the two categories. This is not the case for JointVAE, where the categorical component seems to bring no structure: the two different categories have rather similar profiles. The latter is something to be expected, as also noted by the authors, since JointVAE assumes independence of the categorical and continuous components, an assumption that makes it not the most appropriate baseline. At the same time, other than saying that intuitively the results of cpl-mixVAE look good, it is hard to make an additional comment, because this requires quite some expertise on the biology side.
ICLR | Title
Mixture Representation Learning with Coupled Autoencoders
Abstract
Latent representations help unravel complex phenomena. While continuous latent variables can be efficiently inferred, fitting mixed discrete-continuous models remains challenging despite recent progress, especially when the discrete factor dimensionality is large. A pressing application for such mixture representations is the analysis of single-cell omic datasets to understand neuronal diversity and its molecular underpinnings. Here, we propose an unsupervised variational framework using multiple interacting networks called cpl-mixVAE that significantly outperforms state-of-the-art in high-dimensional discrete settings. cpl-mixVAE introduces a consensus constraint on discrete factors of variability across the networks, which regularizes the mixture representations at the time of training. We justify the use of this framework with theoretical results and validate it with experiments on benchmark datasets. We demonstrate that our approach discovers interpretable discrete and continuous variables describing neuronal identity in two single-cell RNA sequencing datasets, each profiling over a hundred cortical neuron types.
1 INTRODUCTION
Fitting mixed discrete-continuous models arises in many contexts. While continuous latent variables can be efficiently inferred with variational and adversarial formulations, inference of continuous and discrete factors in generalized mixture models remains challenging despite recent progress (Jang et al., 2016; Chen et al., 2016; Dupont, 2018; Jeong & Song, 2019). A pressing domain of application for such models is quantifying factors of biological variability in single-cell omic studies. The high-throughput and high-dimensional datasets produced by these studies document a previously unappreciated diversity of gene expression. In neuroscience, this poses the identification of cell types and cell states as a key research area to understand how neuronal circuits function (Bargmann et al., 2014), where the notions of cell types and states can be considered as biological interpretations of discrete and continuous variability. While marker gene based studies suggest the existence of more than 100 neuronal cell types in just a single brain region, there is no agreement on such categorization and interpretation of the remaining continuous variability (Seung & Sümbül, 2014; Zeng & Sanes, 2017; Tasic et al., 2018). Moreover, existing unsupervised, joint continuous-discrete learning methods are tailored for problems with relatively few and equally-abundant discrete components, and accurate inference remains out of reach for these applications.
Deep generative models have previously been applied to single-cell datasets, where the focus is on the cluster identity and it is typically inferred by post-hoc analysis of a continuous factor (Lopez et al., 2018). Deep Gaussian mixture models (Dilokthanakul et al., 2016; Johnson et al., 2016; Jiang et al., 2017) also focus on the identification of categories and do not take interpretability of the remaining continuous variability into account. To address the need for joint inference of interpretable discrete and continuous factors, various adversarial and variational methods have been proposed. While existing adversarial generative models, e.g. InfoGAN (Chen et al., 2016), are susceptible to stability issues (Higgins et al., 2017; Kim & Mnih, 2018), variational autoencoders (VAEs) (Kingma & Welling, 2013) emerge as efficient and more stable alternatives (Tschannen et al., 2018; Zhang et al., 2018; Dupont, 2018; Jeong & Song, 2019). VAE-based approaches approximate the mixture model by assuming a family of distributions qφ and select the member closest to the true model p. Popular choices in VAE implementations include (1) using KL divergence to compute discrepancy between qφ and p, and (2) using a multivariate Gaussian mixture distribution with uniformly distributed discrete and isotropic Gaussian distributed continuous priors. However, such choices may lead to
underestimating the posterior variance (Minka et al., 2005; Blei et al., 2017). Solutions to resolve this issue are mainly applicable in low-dimensional spaces or for continuous factors alone (Deasy et al., 2020; Kingma et al., 2016; Ranganath et al., 2016; Quiroz et al., 2018).
Inspired by collective decision making, we introduce a variational framework using multiple autoencoding arms to jointly infer interpretable finite discrete (categorical) and continuous factors in the presence of high-dimensional discrete space. Coupled-autoencoders have been previously studied in the context of multi-modal recordings, where each arm learns only a continuous latent representation for one of the data modalities (Feng et al., 2014; Gala et al., 2019; Lee & Pavlovic, 2020). Here, we develop a novel pairwise-coupled autoencoder framework for a single data modality. The proposed framework imposes a consensus constraint on the categorical posterior at the time of training and allows dependencies between continuous and categorical factors. We define the consensus constraint based on the Aitchison geometry in the probability simplex, which avoids the mode collapse problem. We show that the coupled multi-arm architecture enhances accuracy, robustness, and interpretability of the inferred factors without requiring any priors on the relative abundances of categories. Finally, on datasets profiling different cortical regions in the mammalian brain, we show that our method can be used to discover neuronal types as discrete categories and type-specific genes regulating the continuous within-type variability, such as metabolic state or disease state.
Related work. There is an extensive body of research on clustering in mixture models (Dilokthanakul et al., 2016; Jiang et al., 2017; Tian et al., 2017; Guo et al., 2016; Locatello et al., 2018b). The idea of improving the clustering performance through seeking a consensus and co-training and ensembling across multiple observations has been explored in both unsupervised (Monti et al., 2003; Kumar & Daumé, 2011) and semi-supervised contexts (Blum & Mitchell, 1998). However, these methods do not consider the underlying continuous variabilities across observations. Moreover, unlike ensemble methods, which pool the results of different trained workers, autoencoding arms seek a consensus at the time of learning in our framework.
The proposed framework does not need any supervision since the individual arms provide a form of prior or weak supervision for each other. In this regard, our paper is related to a body of work that attempts to improve representation learning by using semi-supervised or group-based settings (Bouchacourt et al., 2017; Hosoya, 2019; Nemeth, 2020). Bouchacourt et al. (2017) demonstrated a multi-level variational autoencoder (MLVAE) as a semi-supervised VAE by revealing that observations within groups share the same type. Hosoya (2019) and Nemeth (2020) attempted to improve MLVAE by imposing a weaker condition to the grouped data. In recent studies (Shu et al., 2019; Locatello et al., 2020), a weakly supervised variational setting has been proposed for disentangled representation learning by providing pairs of observations that share at least one underlying factor. These studies rely on learning latent variables in continuous spaces, and have been applied only to image datasets with low-dimensional latent representations.
Recent advances in structured variational methods, such as imposing a prior (Ranganath et al., 2016) or spatio-temporal dependencies (Quiroz et al., 2018) on the latent distribution parameters, allow for scaling to larger dimensions. However, these solutions are not directly applicable to the discrete space, which will be addressed in our A-arm VAE framework.
2 SINGLE MIXTURE VAE FRAMEWORK
For an observation x ∈ R^D, a VAE learns a generative model p_θ(x|z) and a variational distribution q_φ(z|x), where z ∈ R^M is a latent variable with a parameterized distribution p(z) and M ≪ D (Kingma & Welling, 2013). Disentangling different sources of variability into different dimensions of z enables an interpretable selection of latent factors (Higgins et al., 2017; Locatello et al., 2018a). However, the interplay between continuous and discrete variabilities present in many real-world datasets is often overlooked by existing methods. This problem can be addressed within the VAE framework in an unsupervised fashion by introducing a categorical latent variable c denoting the class label, alongside the continuous latent variable s. We refer to the continuous variable s as the state or style variable interchangeably. Assuming s and c are independent random variables, the evidence lower bound (ELBO) (Blei et al., 2017) for a single mixture VAE with the distributions parameterized by θ and φ is given by,
L(φ, θ) = E_{q_φ(s,c|x)}[log p_θ(x|s, c)] − D_KL(q_φ(s|x) ‖ p(s)) − D_KL(q_φ(c|x) ‖ p(c)).   (1)

Maximizing the ELBO in Eq. 1 imposes characteristics on q(s|x) and q(c|x) that can result in underestimation of posterior probabilities, such as the mode collapse problem, where the network ignores a
subset of latent variables (Minka et al., 2005; Blei et al., 2017). Recently, VAE-based solutions were proposed by imposing a uniform structure on p(c): akin to β-VAE (Higgins et al., 2017; Burgess et al., 2018), JointVAE (Dupont, 2018) modifies the ELBO by assigning a pair of controlled information capacities for each variational factor, i.e., C_s ∈ R^{|s|} and C_c ∈ R^{|c|}. The main drawback of JointVAE is that its performance is tied to heuristic tuning of |s| × |c| capacities over training iterations, making it vulnerable to mode collapse in high-dimensional settings. Another recent VAE-based mixture model solution, CascadeVAE (Jeong & Song, 2019), maximizes the ELBO through a semi-gradient-based algorithm by iterating over two separate optimizations for the continuous and categorical variables. While the separation of the optimization steps avoids the mode collapse problem, this separation is valid only when the categorical variable is uniformly distributed. Therefore, its performance strongly depends on the clusters having similar abundances in the dataset. Thus, earlier solutions fall short of learning interpretable mixture representations with high-dimensional discrete variables in real-world applications.
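To make the structure of Eq. 1 concrete, the following is a minimal PyTorch-style sketch of its three terms for a single mixture VAE. The encoder/decoder interfaces, the Gumbel-softmax relaxation for sampling c, and the uniform categorical prior are illustrative assumptions rather than details of any particular implementation discussed above.

```python
import math
import torch
import torch.nn.functional as F

def mixture_vae_elbo(x, encoder, decoder, n_categories, temperature=0.67):
    """Sketch of Eq. 1: reconstruction - KL(s) - KL(c).

    Assumes encoder(x) returns (mu, logvar, cat_logits) and decoder(s, c)
    returns the reconstruction mean; both modules are placeholders.
    """
    mu, logvar, cat_logits = encoder(x)

    # Reparameterized continuous latent s ~ q(s|x)
    s = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
    # Relaxed discrete latent c ~ q(c|x) via Gumbel-softmax
    c = F.gumbel_softmax(cat_logits, tau=temperature, hard=False)

    # E_q[log p(x|s,c)], up to additive constants, for a Gaussian likelihood
    recon = -F.mse_loss(decoder(s, c), x, reduction="sum")

    # KL(q(s|x) || N(0, I)) in closed form
    kl_s = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())

    # KL(q(c|x) || Uniform(K)) = sum_k q_k (log q_k + log K)
    q_c = F.softmax(cat_logits, dim=-1)
    kl_c = torch.sum(q_c * ((q_c + 1e-12).log() + math.log(n_categories)))

    return recon - kl_s - kl_c
```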
In addition to the issues discussed above, the performance and interpretability of those approaches are further limited by the common assumption that the continuous variable representing the style of the data is independent of the categorical variable. In practice, style often depends on the class label. For instance, even for the well-studied MNIST dataset, the histograms of common digit styles, e.g. “width”, markedly vary for different digits (Supplementary Section I). Moreover, further analysis of the identified continuous factor in the earlier approaches reveals that the independence assumption between q(s|x) and q(c|x) can be significantly violated (see Supplementary Sections H and I).
3 COUPLED MIXTURE VAE FRAMEWORK
The key intuition behind multi-arm networks is cooperation to improve posterior estimation. While the context is different, the popular phrase “wisdom of the crowd” (Surowiecki, 2005) can nevertheless be revealing: when a crowd (multiple arms) needs to make a decision, multiple estimates can increase the expected probability of a correct choice.
3.1 A-ARM VAE FRAMEWORK
We define the A-arm VAE as an A-tuple of independent and architecturally identical autoencoding arms, where the a-th arm parameterizes a mixture model distribution (Fig. 1a). In this framework, individual arms receive a collection of non-identical copies, {xa,xb, . . .} of the given sample, x, belonging to the same category. While each arm has its own mixture representation with potentially non-identical parameters, all arms cooperate to learn q(ca|xa), where ca = cb = · · · , via a cost function at the time of training. Accordingly, a crowd of VAEs with A arms can be formulated as a collection of constrained variational objectives as follows.
max  L_{s_1|c_1}(φ_1, θ_1) + ··· + L_{s_A|c_A}(φ_A, θ_A)   s.t.  c_1 = ··· = c_A   (2)

where L_{s_a|c_a}(φ_a, θ_a) is the variational loss for arm a,

L_{s_a|c_a}(φ_a, θ_a) = E_{q(s_a,c_a|x_a)}[log p(x_a|s_a, c_a)] − E_{q(c_a|x_a)}[D_KL(q(s_a|c_a, x_a) ‖ p(s_a|c_a))] − E_{q(s_a|c_a,x_a)}[D_KL(q(c_a|x_a) ‖ p(c_a))].   (3)
In Eq. 3, the variational loss for each arm is defined according to the graphical model in Fig. 1b, which is built upon the traditional ELBO in Eq. 1 by conditioning the continuous state on the categorical variable (derivation in Supplementary Section B). Therefore, learning an interpretable decomposition of the data relies on accurate assignment (inference) of the categorical latent factor. Propositions 1 and 2 below show that the shared categorical assignment inferred from q(c|x1, · · · ,xA), under the
c = c1 = · · · = cA constraint of the multi-arm framework improves the accuracy of the categorical assignment on expectation. Proposition 1. Consider the problem of mixture representation learning in a multi-arm VAE framework. For independent samples from category m, i.e. xi ∼ p(x|m),
E_{q(x|m)}[log q(c = m | {x_i}_{1:A})] > E_{q(x|m)}[log q(c = m | {x_i}_{1:B})],   (4)

where the left-hand side is subject to c = c_1 = ··· = c_A and the right-hand side to c = c_1 = ··· = c_B, if q(m|x_i) < 1 and A > B ≥ 1 denote the numbers of arms. (Proof in Supplementary Section A)
Thus, having more arms increases the expected log posterior for the true categorical latent variable unless it is already at its maximum. Proposition 2. In the A-arm VAE framework, there exists an A that guarantees a true categorical assignment on expectation. That is,
m = argmax_c  E_{q(x|m)}[log q(c | {x_i}_{1:A})],   s.t.  c = c_1 = ··· = c_A.   (5)
(Proof in Supplementary Section A) Accordingly, the consensus constraint is sufficient to enhance inference for mixture representations in the A-arm VAE framework. Our theoretical results show that the required number of arms satisfying Eq. 5 is a function of the categorical distribution and the likelihood (Eq. 15, Supplementary Section A). In the particular case of uniformly distributed categories, one pair of coupled arms is enough to satisfy Eq. 5 (see Corollary 1, Supplementary Section A).
We emphasize that the proposed framework does not require any weak supervision as in (Bouchacourt et al., 2017). Instead, it relies on representations that are invariant under non-identical copies of observations. Moreover, unlike (Bouchacourt et al., 2017; Shu et al., 2019; Locatello et al., 2020), the multi-arm framework is not restricted to the continuous space.
Arms observe non-identical copies of samples. In the A-arm VAE framework, arms receive non-identical observations that share the discrete variational factor. To achieve this in a fully unsupervised setting, we use type-preserving data augmentation that generates independent and identically distributed copies of data while preserving its categorical identity. For image datasets, conventional transformations such as rotation, scaling, or translation can serve as type-preserving augmentations. However, for non-image datasets, e.g. single-cell data, we seek a generative model that learns transformations representing within-class variability in an unsupervised manner. To this end, inspired by DAGAN (Antoniou et al., 2017) and VAE-GAN (Larsen et al., 2016), we develop a generative model to provide collections of observations for our multi-arm framework (Supplementary Section F). The proposed generative model learns to generate augmented samples in the vicinity of given samples in the latent space, without knowing their types (Eq. 70). In Supplementary Section A, Remark 2, we further discuss an under-exploration scenario in data augmentation, in which the augmented samples are not independently distributed and are concentrated around the given sample.
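For image data, the type-preserving augmentations mentioned above reduce to conventional transformations. A minimal sketch, assuming torchvision is available and using illustrative transformation ranges (not values from the paper), is:

```python
import torchvision.transforms as T

# Each arm receives its own independently transformed copy of x; the
# transformation perturbs style while preserving the category.
type_preserving = T.RandomAffine(degrees=15, translate=(0.1, 0.1), scale=(0.9, 1.1))

def make_copies(x, n_arms=2):
    """Return one non-identical, category-preserving copy of x per arm."""
    return [type_preserving(x) for _ in range(n_arms)]
```

For non-image data such as scRNA-seq, this step is replaced by the learned generative augmenter described above (Supplementary Section F).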
3.2 CPL-MIXVAE: PAIRWISE COUPLING IN A-ARM VAE
In the A-arm VAE framework, the mixture representation is obtained through the optimization in Eq. 2. Not only is it challenging to solve the maximization in Eq. 2 due to the equality constraint, but the objective remains a function of p(c) which is unknown, and typically non-uniform. To overcome this, we use an equivalent formulation for Eq. 2 by applying the pairwise coupling paradigm as follows (details of derivation in Supplementary Section C):
max  Σ_{a=1}^{A} (A − 1) ( E_{q(s_a,c_a|x_a)}[log p(x_a|s_a, c_a)] − E_{q(c_a|x_a)}[D_KL(q(s_a|c_a, x_a) ‖ p(s_a|c_a))] )
      − Σ_{a<b} E_{q(s_a|c_a,x_a)} E_{q(s_b|c_b,x_b)} [D_KL(q(c_a|x_a) q(c_b|x_b) ‖ p(c_a, c_b))]
s.t.  c_a = c_b  ∀ a, b ∈ [1, A], a < b   (6)

We relax the optimization in Eq. 6 into an unconstrained problem by marginalizing the joint distribution over a mismatch measure between categorical variables (see Supplementary Section D):

max  Σ_{a=1}^{A} (A − 1) ( E_{q(s_a,c_a|x_a)}[log p(x_a|s_a, c_a)] − E_{q(c_a|x_a)}[D_KL(q(s_a|c_a, x_a) ‖ p(s_a|c_a))] )
      + Σ_{a<b} ( H(c_a|x_a) + H(c_b|x_b) − λ E_{q(c_a,c_b|x_a,x_b)}[ d²(c_a, c_b) ] )   (7)
In Eq. 7, in addition to entropy-based confidence penalties known as mode collapse regularizers (Pereyra et al., 2017), the distance measure d(ca, cb) encourages a consensus on the categorical assignment controlled by λ ≥ 0, the coupling hyperparameter. We refer to the model in Eq. 7 as cpl-mixVAE (Fig. 1a). In cpl-mixVAE, VAE arms try to achieve identical categorical assignments while independently learning their own style variables. In experiments, we set λ = 1 universally. While the bottleneck architecture already encourages interpretable continuous variables, this formulation can be easily extended to include an additional hyperparameter to promote disentanglement of continuous variables as in β-VAE (Higgins et al., 2017). Additional analyses to assess the sensitivity of the cpl-mixVAE’s performance to its coupling factor can be found in the Supplementary Section G.
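A minimal sketch of the per-pair coupling terms of Eq. 7 (the entropy bonuses and the λ-weighted distance penalty) is given below; the per-arm reconstruction and conditional-KL terms are assumed to be computed elsewhere, and `dist_fn` is a placeholder for the categorical distance discussed next.

```python
import torch

def coupling_terms(q_ca, q_cb, dist_fn, lam=1.0, eps=1e-12):
    """Entropy bonuses and coupling penalty of Eq. 7 for one arm pair (a, b).

    q_ca, q_cb: (batch, K) categorical posteriors q(c_a|x_a) and q(c_b|x_b).
    dist_fn:    distance between categorical representations (e.g. Aitchison).
    """
    h_a = -(q_ca * (q_ca + eps).log()).sum(dim=-1)   # H(c_a | x_a)
    h_b = -(q_cb * (q_cb + eps).log()).sum(dim=-1)   # H(c_b | x_b)
    penalty = lam * dist_fn(q_ca, q_cb) ** 2          # lambda * d^2(c_a, c_b)
    # This quantity is added to the per-arm objectives and maximized.
    return (h_a + h_b - penalty).mean()
```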
It may be instructive to cast Eq. 7 in an equivalent constrained optimization form.
Remark 1. The A-arm VAE framework is a collection of constrained variational models as follows:
max  Σ_{a=1}^{A} E_{q(s_a,c_a|x_a)}[log p(x_a|s_a, c_a)] − E_{q(c_a|x_a)}[D_KL(q(s_a|c_a, x_a) ‖ p(s_a|c_a))] + H(c_a|x_a)
s.t.  E_{q(c_a|x_a)}[ d²(c_a, c_b) ] < ε   (8)

where ε denotes the strength of the consensus constraint. Here, c_b indicates the category assigned by any one of the arms, b ∈ {1, . . . , A}, imposing structure on the discrete variable to approximate its prior distribution.
Distance between categorical variables. d(c_a, c_b) denotes the distance between a pair of |c|-dimensional un-ordered categorical variables, which are associated with probability vectors with non-negative entries summing to one; such vectors form a K-dimensional simplex, where K = |c|. In the real space, a typical choice to compute the distance between two vectors is Euclidean geometry. However, this geometry is not suitable for probability vectors. Here, we utilize Aitchison geometry (Aitchison, 1982; Egozcue et al., 2003), which defines a vector space on the simplex. Accordingly, the distance in the simplex is defined as d_{S^K}(c_a, c_b) = ‖clr(c_a) − clr(c_b)‖_2, ∀ c_a, c_b ∈ S^K, where clr(·) denotes the isometric centered-log-ratio transformation in the simplex. This categorical distance satisfies the conditions of a mathematical metric according to Aitchison geometry.
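A minimal sketch of this distance, assuming probability vectors are represented as PyTorch tensors and with a small constant added for numerical stability, is:

```python
import torch

def clr(p, eps=1e-12):
    """Centered log-ratio transform of probability vectors on the simplex."""
    logp = (p + eps).log()
    return logp - logp.mean(dim=-1, keepdim=True)

def aitchison_distance(ca, cb):
    """d_{S^K}(c_a, c_b) = || clr(c_a) - clr(c_b) ||_2 in Aitchison geometry."""
    return torch.linalg.norm(clr(ca) - clr(cb), dim=-1)
```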
3.3 SEEKING CONSENSUS IN THE SIMPLEX
An instance of the mode collapse problem (Lucas et al., 2019) manifests itself in the minimization of d_{S^K}(c_a, c_b) (Eq. 7): its trivial local optima encourage the network to abuse the discrete latent factor by ignoring many of the available categories. In the extreme case, the representations can collapse onto a single category, c_a = c_b = c_0. In this scenario, the continuous variable is compelled to act as a primary latent factor, while the model fails to deliver an interpretable mixture representation despite achieving an overall low loss value. To avoid such undesirable local equilibria while training, we add perturbations to the categorical representation of each arm. If posterior probabilities in the simplex have small dispersion, the perturbed distance calculation overstates the discrepancies. Thus, instead of minimizing d²_{S^K}(c_a, c_b), we minimize a perturbed distance

d²_σ(c_a, c_b) = Σ_k ( σ_{ak}^{-1} log c_{ak} − σ_{bk}^{-1} log c_{bk} )²,

which corresponds to the distance between additively perturbed c_a and c_b vectors in Aitchison geometry. Here, σ²_{ak} and σ²_{bk} indicate the mini-batch variances of the k-th category for arms a and b. We next show that the perturbed distance d_σ(·) is bounded by d_{S^K}(·) and non-negative values ρ_u, ρ_l:

Proposition 3. Suppose c_a, c_b ∈ S^K, where S^K is a simplex of K > 0 parts. If d_{S^K}(c_a, c_b) denotes the distance in Aitchison geometry and d²_σ(c_a, c_b) = Σ_k ( σ_{ak}^{-1} log c_{ak} − σ_{bk}^{-1} log c_{bk} )² denotes a perturbed distance, then

d²_{S^K}(c_a, c_b) − ρ_l ≤ d²_σ(c_a, c_b) ≤ d²_{S^K}(c_a, c_b) + ρ_u

where ρ_u, ρ_l ≥ 0, ρ_u = K(τ²_{σu} + τ²_c) + 2Δ_σ τ_c, ρ_l = Δ²_σ/K − K τ²_{σl}, τ_c = max_k {log c_{ak} − log c_{bk}}, τ_{σu} = max_k {g_k}, τ_{σl} = max_k {−g_k}, Δ_σ = Σ_k g_k, and g_k = (σ_{ak}^{-1} − 1) log c_{ak} − (σ_{bk}^{-1} − 1) log c_{bk}. (Proof in Supplementary Section E)
Thus, when c_a and c_b are similar and their spread is not small, d_σ(c_a, c_b) closely approximates d_{S^K}(c_a, c_b). Otherwise, it diverges from d_{S^K}(·) to avoid mode collapse.
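A minimal sketch of this perturbed distance, assuming the per-category mini-batch standard deviations (whose squares are the variances σ²_{ak}, σ²_{bk} above) are computed outside the function, is:

```python
import torch

def perturbed_distance_sq(ca, cb, sigma_a, sigma_b, eps=1e-12):
    """d_sigma^2(c_a, c_b) = sum_k (log(c_ak)/sigma_ak - log(c_bk)/sigma_bk)^2.

    ca, cb:           (batch, K) categorical posteriors on the simplex.
    sigma_a, sigma_b: (K,) per-category mini-batch standard deviations for arms a, b.
    """
    diff = (ca + eps).log() / sigma_a - (cb + eps).log() / sigma_b
    return (diff ** 2).sum(dim=-1)
```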
4 EXPERIMENTS
We used four datasets: dSprites, MNIST, and two single-cell RNA sequencing (scRNA-seq) datasets, Smart-seq ALM-VISp (Tasic et al., 2018) and 10X MOp (Yao et al., 2021). Although the dSprites and MNIST datasets do not require high-dimensional settings for mixture representation, to facilitate comparisons of cpl-mixVAE with earlier methods, we first report the results for these benchmark datasets. We trained three unsupervised VAE-based methods for mixture modeling: JointVAE (Dupont, 2018), CascadeVAE (Jeong & Song, 2019), and ours (cpl-mixVAE). For MNIST, we additionally trained the popular InfoGAN (Chen et al., 2016) as the most comparable GAN-based model. To show the interpretability of the mixture representations, (i) for the discrete latent factor, we report the accuracy (ACC) of categorical assignments and D_KL(q(c) ‖ p(c)), and (ii) for the continuous variable, we perform latent traversal analysis by fixing the discrete factor and changing the continuous variable according to p(s|c, x). We calculated the accuracy by using minimum weight matching (Kuhn, 1955) to match the categorical variables obtained by cpl-mixVAE with the available cluster labels for each dataset. Additionally, we report the computational efficiency (number of iterations per second) to compare the training complexity of the multi-arm framework against earlier methods (Table 1). All reported numbers for cpl-mixVAE models are average accuracies calculated across arms. In VAE-based models, to sample from q(c_a|x_a), we use the Gumbel-softmax distribution (Jang et al., 2016; Maddison et al., 2014). In cpl-mixVAE, each arm received an augmented copy of the original input generated by the deep generative augmenter (Supplementary Section F) during training. Details of the network architectures and training settings can be found in Supplementary Section L.
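The minimum weight matching step can be implemented with the Hungarian algorithm; a minimal sketch, assuming integer predicted and reference labels in {0, ..., K−1} and the availability of scipy, is:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def matched_accuracy(pred, true, n_categories):
    """Accuracy after matching inferred categories to reference cluster labels."""
    cost = np.zeros((n_categories, n_categories))
    for p, t in zip(pred, true):
        cost[p, t] -= 1                      # negative co-occurrence counts
    row, col = linear_sum_assignment(cost)   # minimum-weight matching
    mapping = dict(zip(row, col))
    return float(np.mean([mapping[p] == t for p, t in zip(pred, true)]))
```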
4.1 BENCHMARK DATASETS
dSprites. dSprites is procedurally generated from discrete (3 shapes) and continuous (6 style factors: scale, rotation, and position) latent factors. Based on the uniform distribution of classes, we used a 2-arm cpl-mixVAE with |c| = 3 and |s| = 6 to learn interpretable representations. Results in Table 1 show that our method outperforms the other methods in terms of categorical assignment accuracy. In addition to demonstrating the traversal results (Fig. 2, bottom row), we report disentanglement scores (DS in Table 1). Even though the continuous factors do not depend on the discrete factors in this synthetic dataset, we did not change the architecture and expected the network to infer this independence. For a fair comparison, we used the same disentanglement metric implemented for CascadeVAE (Jeong & Song, 2019).
MNIST. Similarly, due to the uniform distribution of digit labels in MNIST, we again used a 2-arm cpl-mixVAE model. Following the convention (Dupont, 2018; Jeong & Song, 2019; Bouchacourt
et al., 2017), each arm of cpl-mixVAE uses a 10-dimensional categorical variable representing digits (type), and a 10-dimensional continuous random variable representing the writing style (state). Table 1 displays the accuracy of the categorical assignment and the discrepancy between q(c) and p(c) for InfoGAN, two 1-arm VAE methods (JointVAE and CascadeVAE), and cpl-mixVAE with 2 arms. Additionally, to isolate the impact of data augmentation in training, we trained JointVAE† and CascadeVAE† where the models were trained with the same augmented copies of the original MNIST dataset as cpl-mixVAE. The results in Table 1 suggest that data augmentation by itself does not enhance the performance. Fig. 2 (top row) illustrates the continuous latent traversals for four dimensions of the state variable inferred by cpl-mixVAE, where each row corresponds to a different dimension of the categorical variable, and the state variable monotonically changes across columns. Both results in Table 1 and Fig. 2 show that cpl-mixVAE achieved an interpretable mixture representation with the highest categorical assignment accuracy.
Summary. cpl-mixVAE improves the discrete density approximation and infers better mixture representations. It outperforms earlier methods, without using extraneous optimization or heuristic channel capacities. Beyond performance and robustness, its computational cost is also comparable to that of the baselines.
4.2 SINGLE-CELL RNA SEQUENCING DATA
In scRNA-seq data, the observations are individual cells and each observation consists of the expression levels of thousands of genes. Here, we used two scRNA-seq datasets: (i) Smart-seq ALM-VISp (Tasic et al., 2018) and (ii) 10X MOp (Yao et al., 2021). The Smart-seq dataset includes transcriptomic profiles of more than 10,000 genes for ∼22,000 cells from the mouse anterior lateral motor cortex (ALM) and the primary visual cortex (VISp). The 10X MOp dataset profiles ∼123,000 cells in the mouse primary motor cortex (MOp) with the droplet-based 10X Genomics Chromium platform. 10X-based data often display more gene dropouts, especially for genes with lower expression levels. scRNA-seq datasets are significantly more complex than typical benchmark datasets due to (i) the large number of cell types (discrete variable), and (ii) class imbalance; in the 10X MOp dataset, for instance, the most- and the least-abundant cell types include 17,000 and 20 samples, respectively. Moreover, whether the observed diversity corresponds to discrete variability or a continuum is an ongoing debate in neuroscience (Scala et al., 2020). While using genes that are differentially expressed in subsets of cells, known as marker genes (MGs) (Trapnell, 2015), is a common approach to define cell types, the identified genes rarely obey the idealized MG definition in practice. Here, we focus on neuronal cells and use a subset of 5,000 highest-variance genes. The original MG-based studies for each dataset suggested 115 (Smart-seq data) (Tasic et al., 2018) and 140 (10X-based data) (Yao et al., 2021) discrete neuronal types.
Neuron type identification. Based on the suggested taxonomies in (Tasic et al., 2018; Yao et al., 2021), for the Smart-seq ALM-VISp data, we used 115- and 2-dimensional discrete and continuous variables, and for the 10X MOp data, we used 140- and 2-dimensional discrete and continuous latent variables. We compared the suggested cell types in (Tasic et al., 2018; Yao et al., 2021) with the discrete representations that are inferred from VAE models. Table 1 and Fig. 3(a-b) demonstrate the
performance of a 2-arm cpl-mixVAE model against JointVAE and CascadeVAE. In Fig. 3a.1 and 3b.1, we observe that for both datasets, JointVAE succeeds in identifying (sub)classes of neurons, e.g. excitatory class, or Sst subclass, but not neuronal types at leaf nodes. On the other hand, CascadeVAE learns an almost uniform distribution over all types despite a sizeable difference between the relative abundances of neuronal types (Fig. 3a.2 and 3b.2). Our results in Fig. 3(a-b) clearly show that cpl-mixVAE outperforms JointVAE and CascadeVAE in identifying meaningful known cell types. The confusion matrices in Fig. 3a.3 and 3b.3 demonstrate that even the inaccurate categorical assignments of cpl-mixVAE are still close to the matrix diagonals, suggesting a small cophenetic distance. That is, those cells are still assigned to nearby cell types in the dendrogram.
Using A > 2. Unlike the discussed benchmark datasets, the neuronal types are not uniformly distributed. Accordingly, we also investigated the accuracy improvement for categorical assignment when more than two arms are used. Fig. 3c illustrates the accuracy improvement with respect to a single autoencoder model, i.e. JointVAE, in agreement with our theoretical findings.
Identifying genes regulating cell activity. To examine the role of the continuous latent variable, we applied a similar traversal analysis to that used for the benchmark datasets. For a given cell sample and its discrete type, we changed each dimension of the continuous variable using the conditional distribution, and inspected gene expression changes caused by continuous variable alterations. Fig. 4 shows the results of the continuous traversal study for JointVAE and cpl-mixVAE, for two excitatory neurons belonging to the “L5 NP” (cell type (I)) and “L6 CT” (cell type (II)) sub-classes in ALM and MOp regions. Note that here, JointVAE is equivalent to a 1-arm VAE, with the exception of the type dependence of the state variable. Since CascadeVAE did not learn meaningful clustering of cells, even at the subclass level, we did not consider it for the continuous factor analysis. In each sub-figure, the latent traversal is color-mapped to normalized reconstructed expression values, where the y-axis corresponds to one dimension of the continuous variable, and the x-axis corresponds to three gene subsets, namely (i) MGs for the two excitatory types, (ii) immediate early genes (IEGs), and (iii) housekeeping gene (HKG) subgroups (Hrvatin et al., 2018; Tarasenko et al., 2017). For cpl-mixVAE (Fig. 4b), the normalized expression of the reported MGs as indicators for excitatory cell types (discrete factors) is unaffected by changes of identified continuous variables. In contrast, for JointVAE (Fig. 4a), we observed that the normalized expression of some MGs (5 out of 10) are changed due to the continuous factor traversal. Additionally, we found that the expression changes inferred by cpl-mixVAE for IEGs and HKGs are essentially monotonically linked to the continuous variable, confirming that the expression of IEGs and HKGs depends strongly on the cell activity variations under different metabolic and environmental conditions. Conversely, JointVAE
fails to reveal such activity-regulated monotonicity for IEGs and HKGs. Furthermore, our results for cpl-mixVAE reveal that the expression of activity-regulated genes depends on the cell type, i.e. IEGs and HKGs respond differently to activation depending on their cell types (compare rows I and II in Fig. 4b). However, in Fig. 4a, since the baseline JointVAE does not take into account the dependency of discrete and continuous factors, it fails to reveal the dependence of activity-regulated expression to the cell type, and therefore produces identical expressions for both types (I) and (II). These findings are consistent over multiple randomly initialized runs (Supplementary Section K.1). See Supplementary Section K.2 for more results on other cell types and gene subsets.
Summary. The cpl-mixVAE model successfully identified the majority of known excitatory and inhibitory neurons in multiple cortical regions. Our findings suggest that cpl-mixVAE, by acknowledging the dependencies of continuous and categorical factors, captures relevant and interpretable continuous variability that can provide insight when deciphering the molecular mechanisms shaping the landscape of biological states, e.g. due to metabolism or disease.
4.3 ABLATION STUDIES
To elucidate the success of the A-arm VAE framework in mixture modeling, we investigate the categorical assignment performance under different training settings. Since CascadeVAE does not learn the categorical factors by variational inference, here we mainly study JointVAE (as a 1-arm VAE) and cpl-mixVAE (as a 2-arm VAE). In Section 4.1, we show that data augmentation by itself does not enhance the categorical assignment (JointVAE†). To understand whether architectural differences put JointVAE at a disadvantage, we trained JointVAE‡ (Table S1), which uses the same architecture as the one used in cpl-mixVAE. JointVAE‡ uses the same learning procedure as JointVAE, but its convolutional layers are replaced by fully-connected layers (see Supplementary Sections J and L for details). The result for JointVAE‡ suggests that the superiority of cpl-mixVAE is not due to the network architecture either. We also examined the performance changes of the proposed 2-arm cpl-mixVAE under three different settings: (i) cpl-mixVAE∗, where coupled networks are not independent and network parameters are shared; (ii) cpl-mixVAEa, where only affine transformations are used for data augmentation; and (iii) cpl-mixVAE(s ̸| c), where the state variable is independent of the discrete variable (Table S1). Our results show that the proposed cpl-mixVAE obtained the best categorical assignments across all training settings. We also examined the accuracy of categorical assignments for the cpl-mixVAE model, under different dimensions of the discrete latent variable, for both MNIST and scRNA-seq datasets (see Supplementary Section J). We experimentally observe that while JointVAE suffers from sensitivity to empirical choices of |c|, cpl-mixVAE is more robust in encoding the discrete variability, without suffering from mode collapse (Fig. S9 and Fig. S10).
5 CONCLUSION
We have proposed cpl-mixVAE as a multi-arm framework to apply the power of collective decision making in unsupervised joint representation learning of discrete and continuous factors, scalable to the high-dimensional discrete space. This framework utilizes multiple pairwise-coupled autoencoding arms with a shared categorical variable, while independently learning the continuous variables. Our experimental results for all datasets support the theoretical findings, and show that cpl-mixVAE outperforms comparable models. Importantly, for the challenging scRNA-seq datasets, we showed that the proposed framework identifies biologically interpretable cell types and differentiates between type-dependent and activity-regulated genes. | 1. What is the focus of the paper regarding VAEs?
2. What are the strengths and weaknesses of the proposed method compared to prior works?
3. How does the reviewer assess the results presented in Table 1, particularly for MNIST?
4. Why is the accuracy low on MNIST according to the reviewer?
5. Is there any concern or suggestion regarding the reference of relevant works in the paper's introduction? | Summary Of The Paper
Review | Summary Of The Paper
The authors consider the problem of factorization of the hidden space of a VAE into two separate components Z=(S,C) where S is continuous and C is discrete. They make a particular assumption about the factorization of the encoder function q(S, C|X)=q(S|X)q(C|X) and they also take a mixture of “A” experts that are made to agree with their discrete assignment C.
Review
The paper compares the proposed method with a few similar methods, using MNIST but also using a dataset for single-cell RNA sequencing data (a domain with which I am not familiar). I am also not familiar with how much efforts are currently being put into this approach of factorizing Z=(S,C) by other members in the field.
My main question for the authors is to ask about the results in table 1 when it comes to MNIST: do the std measurements indicate a spread over the multiple experimental runs, and why is the accuracy so low on MNIST? MNIST is the kind of dataset where a linear classifier can get 90% accuracy, yet all these methods presented fare much worse.
Minor issue: I think it would be appropriate to mention Kingma & Welling in the background section when VAE are first mentioned in the paper instead of in the later Section 2 when they are described. The same paper can be referenced twice, but at least it should be referenced in the introduction in conjunction to the other papers on VAEs that the authors wish to mention. |
ICLR | Title
Towards Effective and Interpretable Human-Agent Collaboration in MOBA Games: A Communication Perspective
Abstract
MOBA games, e.g., Dota2 and Honor of Kings, have been actively used as the testbed for the recent AI research on games, and various AI systems have been developed at the human level so far. However, these AI systems mainly focus on how to compete with humans, less on exploring how to collaborate with humans. To this end, this paper makes the first attempt to investigate human-agent collaboration in MOBA games. In this paper, we propose to enable humans and agents to collaborate through explicit communication by designing an efficient and interpretable Meta-Command Communication-based framework, dubbed MCC, for accomplishing effective human-agent collaboration in MOBA games. The MCC framework consists of two pivotal modules: 1) an interpretable communication protocol, i.e., the Meta-Command, to bridge the communication gap between humans and agents; 2) a meta-command value estimator, i.e., the Meta-Command Selector, to select a valuable meta-command for each agent to achieve effective human-agent collaboration. Experimental results in Honor of Kings demonstrate that MCC agents can collaborate reasonably well with human teammates and even generalize to collaborate with different levels and numbers of human teammates. Videos are available at https://sites.google.com/view/mcc-demo.
1 INTRODUCTION
Games, as the microcosm of real-world problems, have been widely used as testbeds to evaluate the performance of Artificial Intelligence (AI) techniques for decades. Recently, many researchers focus on developing various human-level AI systems for complex games, such as board games like Go (Silver et al., 2016; 2017), Real-Time Strategy (RTS) games like StarCraft 2 (Vinyals et al., 2019), and Multi-player Online Battle Arena (MOBA) games like Dota 2 (OpenAI et al., 2019). However, these AI systems mainly focus on how to compete instead of collaborating with humans, leaving Human-Agent Collaboration (HAC) in complex environments still to be investigated. In this paper, we study the HAC problem in complex MOBA games (Silva & Chaimowicz, 2017), which is characterized by multi-agent cooperation and competition mechanisms, long time horizons, enormous state-action spaces (1020000), and imperfect information (OpenAI et al., 2019; Ye et al., 2020a).
HAC requires the agent to collaborate reasonably with various human partners (Dafoe et al., 2020). One straightforward approach is to improve the generalization of agents, that is, to collaborate with a sufficiently diverse population of teammates during training. Recently, some population-based methods proposed to improve the generalization of agents by constructing a diverse population of partners in different ways, succeeding in video games (Jaderberg et al., 2017; 2019; Carroll et al., 2019; Strouse et al., 2021) and card games (Hu et al., 2020; Andrei et al., 2021). Furthermore, to better evaluate HAC agents, several objective as well as subjective metrics have been proposed (Du et al., 2020; Siu et al., 2021; McKee et al., 2022). However, the policy space in complex MOBA
*These authors contributed equally to this work.
games is enormous (Gao et al., 2021) and requires massive computing resources to build a sufficiently diverse population of agents, posing a big obstacle to the scalability of these methods.
The communication ability to explicitly share information with others is important for agents to collaborate effectively with humans (Dafoe et al., 2020). In Multi-Agent Reinforcement Learning (MARL), communication is often used to improve inter-agent collaboration. Previous work (Sukhbaatar et al., 2016; Foerster et al., 2016; Lazaridou et al., 2016; Peng et al., 2017; Mordatch & Abbeel, 2018; Singh et al., 2018; Das et al., 2019; Wang et al., 2020) mainly focused on exploring communication protocols between multiple agents. Other work (Ghavamzadeh & Mahadevan, 2004; Jiang & Lu, 2018; Kim et al., 2019) proposed to model the value of multi-agent communication for effective collaboration. However, these methods all model communication in latent spaces without considering the human-interpretable common ground (Clark & Brennan, 1991; Stalnaker, 2002) or lingua franca (Kambhampati et al., 2022), making themselves less interpretable to humans. Explicit communication dominated by natural language is often considered in human-robot interaction (Kartoun et al., 2010; Liu et al., 2019; Shafti et al., 2020; Gupta et al., 2021). However, these studies are mainly limited to collaboration between a robot and a human through one-way communication, i.e., humans give robots orders. Therefore, there is still much room to study RL with human participation.
Success in MOBA games requires subtle individual micro-operations and excellent communication and collaboration among teammates on macro-strategies, i.e., long-term intentions (Wu, 2019; Gao et al., 2021). The micro-operation ability of the existing State-Of-The-Art (SOTA) MOBA agents has exceeded the high-level (top 1%) humans (Ye et al., 2020a). However, these agents’ macro-strategies are deterministic and quite different from those of humans (Ye et al., 2020a). Moreover, all existing SOTA MOBA AI systems lack bridges for explicit communication between agents and humans on macro-strategies. These result in the agent’s behavior not being understood immediately by humans (Ye et al., 2020a) and not performing well when collaborating with humans (see Section 4.3).
To this end, we propose an efficient and interpretable Meta-Command Communication-based human-agent collaboration framework, dubbed MCC, to achieve effective HAC in MOBA games through explicit communication. First, we design an interpretable communication protocol, i.e., the Meta-Command, as a general representation of macro-strategies to bridge the communication gap between agents and humans. Both macro-strategies sent by humans and messages outputted by agents can be converted into unified meta-commands (see Figure 1(b)). Second, following Gao et al. (2021), we construct a hierarchical model that includes the command encoding network (macro-strategy layer) and the meta-command conditioned action network (micro-action layer), used for agents to generate and execute meta-commands, respectively. Third, we propose a meta-command value estimator, i.e., the Meta-Command Selector, to select the optimal meta-command for each agent to execute. The training process of the MCC agent consists of three phases. We first train the command encoding network to ensure that the agent learns the distribution of meta-commands sent by humans. Afterward, we train the meta-command conditioned action network to ensure that the agent learns to execute meta-commands. Finally, we train the meta-command selector to ensure that the agent learns to select the optimal meta-commands to execute. We train and evaluate the agent in Honor of Kings 5v5 mode with a full hero pool (over 100 heroes). Experimental results demonstrate the effectiveness of the MCC framework. In general, our contributions are as follows:
• To the best of our knowledge, we are the first to investigate the HAC problem in MOBA games. We propose the MCC framework to achieve effective HAC in MOBA games.
• We design the Meta-Command to bridge the communication gap between humans and agents. We also propose the Meta-Command Selector to model the agent’s value system for meta-commands.
• We introduce the training process of the MCC agent in a typical MOBA game Honor of Kings and evaluate it in practical human-agent collaboration tests. Experimental results show that the MCC agent can reasonably collaborate with different levels and numbers of human teammates.
2 BACKGROUND
2.1 MOBA GAMES
MOBA games have recently received much attention from researchers, especially Honor of Kings (Wu, 2019; Ye et al., 2020a;b;c; Gao et al., 2021), one of the most popular MOBA games worldwide. The gameplay is to divide ten players into two camps to compete on the same map. The game environment is shown in Figure 1(a). Each camp competes for resources through individual micro-operations (A1, A2) and team collaboration on macro-strategies (B), and finally wins the game by destroying the enemy’s crystal. Players can communicate and collaborate with teammates through the in-game signaling system. Particularly, players can send macro-strategies by dragging signal buttons (C1, C2) to the corresponding locations in the mini-map (D), and these signals display to teammates in the mini-map (E). See Appendix A for detailed game introductions.
2.2 HUMAN-AGENT COLLABORATION
We consider an interpretable communicative human-agent collaboration task, which can be extended from the Partially Observable Markov Decision Process (POMDP) and formulated as a tuple ⟨N, H, S, A^N, A^H, O, M, r, P, γ⟩, where N and H represent the numbers of agents and humans, respectively. S is the space of global states. A^N = {A^N_i}_{i=1,...,N} and A^H = {A^H_i}_{i=1,...,H} denote the spaces of actions of the N agents and H humans, respectively. O = {O_i}_{i=1,...,N+H} denotes the space of observations of the N agents and H humans. M represents the space of interpretable messages. P : S × A^N × A^H → S and r : S × A^N × A^H → R denote the shared state transition probability function and reward function of the N agents, respectively. Note that r includes both individual rewards and team rewards. γ ∈ [0, 1) denotes the discount factor. For each agent i in state s_t ∈ S, it receives an observation o^i_t ∈ O_i and a selected message c^i_t ∈ M, and then outputs an action a^i_t = π_θ(o^i_t, c^i_t) ∈ A^N_i and a new message m^i_{t+1} = π_φ(o^i_t) ∈ M, where π_θ and π_φ are the action network and the message encoding network, respectively. A message selector c^i_t = π_ω(o^i_t, C_t) is introduced to select a message c^i_t from a message set C_t = {m^i_t}_{i=1,...,N+H} ⊂ M. We divide the HAC problem in MOBA games into the Human-to-Agent (H2A) and the Agent-to-Human (A2H) scenarios. The H2A Scenario: Humans send their macro-strategies as messages to agents, and agents select the optimal one to collaborate with humans based on their value systems. The A2H Scenario: Agents send their messages as macro-strategies to humans, and humans select the optimal one to collaborate with agents based on their value systems. The goal of both scenarios is that agents and humans communicate macro-strategies with pre-defined communication protocols and then select valuable macro-strategies for effective collaboration to win the game.
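A minimal sketch of one decision step under this formulation, showing only how the three networks interact (all three are placeholders for learned policies), is:

```python
def agent_step(obs, candidate_messages, pi_theta, pi_phi, pi_omega):
    """One time step of agent i in the communicative HAC formulation.

    obs:                o_t^i, the agent's partial observation.
    candidate_messages: C_t, the interpretable messages from all agents and humans.
    pi_omega:           message selector,  c_t^i = pi_omega(o_t^i, C_t).
    pi_theta:           action network,    a_t^i = pi_theta(o_t^i, c_t^i).
    pi_phi:             message encoder,   m_{t+1}^i = pi_phi(o_t^i).
    """
    selected = pi_omega(obs, candidate_messages)   # choose which message to act on
    action = pi_theta(obs, selected)               # act conditioned on that message
    new_message = pi_phi(obs)                      # emit a new message for teammates
    return action, new_message
```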
3 META-COMMAND COMMUNICATION-BASED FRAMEWORK
In this section, we present the proposed MCC framework in detail. We first briefly describe three key stages of the MCC framework (Section 3.1). Then we introduce its two pivotal modules: 1) an interpretable communication protocol, i.e., the Meta-Command, as a general representation of macro-strategies to bridge the communication gap between agents and humans (Section 3.2); 2) a meta-command value estimator, i.e., the Meta-Command Selector, to model the agent’s value system for meta-commands to achieve effective HAC in MOBA games (Section 3.3).
3.1 OVERVIEW
The process of the MCC framework consists of three stages: (I) the Meta-Command Conversion Stage, (II) the Meta-Command Communication Stage, and (III) the Human-Agent Collaboration
Stage, as shown in Figure 2. Notably, Stages I and II are executed at each communication step, and Stage III is executed at each time step. At Stage I, the MCC framework converts humans' explicit messages and agents' implicit messages into unified meta-commands m^H_t and m_t, respectively, and broadcasts them to all agents and humans. At Stage II, the MCC framework estimates the values of all received meta-commands C_t and selects the optimal one c_t ∈ C_t for each agent to execute. The selected meta-command remains unchanged between any two communication steps (i.e., within [t, t + T_mc) time steps). At Stage III, the MCC framework predicts a sequence of actions for each agent to perform based on its selected meta-command c_t. In each game, humans and agents collaborate multiple times, that is, execute the three stages multiple times, to win the game.
3.2 META-COMMAND
We divide a macro-strategy into three components: where to go, what to do, and how long. For example, a macro-strategy can be Come And Kill The Dragon, which consists of Come To The Dragon Location (where to go), Attack The Dragon (what to do), and Until The Dragon Is Killed (how long). Thus, a general representation of macro-strategies, i.e., the Meta-Command, can be formulated as a tuple < L,E, Tmc >, as shown in Figure 1(b), where L is the Location to go, E is the Event to do after reaching L, and Tmc is the Time Limit for executing the meta-command.
Meta-Command Conversion. To realize bidirectional interpretable human-agent communication, the MCC framework converts humans' explicit messages and agents' implicit messages into unified meta-commands. To achieve the former, we use the Command Converter function f^cc (Appendix B.6.1) to extract the corresponding location L^H and event E^H from explicit messages sent by humans in the in-game signaling system. To achieve the latter, we use a command encoder network (CEN) π_φ(m|o) to generate L and E based on the agent's observation o. The CEN is trained via supervised learning (SL) with the goal of learning the distribution of meta-commands sent by humans (Appendix B.6.2). In MOBA game settings, we use a common location description, i.e., divide L of meta-commands in the map into 144 grids. Since the macro-strategy space is enormous (Gao et al., 2021), customizing corresponding rewards for each specific event to train the agent is not conducive to generalization and is even impossible. Instead, we train a micro-action network to learn to do the optimal event E* at location L, just as humans do optimal micro-operations at location L based on their own value systems. We also do not specify a precise T_mc for the execution of each specific meta-command. Instead, we set T_mc to how long it takes a human to complete a macro-strategy in MOBA games. Usually, 20 seconds corresponds to an 80% completion rate, based on our statistics (Appendix B.6.2). Thus, the MCC framework converts humans' explicit messages into meta-commands m^H = ⟨L^H, E^H, T_mc⟩, generates meta-commands m = ⟨L, E*, T_mc⟩ for agents based on their observations (Figure 2(I)), and then broadcasts them to all agents and humans.
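A minimal sketch of how a map location can be mapped to one of the 144 grids follows; a 12 × 12 uniform partition of the map is assumed here purely for illustration, as the exact partition scheme is not specified above.

```python
def location_to_grid(x, y, map_width, map_height, n_side=12):
    """Map a continuous map coordinate (x, y) to one of n_side * n_side grid cells."""
    col = min(int(x / map_width * n_side), n_side - 1)
    row = min(int(y / map_height * n_side), n_side - 1)
    return row * n_side + col   # grid index in [0, n_side * n_side - 1]
```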
Meta-Command Execution. After receiving a meta-command candidate set, agents can select one meta-command from it to execute. Note that the MCC framework will replace E^H with E* when the agent selects a meta-command from humans. We adopt the meta-command conditioned action network (MCCAN) π_θ(a|o, m) for agents to perform actions based on the selected meta-command, as shown in Figure 3(a)(II). The MCCAN is trained via self-play RL with the goal of achieving a high completion rate for the meta-commands while ensuring that the win rate is not reduced. To achieve this, we introduce extrinsic rewards r (including individual and team rewards) and intrinsic rewards

r^int_t(s_t, m_t, s_{t+1}) = |f^ce(s_t) − m_t| − |f^ce(s_{t+1}) − m_t|,

where f^ce extracts the agent's location from state s_t, and |f^ce(s_t) − m_t| is the distance between the agent's location and the meta-command's location at time step t. Intuitively, the intrinsic rewards are adopted to guide the agent to reach L of the meta-command and stay at L to do some event E. The extrinsic rewards are adopted to guide the agent to perform optimal actions to reach L and to do the optimal event E* at L. Overall, the optimization objective is to maximize the expected extrinsic and intrinsic discounted total reward

G_t = E_{s∼d^{π_θ}, a∼π_θ}[ Σ_{i=0}^{∞} γ^i r_{t+i} + α Σ_{j=0}^{T_mc} γ^j r^int_{t+j} ],

where d^π(s) = lim_{t→∞} P(s_t = s | s_0, π) is the probability of being in state s when following π for t time steps from s_0. We use α to weigh the intrinsic and extrinsic rewards.
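A minimal sketch of this reward shaping is given below; Euclidean distance on map coordinates and a per-window reward list are assumptions made for illustration.

```python
import math

def intrinsic_reward(agent_loc_t, agent_loc_t1, command_loc):
    """r^int_t = |f^ce(s_t) - m_t| - |f^ce(s_{t+1}) - m_t|: positive when the agent
    moves closer to the meta-command location L between consecutive steps."""
    return math.dist(agent_loc_t, command_loc) - math.dist(agent_loc_t1, command_loc)

def shaped_return(ext_rewards, int_rewards, gamma, alpha, t_mc):
    """Discounted total of G_t: extrinsic rewards plus alpha-weighted intrinsic
    rewards accumulated over the T_mc-step meta-command window."""
    ext = sum(gamma ** i * r for i, r in enumerate(ext_rewards))
    intr = sum(gamma ** j * r for j, r in enumerate(int_rewards[: t_mc + 1]))
    return ext + alpha * intr
```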
After training the CEN and the MCCAN, we can achieve HAC by simply setting an agent to randomly select a meta-command derived from humans to execute. However, such collaboration is unintelligent and can even be a disaster for game victory, because agents have no mechanism to model meta-commands' values and cannot choose the optimal meta-command to execute. In contrast, humans usually choose the optimal one based on their value systems to achieve effective collaboration and win the game. Thus, we propose a meta-command value estimator to model the agent's value system for meta-commands, as described in the following subsection.
3.3 META-COMMAND SELECTOR
In MOBA games, the same macro-strategy often has different values for different humans in different situations. For example, a macro-strategy can be Come And Kill The Dragon, as shown in Figure 1(b). It is more valuable for humans A and B and agent D to collaborate. However, another macro-strategy Clean Up Top-Lane Minions is more valuable for human C and agent E than the others. Therefore, agents must select the most valuable meta-command from the received meta-command candidate set C to achieve effective human-agent collaboration. We propose a meta-command value estimator, i.e., the Meta-Command Selector (CS) πω(o, C), to estimate the values of all received meta-commands and select the most valuable one for each agent to execute.
CS Optimization Objective. Typically, executing a meta-command involves reaching location L and doing event E, of which the latter is more important to the value of the meta-command. For example, for the meta-command Come And Kill The Dragon, if the event Kill The Dragon cannot be done within T_mc time steps, then it is pointless to Come To The Dragon. Thus, the optimization objective of CS is to select the optimal meta-command m*_t = π_ω(o_t, C_t) for each agent to maximize the expected discounted meta-command execution return G^mc_t,

G^mc_t = E_{s∼d^{π_θ}, m∼π_ω, a∼π_θ}[ Σ_{i=0}^{∞} γ_mc^i R^mc_{t+i·T_mc} ],   R^mc_t = Σ_{i=0}^{T_L} r_{t+i}  (I)  +  β Σ_{j=T_L}^{T_mc} r_{t+j}  (II),
where o_t ∈ O, C_t is the meta-command candidate set in state s_t, γ_mc ∈ [0, 1) is the discount factor, and R^mc_t is a generalized meta-command execution reward function. In R^mc_t, terms (I) and (II) are the total extrinsic rewards r accumulated before reaching location L and while doing event E, respectively. T_L ≤ T_mc is the time for reaching L, and β > 1 is a trade-off parameter representing the relative importance of E.
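A minimal sketch of this reward, assuming the per-step extrinsic rewards over one command window are collected in a list and mirroring the index conventions of R^mc_t above:

```python
def meta_command_execution_reward(rewards, t_reach, beta):
    """R^mc_t: rewards up to reaching L (term I) plus beta-weighted rewards
    from T_L to T_mc while doing event E (term II).

    rewards: extrinsic rewards r_t, ..., r_{t+T_mc} for one meta-command window.
    t_reach: T_L, the step at which location L is reached (T_L <= T_mc).
    beta:    trade-off (> 1) stressing the event part over the travel part.
    """
    part_i = sum(rewards[: t_reach + 1])   # sum_{i=0}^{T_L} r_{t+i}
    part_ii = sum(rewards[t_reach:])       # sum_{j=T_L}^{T_mc} r_{t+j}
    return part_i + beta * part_ii
```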
CS Training Process. We construct a self-play training environment for CS, where agents can send messages to each other, as shown in Figure 3(a)(III). Specifically, three tricks are adopted to increase the sample efficiency while ensuring efficient exploration. First, each meta-command m is sampled with the argmax rule from the results outputted by the pre-trained CEN. Second, each agent sends its meta-command with a probability p every Tmc time steps. Finally, each agent selects the final meta-command c sampled with the softmax rule from its CS output results and hands it over to the pre-trained MCCAN for execution. We use the multi-head value mechanism (Ye et al., 2020a) to model the value of the meta-command execution, which can be formulated as:
L_V(ω) = E_{O,C}[ Σ_{head_k} ‖G^mc_k − V^k_ω(O, C)‖² ],

where V^k_ω(O, C) is the value of the k-th head. For DQN-based methods (Mnih et al., 2015; Van Hasselt et al., 2016; Wang et al., 2016), the Q loss is:

L_Q(ω) = E_{O,C,M}[ ‖G_total − Q_ω(O, C, M)‖² ],   G_total = Σ_{head_k} w_k G^mc_k,

where w_k is the weight of the k-th head and G^mc_k is the Temporal Difference (TD) estimated value error R^mc_k + γ_mc V^k_ω(O′, C′) − V^k_ω(O, C).

CS Model Structure. We design a general network structure for CS, as shown in Figure 3(b). In MOBA games, the meta-commands in adjacent regions have similar values. Thus, we divide the meta-commands in the map into grids, a common location description for MOBA games, and use a shared Convolutional Neural Network (CNN) to extract region-related information to improve the generalization of CS to adjacent meta-commands. Then, the map embeddings of all received meta-commands are integrated into a map set embedding by max-pooling. Besides, we use the gating mechanism (Liu et al., 2021) to fuse the map set embedding and the state embedding of the observation information. Finally, to directly construct the relationship between the observation information and each meta-command, we introduce a target attention module, where the query is the fused embedding and the key is the map embedding of each meta-command. The fused embedding is fed into the subsequent state-action value network Q(o, C, m) and state value network V(o, C) of CS. In this way, we can also easily convert the state-action value network Q(o, C, m) to the policy network π(m|o, C). Thus, the CS model structure can be easily applied to the most popular RL algorithms, such as PPO (Schulman et al., 2017), DQN (Mnih et al., 2015), etc.
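A minimal PyTorch-style sketch of this structure follows. Layer sizes, the grid resolution, and the exact form of the attention and gating are assumptions made for illustration; only the overall flow (shared CNN over per-command maps, max-pooling to a set embedding, gated fusion with the state embedding, and target attention over commands) follows the description above.

```python
import torch
import torch.nn as nn

class CommandSelector(nn.Module):
    """Illustrative sketch of the CS structure in Figure 3(b)."""

    def __init__(self, obs_dim, d=64):
        super().__init__()
        self.cnn = nn.Sequential(                       # shared CNN over command grid maps
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(16 * 4 * 4, d),
        )
        self.state_mlp = nn.Sequential(nn.Linear(obs_dim, d), nn.ReLU(), nn.Linear(d, d))
        self.gate = nn.Linear(2 * d, d)                 # gating fusion of set/state embeddings
        self.q_head = nn.Linear(d, 1)                   # per-command value Q(o, C, m)
        self.v_head = nn.Linear(d, 1)                   # state value V(o, C)

    def forward(self, obs, command_maps):
        # obs: (obs_dim,) observation features; command_maps: (n_cmds, grid, grid)
        cmd_emb = self.cnn(command_maps.unsqueeze(1))            # (n_cmds, d) map embeddings
        set_emb = cmd_emb.max(dim=0).values                      # max-pool over the command set
        state_emb = self.state_mlp(obs)                          # (d,) state embedding
        g = torch.sigmoid(self.gate(torch.cat([set_emb, state_emb])))
        fused = g * set_emb + (1 - g) * state_emb                # gated fusion
        # Target attention: fused embedding as query, command embeddings as keys
        scores = cmd_emb @ fused / cmd_emb.shape[-1] ** 0.5      # (n_cmds,)
        attn = torch.softmax(scores, dim=0)
        q_values = self.q_head(cmd_emb * fused).squeeze(-1)      # per-command values
        value = self.v_head((attn.unsqueeze(-1) * cmd_emb).sum(0))
        return q_values, value
```

At execution time, an agent would take the meta-command with the highest per-command value (or sample via softmax during training, as described below).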
4 EXPERIMENTS
In this section, we evaluate the proposed MCC framework by performing both agent-only and humanagent experiments in Honor of Kings. All experiments were conducted in the 5v5 mode with a full hero pool (over 100 heroes, see Appendix A.4).
4.1 EXPERIMENTAL SETUP
Due to the complexity of MOBA games and limited resources, we train the CEN, the MCCAN, and the CS sequentially instead of training the MCC framework jointly. Specifically, we first train the CEN via SL until convergence, which takes 26 hours using 8 NVIDIA P40 GPUs. The batch size of each GPU is set to 512. Then, we train the MCCAN by fine-tuning the pre-trained WuKong model (Ye et al., 2020a) conditioned on the meta-command sampled from the pre-trained CEN. The MCCAN is trained until convergence, which takes 48 hours using a physical computer cluster with 63,000 CPUs and 560 NVIDIA V100 GPUs. The batch size of each GPU is set to 256. The parameter α is set to 16. After that, we train the CS via self-play until convergence, which takes 24 hours using a physical computer cluster with 70,000 CPUs and 680 NVIDIA V100 GPUs. The batch size of each GPU is set to 256. The parameter β is set to 2. Each agent sends a meta-command with a probability p of 0.8 and an interval T_mc of 20s, as shown in Figure 4(a). For the entire training process of the MCC framework, the location L of meta-commands in the game map is divided into 144 grids, and the time limit T_mc for the meta-command execution is set to 20s. Finally, we obtain the trained MCC agent that can receive meta-commands from other agents and humans and select the most valuable one to execute.
To evaluate the performance of the MCC agent, we conduct both agent-only and human-agent experiments and compare the MCC agent with three different types of agents: the MC-Base agent (agent only executes its own meta-command without communication), the MC-Rand agent (agent randomly selects a meta-command to execute), and the MC-Rule agent (agent selects the nearest meta-command to execute). Note that the MC-Base agent can be considered as a State-Of-The-Art (SOTA) in Honor of Kings since it maintains the same capabilities as the WuKong agent (Ye et al., 2020a) (Appendix B.6.3). Results are reported over five random seeds.
4.2 AGENT-ONLY COLLABORATION
Directly evaluating agents with humans is expensive, which is not conducive to model selection and iteration. Instead, we built two agent-only testing environments, Test I and Test II, to evaluate agents, as shown in Figure 4(b). Test I is a complex environment where all agent teammates can send and receive meta-commands simultaneously with an interval of 20s. Test I evaluates the agents’ performance under complex situations. Test II is a simple environment to simulate most practical game scenarios, where at most one human can send his/her macro-strategy at a time step. Thus, in Test II, only one agent is randomly selected to send its meta-command with an interval of 20s, and the other agents only receive meta-commands. See the detailed experimental results of the CEN and MCCAN in Appendixes B.6.2 and B.6.3, respectively.
Finding 1: MCC outperforms all baselines.
We first evaluate the capabilities of the MCC agent and baselines to examine the effectiveness of CS. Figure 5(a) and (b) show the win rates (WRs) of four types of agent teams who play against each other for 600 matches in Test I and Test II, respectively. Figure 5(c) demonstrates the final Elo scores (Coulom, 2008) of these agents. We see that the MCC agent team significantly outperforms the MC-Rand and MC-Rule agent teams. This indicates that compared to selecting meta-commands randomly or by specified rules, the CS can select valuable meta-commands for agents to execute, resulting in effective collaboration. Such collaboration manners in the MCC agent team can even be conducive to winning the game, bringing about 10% WR improvement against the MC-Base agent team. On the contrary, the unreasonable collaboration manners in the MC-Rand and MC-Rule agent teams can hurt performance, leading to significant decreases in the WR against the MC-Base agent team. Note that the MC-Base agent has the same capabilities as the WuKong agent (Ye et al., 2020a), the SOTA in Honor of Kings. Overall, the MCC agent achieves the highest Elo scores compared to all baselines in both testing environments, validating the effectiveness of CS. Notably, we also find that the WRs of the MCC agent in Test I and Test II are close, suggesting that the MCC agent can generalize to different numbers of meta-commands. We also investigate the influence of different components, including CNN feature extraction with the gating mechanism (Liu et al., 2021), target attention module, and optimization algorithms on the performance of CS (Appendix B.6.4).
4.3 HUMAN-AGENT COLLABORATION
In this section, we conduct an online experiment to evaluate the MCC agent and baselines in collaborating with humans, as shown in Figure 4(c). We contacted the game provider and got a
test authorization. The game provider helped us recruit 30 experienced participants with personal information stripped, including 15 high-level (top1%) and 15 general-level (top30%) participants. We used a within-participant design: m Human + n Agent (mH + nA) team mode to evaluate the performance of agents teaming up with different numbers of participants, where m+ n = 5. This design allowed us to evaluate both objective performances as well as subjective preferences.
All participants read detailed guidelines and provided informed consent before the testing. Participants tested 20 matches for the 1H + 4A team mode. High-level participants tested an additional 10 matches for the 2H + 3A and the 3H + 2A team modes, respectively. After each test, participants reported their preference over the agent teammates. For fair comparisons, participants were not told the type of their agent teammates. The MC-Base agent team was adopted as the fixed opponent for all tests. To eliminate the effects of collaboration between agents, we prohibited communication between agents, so the agents could only communicate with their human teammates. See Appendix C for additional experimental details, including experimental design, result analysis, and ethical review.
Finding 1: Human-MCC team achieves the highest WR across team modes and human levels.
We first compare the human-agent team objective performance metrics supported by the MCC agent and baselines, as shown in Table 1. We see that the human-MCC team significantly outperforms all other human-agent teams across different team modes and human levels. This indicates that the MCC agent can generalize to different levels and numbers of human teammates. Note that the SOTA agent can easily beat the high-level human players (Nair, 2019; Chen, 2021). So as the number of participants increases, the WRs of all human-agent teams decrease. Surprisingly, the WR increased significantly when the participants teamed up with the MCC agent. We suspect that the human-MCC team has also achieved effective communication and collaboration on macro-strategies. To verify this, we count the Response Rates (RRs) of agents and participants to the meta-commands sent from their teammates, as shown in Table 2. We find that the RRs of the MCC agents to high-level participants (73.05%) and the high-level participants to the MCC agents (78.5%) are close to the RR of high-level participants themselves (74.91%). This suggests that the CS is close to the value system of high-level humans. Besides, the RRs of participants to the MCC agents (73.43% and 78.5%) are higher than those of the MC-Rand agents (41.07% and 35.69%), indicating that participants collaborated with the MCC agents more often and more effectively.
Finding 2: Participants prefer MCC over all baselines.
We then compare the subjective preference metrics, i.e., the Reasonableness of H2A, the Reasonableness of A2H, and the Overall Preference, reported by participants over their agent teammates, as
shown in Figure 6. Participants believed that the MCC agent responded more reasonably and gave the highest score in the Reasonableness of the H2A metric (Figure 6(a)). Besides, participants also believed that the meta-commands sent by the MCC agent are more aligned with their own value system and rated the MCC agent much better than the MC-Rand agent in the Reasonableness of A2H metric (Figure 6(b)). In general, participants were satisfied with the MCC agent over the other agents and gave the highest score in the Overall Preference metric (Figure 6(c)). The results of these subjective preference metrics are also consistent with the results of objective performance metrics.
4.4 COLLABORATIVE INTERPRETABILITY ANALYSIS
To better understand how the MCC agents and humans can collaborate effectively and interpretably, we visualize and compare the value systems of the CS and of high-level participants on a game scene in which three meta-commands exist, as shown in Figure 7. We see that the CS selects the meta-command B for the two heroes in the red dashed box to collaborate, selects the meta-command C for the two heroes in the purple dashed box to collaborate, and selects the meta-command A for the remaining hero to execute alone. The CS selection results are consistent with the ranking results of high-level participants, confirming the effectiveness of the collaboration behaviors between the MCC agents and humans. Such collaboration enhances the human interpretability of agent behavior.
5 CONCLUSION
In this work, we proposed an efficient and interpretable Meta-Command Communication-based framework, dubbed MCC, to achieve effective human-agent collaboration in MOBA games. To bridge the communication gap between humans and agents, we designed the Meta-Command - a common ground between humans and agents for bidirectional communication. To achieve effective collaboration, we constructed the Meta-Command Selector - a value estimator for agents to select valuable meta-commands to collaborate with humans. Experimental results in Honor of Kings demonstrate that MCC significantly outperforms all baselines in both objective performance and subjective preferences, and it could even generalize to different levels and numbers of humans.
A ENVIRONMENT DETAILS
A.1 GAME INTRODUCTION
Honor of Kings is one of the most popular MOBA games worldwide. The gameplay is to divide ten players into two camps to compete on the same symmetrical map. Players of each camp compete for resources through online confrontation, team collaboration, etc., and finally win the game by destroying the enemy’s crystal. The behaviors performed by players in the game can be divided into two categories: macro-strategies and micro-operations. Macro-strategy is long-distance scheduling or collaborating with teammates for quick resource competition, such as long-distance support for teammates, collaborating to compete for monster resources, etc. Micro-operation is the real-time behavior adopted by each player in various scenarios, such as skill combo release, evading enemy skills, etc. Complicated game maps, diverse hero combinations, diverse equipment combinations, and diverse player tactics make MOBA games extremely complex and exploratory.
A.2 GAME ENVIRONMENT
Figure 8 shows the UI interface of Honor of Kings. For fair comparisons, all experiments in this paper were carried out using a fixed released game engine version (Version 3.73 series) of Honor of Kings.
A.3 IN-GAME SIGNALING SYSTEM
Figure 9 demonstrates the in-game signaling system of Honor of Kings. Players can communicate and collaborate with teammates through the in-game signaling system. In the Human-Agent Collaboration Test, humans can send macro-strategies to agents through signals like A in Figure 9, and these signals are displayed to teammates in the form of D. The MCC framework converts these explicit messages, i.e., signals, into meta-commands by the hand-crafted command converter function f cc and broadcasts them to all agent teammates. Moreover, the MCC framework can also convert the meta-commands sent by agents into signals by the inverse of f cc and broadcast them to all human teammates.
Voice (B.2) and text (B.1 and B.3) are two other forms of communication. In the future, we will consider introducing a general meta-command encoding model that can handle all forms of explicit messages (signals, voice, and text).
A.4 HERO POOL
Table 3 shows the full hero pool and the 20-hero pool used in the experiments. Each match involves two camps playing against each other, and each camp consists of five randomly picked heroes.
B FRAMEWORK DETAILS
B.1 INFRASTRUCTURE DESIGN
Figure 10 shows the infrastructure of the training system (Ye et al., 2020a), which consists of four key components: AI Server, Inference Server, RL Learner, and Memory Pool. The AI Server (the actor) covers the interaction logic between the agents and the environment. The Inference Server is used for the centralized batch inference on the GPU side. The RL Learner (the learner) is a distributed training environment for RL models. And the Memory Pool is used for storing the experience, implemented as a memory-efficient circular queue.
Training complex game AI systems often requires a large number of computing resources, such as AlphaGo Lee Sedol (280 GPUs), OpenAI Five Final (1920 GPUs), and AlphaStar Final (3072 TPUv3 cores). We also use hundreds of GPUs for training the agents. A worthwhile research direction is to improve resource utilization so that such systems can be trained with fewer computing resources.
B.2 REWARD DESIGN
Table 4 demonstrates the details of the designed environment reward.
Table 4: The details of the environment reward.

Head | Reward Item | Weight | Type | Description
Farming Related | Gold | 0.005 | Dense | The gold gained.
Farming Related | Experience | 0.001 | Dense | The experience gained.
Farming Related | Mana | 0.05 | Dense | The rate of mana (to the fourth power).
Farming Related | No-op | -0.00001 | Dense | Stop and do nothing.
Farming Related | Attack monster | 0.1 | Sparse | Attack monster.
KDA Related | Kill | 1 | Sparse | Kill an enemy hero.
KDA Related | Death | -1 | Sparse | Being killed.
KDA Related | Assist | 1 | Sparse | Assists.
KDA Related | Tyrant buff | 1 | Sparse | Get buff of killing tyrant, dark tyrant, storm tyrant.
KDA Related | Overlord buff | 1.5 | Sparse | Get buff of killing the overlord.
KDA Related | Expose invisible enemy | 0.3 | Sparse | Get visions of enemy heroes.
KDA Related | Last hit | 0.2 | Sparse | Last hitting an enemy minion.
Damage Related | Health point | 3 | Dense | The health point of the hero (to the fourth power).
Damage Related | Hurt to hero | 0.3 | Sparse | Attack enemy heroes.
Pushing Related | Attack turrets | 1 | Sparse | Attack turrets.
Pushing Related | Attack crystal | 1 | Sparse | Attack enemy home base.
Win/Lose Related | Destroy home base | 2.5 | Sparse | Destroy enemy home base.
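To make the use of Table 4 concrete, the following is a minimal sketch of how the weighted reward items could be combined into a scalar environment reward. The weight dictionary mirrors the table; the environment_reward function name and the event-dictionary format are illustrative assumptions, not the actual game-engine interface.

```python
# Hypothetical sketch: combining the weighted reward items of Table 4 into a
# scalar reward. The weights mirror the table; the event dictionary format is
# an assumption made for illustration only.
REWARD_WEIGHTS = {
    # Farming related
    "gold": 0.005, "experience": 0.001, "mana": 0.05,
    "no_op": -0.00001, "attack_monster": 0.1,
    # KDA related
    "kill": 1.0, "death": -1.0, "assist": 1.0, "tyrant_buff": 1.0,
    "overlord_buff": 1.5, "expose_invisible_enemy": 0.3, "last_hit": 0.2,
    # Damage related
    "health_point": 3.0, "hurt_to_hero": 0.3,
    # Pushing related
    "attack_turrets": 1.0, "attack_crystal": 1.0,
    # Win/lose related
    "destroy_home_base": 2.5,
}

def environment_reward(events: dict) -> float:
    """Sum the weighted reward items; `events` maps item name -> raw quantity
    (e.g. gold gained this step, or 0/1 flags for sparse items)."""
    return sum(REWARD_WEIGHTS[name] * value for name, value in events.items())

# Example: a step where the agent gained 120 gold and last-hit one minion.
r = environment_reward({"gold": 120, "last_hit": 1})
```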
B.3 FEATURE DESIGN
B.3.1 CEN
See Table 5.
B.3.2 MCCAN
See Table 6.
B.3.3 CS
See Table 7.
B.4 AGENT ACTION
Table 8 shows the action space of agents.
B.5 NETWORK ARCHITECTURE
B.5.1 CEN
Figure 11 shows the detailed model structure of CEN. The CEN network extracts game stats features and several unit features from the observation o. Each unit feature is encoded by shared MLP layers to obtain several unit embeddings, and the game stats features are encoded by MLP layers to obtain the game stats embedding. Finally, we concatenate the unit embeddings and the game stats embedding, and output the probability distribution over meta-commands. The outputted meta-command indicates the macro-strategy for the next Tmc time steps.
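Below is a minimal PyTorch-style sketch of the CEN forward pass described above. The 144-way location head and 103-way event head follow the surrounding text; all other dimensions, the mean-pooling of unit embeddings, and the class name are illustrative assumptions rather than the released architecture.

```python
import torch
import torch.nn as nn

class CEN(nn.Module):
    """Sketch of the command encoding network: a shared MLP over unit features,
    an MLP over game-stats features, concatenation, and output distributions over
    the meta-command location L (144 grids) and event E (103 units).
    Hidden sizes and the pooling of unit embeddings are illustrative assumptions."""
    def __init__(self, unit_dim=64, stats_dim=128, hidden=256,
                 n_locations=144, n_events=103):
        super().__init__()
        self.unit_mlp = nn.Sequential(nn.Linear(unit_dim, hidden), nn.ReLU())   # shared across units
        self.stats_mlp = nn.Sequential(nn.Linear(stats_dim, hidden), nn.ReLU())
        self.location_head = nn.Linear(2 * hidden, n_locations)
        self.event_head = nn.Linear(2 * hidden, n_events)

    def forward(self, unit_feats, stats_feats):
        # unit_feats: [batch, n_units, unit_dim]; stats_feats: [batch, stats_dim]
        unit_emb = self.unit_mlp(unit_feats).mean(dim=1)   # pool unit embeddings (pooling is an assumption)
        stats_emb = self.stats_mlp(stats_feats)
        h = torch.cat([unit_emb, stats_emb], dim=-1)
        return (torch.softmax(self.location_head(h), dim=-1),
                torch.softmax(self.event_head(h), dim=-1))
```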
B.5.2 MCCAN
Figure 12 shows the detailed model structure of MCCAN. The MCCAN predicts a sequence of actions for each agent based on its observation and the meta-command sampled from the Top-k Softmax distribution of CEN. The observations are processed by a deep LSTM, which maintains memory among steps. We use the target attention mechanism to improve the model prediction accuracy, and we design the action mask module to eliminate unnecessary actions for efficient exploration. To handle the uncertainty of state-action values in the game, we introduce multi-head value estimation (Ye et al., 2020a) into the MCCAN by grouping the extrinsic rewards in Table 4. We also treat the intrinsic rewards as a value head and estimate it using a separate value network. Besides, we introduce a value mixer module (Rashid et al., 2018) to model the team value and improve the accuracy of the value estimation. All value networks consist of an FC layer that takes the LSTM outputs as input and outputs a single value. Finally, following Ye et al. (2020a) and Gao et al. (2021), we adopt hierarchical action heads with three parts: 1) what action to take; 2) whom to target; 3) how to act.
Network Parameter Details. We extract 6 channels of spatial features from the game engine with resolution 6×17×17, and then use 5×5 and 3×3 convolutions to sequentially extract features. The LSTM unit size is 1024 and the number of time steps is 16. The k value in the CEN sampling is set to 20.
B.5.3 CS
Figure 13 shows the detailed model structure of CS. For each agent, the CS predicts the value Q(o, C, m) or probability π(m|o, C) of each meta-command m based on its observation o and all received meta-commands C. First, each received meta-command (m^H from humans and m from the CEN) is reshaped into a 2D image (12×12×1). Then, we use a shared CNN to extract the region-related information from meta-commands, i.e., z^H_m = CNN(m^H) and z_m = CNN(m). Besides, the map embeddings of all received meta-commands are integrated into a map set embedding by max-pooling, i.e., z_ms = Max-pooling(z^H_m, z_m). After that, we use the Gating Mechanism to fuse the map set embedding and the state embedding z_o = MLP(o) of the observation information o. For the Gating Mechanism, we use the state embedding z_o and the map set embedding z_ms to calculate attention to each other, that is, the gating g_ms = MLP(z_o) for z_ms and the gating g_o = MLP(z_ms) for z_o. The gatings are used to retain attentional information and to fuse the two embeddings, i.e., z_f = concat[z_o · g_o + z_o; z_ms · g_ms + z_ms]. Finally, the fused embedding z_f and the map embeddings {z^H_m, z_m} are used as the key k_f and the queries {q^H_m, q_m}, respectively, and fed into the Target Attention module to predict the state-action value (Q-value) or the probability. We also use the fused embedding to estimate the state value (V-value) from the observation o and all meta-commands C.
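The following is a minimal PyTorch-style sketch of the CS forward pass described above (shared CNN over 12×12 meta-command maps, max-pooling, gating fusion, and target-attention scoring). Layer sizes, the sigmoid gates, and the concatenation-based attention scoring are illustrative assumptions; the released model may differ.

```python
import torch
import torch.nn as nn

class CSSketch(nn.Module):
    """Illustrative sketch of the CS forward pass: a shared CNN over 12x12
    meta-command maps, max-pooling into a map-set embedding, gating fusion with
    the state embedding, and a per-command target-attention score.
    All layer sizes are assumptions."""
    def __init__(self, obs_dim=512, emb=128):
        super().__init__()
        self.cnn = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.Flatten(),
                                 nn.Linear(8 * 12 * 12, emb))
        self.state_mlp = nn.Linear(obs_dim, emb)
        self.gate_for_maps = nn.Linear(emb, emb)   # g_ms = MLP(z_o)
        self.gate_for_state = nn.Linear(emb, emb)  # g_o  = MLP(z_ms)
        self.q_head = nn.Linear(2 * emb + emb, 1)  # scores fused embedding against each command map

    def forward(self, obs, command_maps):
        # obs: [B, obs_dim]; command_maps: [B, K, 1, 12, 12] (K received meta-commands)
        B, K = command_maps.shape[:2]
        z_m = self.cnn(command_maps.flatten(0, 1)).view(B, K, -1)   # per-command map embeddings
        z_ms = z_m.max(dim=1).values                                # map-set embedding (max-pooling)
        z_o = self.state_mlp(obs)
        g_ms = torch.sigmoid(self.gate_for_maps(z_o))               # gate for the map-set embedding
        g_o = torch.sigmoid(self.gate_for_state(z_ms))              # gate for the state embedding
        z_f = torch.cat([z_o * g_o + z_o, z_ms * g_ms + z_ms], dim=-1)
        # Target attention: score each command map embedding against the fused embedding.
        q = self.q_head(torch.cat([z_f.unsqueeze(1).expand(B, K, -1), z_m], dim=-1)).squeeze(-1)
        return q   # [B, K] state-action values (or logits for a policy head)
```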
B.6 DETAILS AND ANALYSIS OF COMPONENTS
B.6.1 META-COMMAND
Command Converter Function f^cc. The f^cc extracts the corresponding location and event information from human messages. Specifically, the game engine converts rough messages from the in-game signaling system into predefined protocol buffers (PB), and then the MCC framework uses the f^cc to extract the location and event information from the received PB to construct the corresponding meta-command.
Command Extraction Function f^ce. The f^ce directly extracts the agent's location from the current state (the position feature in Table 6), which is used to obtain the intrinsic rewards by calculating the "distance" from the agent to the currently executed meta-command.
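A minimal sketch of the two hand-crafted functions is given below. The dictionary-based message and state fields (x, y, target_unit, hero_position) are hypothetical stand-ins for the real protocol-buffer and feature layouts, which are not public.

```python
# Hypothetical sketches of the two hand-crafted functions; the field names are
# assumptions made for illustration only.
GRID_SIZE = 12          # the map is divided into 12 x 12 = 144 location grids
T_MC = 300              # time limit of a meta-command (300 steps = 20 seconds)

def f_cc(signal_pb):
    """Command converter: build a meta-command <L, E, Tmc> from a human signal."""
    location = (int(signal_pb["x"] * GRID_SIZE), int(signal_pb["y"] * GRID_SIZE))
    event = signal_pb.get("target_unit")           # e.g. "dragon", a turret id, ...
    return {"L": location, "E": event, "Tmc": T_MC}

def f_ce(state):
    """Command extraction: read the agent's current grid location from the state
    (the position feature); used to compute the intrinsic-reward distance."""
    x, y = state["hero_position"]                  # normalised map coordinates
    return (int(x * GRID_SIZE), int(y * GRID_SIZE))
```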
B.6.2 CEN
Training Data. We extract meta-commands from expert game replays authorized by the game provider, which consist of high-level (top 1% player) licensed game data without identity information. The input features of CEN are shown in Table 5. The game replay consists of multiple frames, and the information of each frame is shown in Figure 8. We divide the location L of meta-commands in the map into 144 grids, and we set the event E to range over all units (103 in total) in the game. For setting Tmc, we counted the players' completion time for meta-commands from expert game replays, and the
results are shown in Figure 14. We can see that 80% of meta-commands can be completed within 20 seconds in Honor of Kings. Thus, Tmc is set to 300 time steps (20 seconds).
Given a state st in the trajectory, we first extract the observation ot for each hero. Then, we use a hand-crafted command extraction function f ce to extract the meta-command mt = f ce(st+Tmc) corresponding to the future state st+Tmc . By setting up labels in this way, we expect the CEN πϕ(m|o) to learn the mapping from the observation ot to its corresponding meta-command mt. The detailed training data extraction process is as follows:
• First, extract the trajectory τ = (s0, . . . , st, . . . , st+Tmc , . . . , sN ) from the game replay, where N is the total number of frames.
• Second, randomly sample some frames {t | t ∈ {0, 1, . . . , N}} from the trajectory τ.
• Third, extract the feature o_t from the state s_t for each sampled frame t.
• Fourth, extract the location L_t and the event E_t from the state s_{t+Tmc} in frame t + Tmc.
• Fifth, use L_t and E_t to construct the label m_t = <L_t, E_t, Tmc>.
• Finally, form the pair <o_t, m_t> as one sample of the training data.
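A minimal sketch of this extraction pipeline is given below. The helpers extract_obs and extract_label stand in for the game-specific feature and label extraction steps described above; they are assumptions, not the actual tooling.

```python
import random

T_MC = 300   # meta-command horizon: 300 time steps (20 seconds)

def extract_cen_samples(trajectory, n_samples, extract_obs, extract_label):
    """Sketch of the CEN training-data extraction described above.
    `trajectory` is the list of states (s_0, ..., s_N); `extract_obs` builds the
    per-hero observation o_t and `extract_label` returns <L, E> of a future state.
    Both helpers are assumed to exist; their exact form is game-specific."""
    samples = []
    last_valid = len(trajectory) - 1 - T_MC               # need s_{t+Tmc} to exist
    for t in random.sample(range(last_valid + 1), min(n_samples, last_valid + 1)):
        o_t = extract_obs(trajectory[t])                   # observation at frame t
        L_t, E_t = extract_label(trajectory[t + T_MC])     # label from the future state
        m_t = {"L": L_t, "E": E_t, "Tmc": T_MC}
        samples.append((o_t, m_t))                         # one <o, m> training pair
    return samples
```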
Optimization Objective. After obtaining the dataset {< o,m >}, we train the CEN πϕ(m|o) via supervised learning (SL). Due to the imbalance of samples at different locations of the metacommands, we use the focal loss (Lin et al., 2017) to alleviate this problem. Thus, the optimization objective is:
L_SL(ϕ) = E_{O,M}[ −α m (1 − πϕ(o))^γ log πϕ(o) − (1 − α)(1 − m) πϕ(o)^γ log(1 − πϕ(o)) ],
where α = 0.75 is the balanced weighting factor for the positive class (m = 1) and γ = 2 is the tunable focusing parameter. Adam (Kingma & Ba, 2014) is adopted as the optimizer with an initial learning rate of 0.0001. In particular, we compute the focal loss for L and E separately.
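A minimal PyTorch sketch of this focal-loss objective, with α = 0.75 and γ = 2 as stated above, is shown below; the clamping constant, the batching, and the toy example are illustrative assumptions (the real implementation computes the loss for the L and E heads separately).

```python
import torch

def focal_loss(p, m, alpha=0.75, gamma=2.0, eps=1e-8):
    """Binary focal loss as in the objective above. `p` is the predicted
    probability per grid (or per event class) and `m` is the 0/1 label map."""
    p = p.clamp(eps, 1.0 - eps)
    pos = -alpha * m * (1.0 - p) ** gamma * torch.log(p)
    neg = -(1.0 - alpha) * (1.0 - m) * p ** gamma * torch.log(1.0 - p)
    return (pos + neg).mean()

# Toy example: a 144-way location head on a batch of 4 observations.
logits = torch.randn(4, 144)
labels = torch.zeros(4, 144)
labels[torch.arange(4), torch.randint(0, 144, (4,))] = 1.0
loss = focal_loss(torch.softmax(logits, dim=-1), labels)
```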
Experimental Results. Figure 15 shows the meta-command distributions of the initial CEN, the converged CEN, and high-level humans. We see that the meta-commands predicted by the CEN gradually converge from chaos to the meta-commands with important positions. The Kullback-Leibler (KL) divergence of the meta-command distribution between the CEN and high-level humans decreases from 4.96 to 0.44 as training converges. The distribution of the converged CEN in Figure 15(b) is close to the distribution of high-level humans in Figure 15(c), suggesting that the CEN can simulate the generation of human meta-commands in real games.
B.6.3 MCCAN
Training Environment. We use a training environment similar to those of the WuKong agent and the OpenAI Five agent, i.e., the agent interacts with the environment in a self-play manner. Specifically, for
the MCCAN training, each of the 10 agents always executes its own meta-command generated by the CEN. And for each agent, every Tmc time steps, the pre-trained CEN πϕ(m|o) generates a meta-command mt based on its observation ot. The meta-command mt is then kept for Tmc time steps and sent to the MCCAN continuously. During the interval [t, t+ Tmc], the MCCAN predicts a sequence of actions based on ot and mt for the agent to perform.
Optimization Objective. The MCCAN is trained with the goal of achieving a higher completion rate for the meta-commands generated by the pre-trained CEN while ensuring that the win rate is not reduced. To achieve this, we introduce extrinsic rewards (including individual and team rewards) and intrinsic rewards
r^int_t(s_t, m_t, s_{t+1}) = ||f^ce(s_t) − m_t|| − ||f^ce(s_{t+1}) − m_t||,
where f^ce extracts the agent's location from state s_t, and ||f^ce(s_t) − m_t|| is the distance between the agent's location and the meta-command's location at time step t. Intuitively, the intrinsic rewards are adopted to guide the agent to the location L of the meta-command and to stay at L to do some event E. The extrinsic rewards are adopted to guide the agent to perform optimal actions to reach L and do the optimal event at L. Overall, the optimization objective is maximizing the expectation over extrinsic and intrinsic discounted total rewards
G_t = E_{s∼d_{πθ}, a∼πθ}[ Σ_{i=0}^{∞} γ^i r_{t+i} + α Σ_{j=0}^{Tmc} γ^j r^int_{t+j} ],
where d_π(s) = lim_{t→∞} P(s_t = s | s_0, π) is the probability of being in state s when following π for t steps from s_0. We use α to weigh the intrinsic rewards against the extrinsic rewards. The experimental results on the influence of α on the performance of the MCCAN are shown in Figure 16.
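A minimal sketch of the intrinsic reward and the weighted return is given below. The Euclidean grid distance, the discount value, and the list-based reward representation are illustrative assumptions; only the r^int formula and the default α = 16 come from the text.

```python
import math

def grid_distance(loc_a, loc_b):
    """Euclidean distance between two grid locations (an assumption; any
    monotone distance on the location grid would serve the same purpose)."""
    return math.dist(loc_a, loc_b)

def intrinsic_reward(s_t, m_t, s_next, f_ce):
    """r^int_t = ||f^ce(s_t) - m_t|| - ||f^ce(s_{t+1}) - m_t||: positive when the
    agent moved closer to the meta-command location, negative when it moved away."""
    return grid_distance(f_ce(s_t), m_t["L"]) - grid_distance(f_ce(s_next), m_t["L"])

def mixed_return(extrinsic, intrinsic, gamma=0.997, alpha=16.0, t_mc=300):
    """Discounted total reward combining extrinsic rewards over the episode and
    intrinsic rewards over the Tmc-step command horizon (gamma is illustrative)."""
    g_ext = sum(gamma ** i * r for i, r in enumerate(extrinsic))
    g_int = sum(gamma ** j * r for j, r in enumerate(intrinsic[: t_mc + 1]))
    return g_ext + alpha * g_int
```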
Training Process. The MCCAN is trained by fine-tuning a pre-trained WuKong model (Ye et al., 2020a) conditioned on the meta-command sampled from the pre-trained CEN. Note that the WuKong model is the State-Of-The-Art (SOTA) model in Honor of Kings, which can easily beat high-level human players (see footnotes 1 and 2 below).
We also modified the Dual-clip PPO algorithm (Ye et al., 2020a) to introduce the meta-command m into the policy πθ(a_t | o_t, m_t) and the advantage estimation A_t = A(a_t, o_t, m_t). The Dual-clip PPO algorithm introduces another clipping parameter c to construct a lower bound for r_t(θ) = πθ(a_t | o_t, m_t) / πθ_old(a_t | o_t, m_t) when A_t < 0 and r_t(θ) ≫ 0. Thus, the policy loss is:
L_π(θ) = E_{s,m,a}[ max( c A_t, min( clip(r_t(θ), 1 − τ, 1 + τ) A_t, r_t(θ) A_t ) ) ],
where τ is the original clip parameter in PPO. And the multi-head value loss is:
L_V(θ) = E_{s,m}[ Σ_{head k} (G^k_t − V^k_θ(o_t, m_t))^2 ],   V_total = Σ_{head k} w_k V^k_θ(o_t, m_t),
where w_k is the weight of the k-th head and V^k_θ(o_t, m_t) is the value of the k-th head.
1 Wukong AI beats human players to win Honour Of Kings mobile game, https://www.thestar.com.my/tech/tech-news/2019/08/07/tencent039s-ai-beats-human-players-to-win-honour-of-kings-mobile-game
2 Honor of Kings Wukong AI 3:1 defeated the human team, https://www.sportsbusinessjournal.com/Esports/Sections/Technology/2021/07/HoK-AI-Battle.aspx?hl=KPL&sc=0
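Returning to the policy objective above, a minimal PyTorch sketch of the dual-clip PPO loss is shown below; the values of τ and c are illustrative, and the sign convention (returning a loss to minimize) is an assumption about the surrounding training loop.

```python
import torch

def dual_clip_ppo_loss(log_prob, old_log_prob, advantage, tau=0.2, c=3.0):
    """Sketch of the dual-clip PPO policy loss for the meta-command conditioned
    policy pi_theta(a | o, m). tau is the usual PPO clip parameter and c > 1 the
    dual-clip lower-bound parameter; both values here are illustrative."""
    ratio = torch.exp(log_prob - old_log_prob)                    # r_t(theta)
    clipped = torch.clamp(ratio, 1.0 - tau, 1.0 + tau)
    standard = torch.min(ratio * advantage, clipped * advantage)  # vanilla PPO objective
    # When A_t < 0, additionally lower-bound the objective by c * A_t.
    dual = torch.where(advantage < 0, torch.max(standard, c * advantage), standard)
    return -dual.mean()                                           # loss to minimize
```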
Experimental Results. We conducted experiments to explore the influence of the extrinsic and intrinsic reward trade-off parameter α on the performance of MCCAN. The win rate and completion rate results are shown in Figure 16. As α increases, the completion rate of MCCAN gradually increases, and the win rate of MCCAN first increases and then decreases rapidly. Notably, we can train an agent with a completion rate close to 100% by increasing α, but this will significantly reduce the win rate because the meta-command executed is not necessarily optimal and may result in the death of agents.
When α = 16, the completion rate of the trained agent for meta-commands is 82%, which is close to the completion rate of humans (80%), and the win rate of the trained agent against the SOTA agents (Ye et al., 2020a; Gao et al., 2021) is close to 50%. Thus, we finally set α = 16.
B.6.4 CS
Additional Results in Agent-Only Collaboration Testing. To rule out effects of WR transitivity, we added experiments that compare the MCC agent and the baseline agents against the WuKong (SOTA) agent for 600 matches each in the Test I environment. The WRs (P1 versus P2) are shown in Table 9. We see that the effective CS mechanism in the MCC agent enables the agents to communicate and collaborate effectively, which improves the WR compared to the WuKong agent (SOTA).
Ablation Studies. We further investigate the influence of different components on the performance of CS, including CNN feature extraction with the gating mechanism (w/o CNN-GM), the target attention module (w/o TA), and the PPO optimization algorithm (MCC-PPO). We conduct ablation studies in Test I with a 20-hero pool. In practical games, meta-commands in adjacent regions often have similar intentions and values, so the agent's response rates to adjacent meta-commands should be as close to each other as possible. Besides, the higher the agent's response rate to meta-commands, the more collaborative the agent's behaviors, so we expect the response rate of CS to be as high as possible. Generally, we expect CS's Response Rate (RR) to be as high as possible while ensuring that the Win Rate (WR) is maintained.
Figure 17(a) demonstrates the WR of different CS ablation versions during the training process, and Figure 17(b) shows the converged WR-RR results. We see that after ablating the TA module, the WR and RR of CS drop significantly, indicating that the TA module improves the accuracy of CS on meta-commands. Besides, after ablating the CNN-GM module, the RR of CS is the most affected, dropping by 20%. This indicates that without the CNN-GM module, the value estimation of CS for adjacent meta-commands is not accurate enough, so some highly valuable meta-commands are missed. We notice that MCC and MCC-PPO are close in both metrics, confirming the versatility of the CS model structure.
C DETAILS OF HUMAN-AGENT COLLABORATION TEST
C.1 ETHICAL REVIEW
The ethics committee of a third-party organization, Tencent (the parent company of Honor of Kings), conducted an ethical review of our project. They reviewed our experimental procedures and risk avoidance methods (see Appendix C.1.2). They believed that our project complies with the "New Generation of AI Ethics Code" (footnote 3) of the country to which the participants belonged (China), so they approved our study. In addition, all participants consented to the experiment and provided informed consent (see Appendix C.1.1) for the study.
C.1.1 INFORMED CONSENT
All participants were told the following experiment guidelines before testing:
• This experiment is to study human-agent collaboration technology in MOBA games.
• Your identity information will not be disclosed to anyone.
• All game statistics are only used for academic research.
• You will be invited into matches where your opponents and teammates are agents.
• Your goal is to win the game as much as possible by collaborating with agent teammates.
• You can communicate and collaborate with agent teammates through the in-game signaling system.
• Agent teammates will also send you signals representing their macro-strategies, and you can judge whether to execute them based on your value system.
• After each test, you can report your preference over the agent teammates.
• Each game lasts 10-20 minutes.
• You may voluntarily choose whether to take the test. You can terminate the test at any time if you feel unwell during the test.
• After all tests are complete, you may voluntarily fill out a debrief questionnaire to tell us your open-ended feedback on the experiment.
• At any time, if you want to delete your data, you can contact the game provider directly to delete it.
If participants volunteer to take the test, they will first provide written informed consent, then we will provide them with the equipment and game account, and the test will begin.
3 China: MOST issues New Generation of AI Ethics Code, https://www.dataguidance.com/news/china-most-issues-new-generation-ai-ethics-code
C.1.2 POTENTIAL PARTICIPANT RISKS
First, we analyze the risks of this experiment to the participants. The potential participant risks of the experiment mainly include the leakage of identity information and the time cost. And we have taken a series of measures to minimize these risks.
Identity Information. A series of measures have been taken to avoid this risk:
• All participants will be recruited with the help of a third party (the game provider of Honor of Kings), and we do not have access to participants’ identities.
• We make a risk statement for participants and sign an identity information confidentiality agreement under the supervision of a third party.
• We only use unidentifiable game statistics in our research, which are obtained from third parties.
• Special equipment and game accounts are provided to the participants to prevent equipment and account information leakage.
• The identity information of all participants is not disclosed to the public.
Time Cost. We pay participants to compensate for their time costs. Participants receive $5 at the end of each game test, and the winner receives an additional $2. Each game test takes approximately 10 to 20 minutes, and participants earn an average of about $20 an hour.
C.2 EXPERIMENTAL DETAILS
C.2.1 PARTICIPANT DETAILS
We contacted the game provider and got a test authorization. The game provider helped us recruit 30 experienced participants with personal information stripped, including 15 high-level (top1%) and 15 general-level (top30%) participants. All participants have more than three years of experience in Honor of Kings and promise to be familiar with all mechanics in the game, including the in-game signaling system in Figure 9.
And special equipment and game accounts are provided to each participant to prevent equipment and account information leakage. The game statistics we collect are only for experimental purposes and are not disclosed to the public.
C.2.2 EXPERIMENTAL DESIGN
We used a within-participant design: m Human + n Agent (mH + nA) team mode to evaluate the performance of agents teaming up with different numbers of participants, where m+ n = 5. Each participant is asked to randomly team up with three different types of agents, including the MC-Base agents, the MC-Rand agents, and the MCC agents. For fair comparisons, participants were not told the type of their agent teammates. The MC-Base agent team was adopted as the fixed opponent for all tests. Participants tested 20 matches for the 1H + 4A team mode. High-level participants tested additional 10 matches for the 2H + 3A and the 3H + 2A team modes, respectively. After each game test, participants reported their preference over the agent teammates.
We prohibit communication between agents to eliminate the effects of agent-agent collaboration, so the agents can only communicate with their human teammates. In each game test, humans can send the converted meta-commands whenever they think their macro-strategies are important. Furthermore, to make the agents behave like humans (at most one human sends his/her meta-command at a time step), we restricted how agents send their meta-commands: only one agent sends a valuable meta-command, selected from the agents' current meta-commands, to its human teammates. Specifically, this process consists of several steps: (1) The MCC framework randomly chooses a human teammate (note that human and agent observations are shared); (2) The MCC framework uses his/her observation and all agents' meta-commands as the input of the CS, and obtains the estimated values of these meta-commands; (3) The MCC framework selects the meta-command with the highest value; (4) The MCC framework selects the corresponding agent to send the meta-command. The above process is executed at an interval of 20 seconds.
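A minimal sketch of this four-step sending procedure is shown below; the cs_value_fn callable and the agent.current_meta_command / human.observation attributes are hypothetical stand-ins for the CS inference call and the shared observations.

```python
import random

def select_agent_broadcast(agents, humans, cs_value_fn):
    """Sketch of the four-step procedure above: pick a random human teammate,
    score all agents' current meta-commands with the CS from that human's
    (shared) observation, and let the agent owning the highest-valued
    meta-command send it."""
    human = random.choice(humans)                            # (1) choose a human teammate
    candidates = [(agent, agent.current_meta_command) for agent in agents]
    scores = [cs_value_fn(human.observation, cmd)            # (2) value each meta-command
              for _, cmd in candidates]
    best = max(range(len(candidates)), key=lambda i: scores[i])   # (3) highest value
    sender, command = candidates[best]                       # (4) that agent sends it
    return sender, command
```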
In addition, as mentioned in Ye et al. (2020a); Gao et al. (2021), the response time of agents is usually set to 193ms, including observation delay (133ms) and response delay (60ms). The average APM of
agents and top e-sport players are usually comparable (80.5 and 80.3, respectively). To make our test results more accurate, we adjusted the agents’ capability to match high-level humans’ performance by increasing the observation delay (from 133ms to 200ms) and response delay (from 60ms to 120 ms).
C.2.3 PREFERENCE DESCRIPTION
After each test, participants gave scores on several subjective preference metrics to evaluate their agent teammates, including the Reasonableness of H2A (how well agents respond to the meta-commands sent by participants), the Reasonableness of A2H (how reasonable the meta-commands sent by agents are), and the Overall Preference for agent teammates.
For each metric, we provide a detailed problem description and a description of the reference scale for the score. Participants rated their agent teammates based on how well their subjective feelings matched the descriptions in the test. The different metrics are described as follows:
• For the Reasonableness of H2A, "Do you think the agent teammates respond reasonably to your commands? Please evaluate the reasonableness according to the following scales."
1) Terrible: No response, or totally unreasonable. 2) Poor: Little response, or mostly unreasonable. 3) Normal: Response, but some unreasonable. 4) Good: Response, mostly reasonable. 5) Perfect: Response, and perfect.
• For the Reasonableness of A2H, "Do you think the commands sent by the agent teammates are reasonable? Please evaluate the reasonableness according to the following scales. Note that if you don’t receive any commands, please ignore this question."
1) Terrible: Totally unreasonable. 2) Poor: Low reasonable. 3) Normal: Some reasonable. 4) Good: Mostly reasonable. 5) Perfect: Totally reasonable.
• For the Overall Preference, "What is your overall preference for the agent teammates collaborating with you? Please rate the following words according to your subjective feelings: 1) Terrible; 2) Poor; 3) Normal; 4) Good; 5) Perfect".
C.2.4 ADDITIONAL SUBJECTIVE PREFERENCE RESULTS
Detailed subjective preference statistics are presented in Table 10. Compared with no collaboration (MC-Base) and random collaboration (MC-Rand), the participants preferred reasonable collaboration (MCC).
Reasonableness of H2A. Participants prefer MC-Rand over MC-Base, suggesting that they expect the agent to respond more to their commands. Nonetheless, the score of MCC is much higher than MC-Rand, indicating that participants prefer their commands to get reasonable rather than incorrect responses.
Reasonableness of A2H. Participants express a significant preference for MCC over MC-Rand, demonstrating that they agree with MCC’s commands and are more willing to collaborate. The results are consistent with Figure 7, where participants believe the commands sent from MCC align more with their own value system. Note that the non-collaborative setting prohibits MC-Base from sending any commands, so we made no statistics.
Overall Preference. Participants are satisfied with the MCC agent over other agents and give the highest score. By comparing MC-Base and MCC, we can observe that human-agent collaboration based on meta-command communication can bring a better impression of the human-agent team in the MOBA game. However, while MC-Rand is higher than MC-Base in the Reasonableness of H2A metric, it is lower than MC-Base in the Overall Preference metric. Therefore, collaboration is important but not as necessary as winning. This metric further confirms the reasonableness of MCC in human-agent collaboration.
D DISCUSSION
Limitations and future work. First, the training process of the MCC agent consumes vast computing resources like other SOTA MOBA agents. Thus, we will optimize the training process of existing MOBA agents, aiming to lower the threshold for researchers to study and reproduce work in MOBA games. Second, the meta-commands we proposed are generic to MOBA games and cannot be directly extended to other types of games, such as FPS and Massively Multiplayer Online (MMO). We will design a more general meta-command representation, such as natural language, and extend the MCC framework to other games. Third, we will apply the MCC agents to the friendly bots in the teaching mode of Honor of Kings. All in all, we would hope that this work can not only offer new ideas to researchers but also bring a better human-agent collaboration experience in practical applications. | 1. What is the main contribution of the paper regarding human-agent collaboration in MOBA games?
2. What are the strengths and weaknesses of the proposed Meta Command Communication (MCC) framework?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. Are there any questions or concerns regarding the paper's methodology, results, or conclusions? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This work investigates human-agent collaboration in Multiplayer Online Battle Arena (MOBA) games and presents a human interpretable communication framework through which humans and agents can communicate their respective strategies during the game. It introduces the concept of meta commands which acts as the common ground for communication of macro-strategies between the humans and the agents. Further, the framework includes a meta command selector which estimates the value of the meta commands and selects a reasonable meta command to follow. The results show that using such a Meta Command Communication (MCC) framework improves the overall team performance and beats the current SOTA in MOBA games.
On a broader level, MOBA games present an interesting set of challenges for Game AI, one of which is human-agent collaboration. This paper addresses this challenge and shows that having an interpretable communication framework between humans and agents can achieve effective human-agent collaboration. They evaluate the MCC agent on agent-only environments and on online experiments with humans of different levels.
Strengths And Weaknesses
Strengths/Interesting aspects
Communication through a human-interpretable common ground: The MCC framework establishes a human-interpretable communication framework where, irrespective of what the agent's internal representations are, the communication happens in a symbolic form that is interpretable to humans in the loop. This also aligns with having symbols as the lingua franca [1], with the meta-commands being the symbolic information that is used as a common ground in human-agent interaction. The win rates from the online experiments show the effectiveness of the MCC framework with different types of agents and levels.
Human subject study for the performance of MCC teams: Even though the results show that as more humans are present in the team the win rate reduces against an all-agent SOTA method, in comparison to the other methods, human teams with MCC agents improve the overall team performance. Further, the response rates are also high in MCC-human teams.
Comparison of Command selector and high-level participant’s value systems - Although it doesn’t give a complete picture, it provides a preliminary insight into how the CS meta command selections are consistent with the ranking results
Weaknesses/Clarifications
Meta command is defined as a three element tuple where the event E (what to do ) is also part of it. The paper states the following “To achieve the former, a hand-crafted command converter function f cc is used to generate L of meta-commands by extracting the location from explicit messages, such as signals, sent by humans. To achieve the latter, we use a Command Encoding Network (CEN) πϕ(m|o) to generate L of meta-commands.” It is not clear as to how the event E in a meta command is incorporated (if needed) in the meta command execution if in the command conversion stage the output is just the locations extracted/generated from the explicit human commands/observations. Clarification on this aspect should be provided in the paper.
As much as it extends the SOTA in Honor of Kings, the common ground here is game-specific. The meta commands seem to be specifically designed for Honor of Kings. It is not clear if the meta commands or techniques are generic to other MOBA games either.
[1] Kambhampati, Subbarao, et al. "Symbols as a lingua franca for bridging human-ai chasm for explainable and advisable ai systems." Proceedings of the AAAI Conference on Artificial Intelligence. Vol. 36. No. 11. 2022.
Clarity, Quality, Novelty And Reproducibility
Clarity: The description of the meta command framework is clear. The details of the compute requirements for the experiments, training time, batch size, and important hyperparameters are provided. They also point out the tricks used to enhance the training process of the command selector.
Quality: Although the idea of having a common ground in human-agent collaboration might not be new, its instantiation in MOBA games is innovative. The evaluations (both on agent-only and human subject teams) are well described even though reproducing the results might be a hassle due to the extensive computational requirements. Analysis on objective performance and subjective preferences was provided. The paper also addresses the potential limitations of the work and provides additional insights through ablation studies.
Novelty: This work, albeit being fiercely application-oriented, is novel in the sense that it shows the effectiveness of establishing a common ground in human-agent interactions instantiated in MOBA games. Even though the common grounding and techniques are specific to MOBA games, it could potentially inspire people in the AI community to have a human-interpretable communication framework for human-agent collaboration.
Reproducibility: Given the extensive use of compute resources, it might be difficult for many researchers to reproduce the results. Along with that, additional details on some of the blocks in each of the model structures (like the MLPs in CEN model structure or the LSTMs, CNNs in MCCAN structure) seem to be missing in order to replicate the results. |
ICLR | Title
Towards Effective and Interpretable Human-Agent Collaboration in MOBA Games: A Communication Perspective
Abstract
MOBA games, e.g., Dota2 and Honor of Kings, have been actively used as the testbed for the recent AI research on games, and various AI systems have been developed at the human level so far. However, these AI systems mainly focus on how to compete with humans, less on exploring how to collaborate with humans. To this end, this paper makes the first attempt to investigate human-agent collaboration in MOBA games. In this paper, we propose to enable humans and agents to collaborate through explicit communication by designing an efficient and interpretable Meta-Command Communication-based framework, dubbed MCC, for accomplishing effective human-agent collaboration in MOBA games. The MCC framework consists of two pivotal modules: 1) an interpretable communication protocol, i.e., the Meta-Command, to bridge the communication gap between humans and agents; 2) a meta-command value estimator, i.e., the Meta-Command Selector, to select a valuable meta-command for each agent to achieve effective human-agent collaboration. Experimental results in Honor of Kings demonstrate that MCC agents can collaborate reasonably well with human teammates and even generalize to collaborate with different levels and numbers of human teammates. Videos are available at https://sites.google.com/view/mcc-demo.
1 INTRODUCTION
Games, as the microcosm of real-world problems, have been widely used as testbeds to evaluate the performance of Artificial Intelligence (AI) techniques for decades. Recently, many researchers focus on developing various human-level AI systems for complex games, such as board games like Go (Silver et al., 2016; 2017), Real-Time Strategy (RTS) games like StarCraft 2 (Vinyals et al., 2019), and Multi-player Online Battle Arena (MOBA) games like Dota 2 (OpenAI et al., 2019). However, these AI systems mainly focus on how to compete instead of collaborating with humans, leaving Human-Agent Collaboration (HAC) in complex environments still to be investigated. In this paper, we study the HAC problem in complex MOBA games (Silva & Chaimowicz, 2017), which is characterized by multi-agent cooperation and competition mechanisms, long time horizons, enormous state-action spaces (1020000), and imperfect information (OpenAI et al., 2019; Ye et al., 2020a).
HAC requires the agent to collaborate reasonably with various human partners (Dafoe et al., 2020). One straightforward approach is to improve the generalization of agents, that is, to collaborate with a sufficiently diverse population of teammates during training. Recently, some population-based methods proposed to improve the generalization of agents by constructing a diverse population of partners in different ways, succeeding in video games (Jaderberg et al., 2017; 2019; Carroll et al., 2019; Strouse et al., 2021) and card games (Hu et al., 2020; Andrei et al., 2021). Furthermore, to better evaluate HAC agents, several objective as well as subjective metrics have been proposed (Du et al., 2020; Siu et al., 2021; McKee et al., 2022). However, the policy space in complex MOBA
*These authors contributed equally to this work.
games is enormous (Gao et al., 2021) and requires massive computing resources to build a sufficiently diverse population of agents, posing a big obstacle to the scalability of these methods.
The communication ability to explicitly share information with others is important for agents to collaborate effectively with humans (Dafoe et al., 2020). In Multi-Agent Reinforcement Learning (MARL), communication is often used to improve inter-agent collaboration. Previous work (Sukhbaatar et al., 2016; Foerster et al., 2016; Lazaridou et al., 2016; Peng et al., 2017; Mordatch & Abbeel, 2018; Singh et al., 2018; Das et al., 2019; Wang et al., 2020) mainly focused on exploring communication protocols between multiple agents. Other work (Ghavamzadeh & Mahadevan, 2004; Jiang & Lu, 2018; Kim et al., 2019) proposed to model the value of multi-agent communication for effective collaboration. However, these methods all model communication in latent spaces without considering the human-interpretable common ground (Clark & Brennan, 1991; Stalnaker, 2002) or lingua franca Kambhampati et al. (2022), making themselves less interpretable to humans. Explicit communication dominated by natural language is often considered in human-robot interaction (Kartoun et al., 2010; Liu et al., 2019; Shafti et al., 2020; Gupta et al., 2021). However, these studies are mainly limited to collaboration between a robot and a human through one-way communication, i.e., humans give robots orders. Therefore, there is still a large room to study RL with the participation of humans.
Success in MOBA games requires subtle individual micro-operations and excellent communication and collaboration among teammates on macro-strategies, i.e., long-term intentions (Wu, 2019; Gao et al., 2021). The micro-operation ability of the existing State-Of-The-Art (SOTA) MOBA agents has exceeded the high-level (top 1%) humans (Ye et al., 2020a). However, these agents’ macro-strategies are deterministic and quite different from those of humans (Ye et al., 2020a). Moreover, all existing SOTA MOBA AI systems lack bridges for explicit communication between agents and humans on macro-strategies. These result in the agent’s behavior not being understood immediately by humans (Ye et al., 2020a) and not performing well when collaborating with humans (see Section 4.3).
To this end, we propose an efficient and interpretable Meta-Command Communication-based humanagent collaboration framework, dubbed MCC, to achieve effective HAC in MOBA games through explicit communication. First, we design an interpretable communication protocol, i.e., the MetaCommand, as a general representation of macro-strategies to bridge the communication gap between agents and humans. Both macro-strategies sent by humans and messages outputted by agents can be converted into unified meta-commands (see Figure 1(b)). Second, following Gao et al. (2021), we construct a hierarchical model that includes the command encoding network (macro-strategy layer) and the meta-command conditioned action network (micro-action layer), used for agents to generate and execute meta-commands, respectively. Third, we propose a meta-command value estimator, i.e., the Meta-Command Selector, to select the optimal meta-command for each agent to execute. The training process of the MCC agent consists of three phases. We first train the command encoding network to ensure that the agent learns the distribution of meta-commands sent by humans. Afterward, we train the meta-command conditioned action network to ensure that the agent learns to execute meta-commands. Finally, we train the meta-command selector to ensure that the agent learns to select the optimal meta-commands to execute. We train and evaluate the agent in Honor of Kings 5v5 mode with a full hero pool (over 100 heroes). Experimental results demonstrate the effectiveness of the MCC framework. In general, our contributions are as follows:
• To the best of our knowledge, we are the first to investigate the HAC problem in MOBA games. We propose the MCC framework to achieve effective HAC in MOBA games.
• We design the Meta-Command to bridge the communication gap between humans and agents. We also propose the Meta-Command Selector to model the agent’s value system for meta-commands.
• We introduce the training process of the MCC agent in a typical MOBA game Honor of Kings and evaluate it in practical human-agent collaboration tests. Experimental results show that the MCC agent can reasonably collaborate with different levels and numbers of human teammates.
2 BACKGROUND
2.1 MOBA GAMES
MOBA games have recently received much attention from researchers, especially Honor of Kings (Wu, 2019; Ye et al., 2020a;b;c; Gao et al., 2021), one of the most popular MOBA games worldwide. The gameplay is to divide ten players into two camps to compete on the same map. The game environment is shown in Figure 1(a). Each camp competes for resources through individual micro-operations (A1, A2) and team collaboration on macro-strategies (B), and finally wins the game by destroying the enemy’s crystal. Players can communicate and collaborate with teammates through the in-game signaling system. Particularly, players can send macro-strategies by dragging signal buttons (C1, C2) to the corresponding locations in the mini-map (D), and these signals display to teammates in the mini-map (E). See Appendix A for detailed game introductions.
2.2 HUMAN-AGENT COLLABORATION
We consider an interpretable communicative human-agent collaboration task, which can be extended from the Partially Observable Markov Decision Process (POMDP) and formulated as a tuple < N,H,S,AN ,AH ,O,M, r, P, γ >, where N and H represent the numbers of agents and humans, respectively. S is the space of global states. AN = {ANi }i=1,...,N and AH = {AHi }i=1,...,H denote the spaces of actions of N agents and H humans, respectively. O = {Oi}i=1,...,N+H denotes the space of observations of N agents and H humans. M represents the space of interpretable messages. P : S × AN × AH → S and r : S × AN × AH → R denote the shared state transition probability function and reward function of N agents, respectively. Note that r includes both individual rewards and team rewards. γ ∈ [0, 1) denotes the discount factor. For each agent i in state st ∈ S, it receives an observation oit ∈ Oi and a selected message cit ∈ M, and then outputs an action ait = πθ(o i t, c i t) ∈ ANi and a new message mit+1 = πϕ(oit) ∈ M, where πθ and πϕ are action network and message encoding network, respectively. A message selector cit = πω(o i t, Ct) is introduced to select a message cit from a message set Ct = {mit}i=1,...,N+H ⊂ M. We divide the HAC problem in MOBA games into the Human-to-Agent (H2A) and the Agent-toHuman (A2H) scenarios. The H2A Scenario: Humans send their macro-strategies as messages to agents, and agents select the optimal one to collaborate with humans based on their value systems. The A2H Scenario: Agents send their messages as macro-strategies to humans, and humans select the optimal one to collaborate with agents based on their value systems. The goal of both scenarios is that agents and humans communicate macro-strategies with pre-defined communication protocols and then select valuable macro-strategies for effective collaboration to win the game.
3 META-COMMAND COMMUNICATION-BASED FRAMEWORK
In this section, we present the proposed MCC framework in detail. We first briefly describe three key stages of the MCC framework (Section 3.1). Then we introduce its two pivotal modules: 1) an interpretable communication protocol, i.e., the Meta-Command, as a general representation of macro-strategies to bridge the communication gap between agents and humans (Section 3.2); 2) a meta-command value estimator, i.e., the Meta-Command Selector, to model the agent’s value system for meta-commands to achieve effective HAC in MOBA games (Section 3.3).
3.1 OVERVIEW
The process of the MCC framework consists of three stages: (I) the Meta-Command Conversion Stage, (II) the Meta-Command Communication Stage, and (III) the Human-Agent Collaboration
Stage, as shown in Figure 2. Notably, Stage I and II are executed at each communication step, and Stage III is executed at each time step. At Stage I, the MCC framework converts humans’ explicit messages and agents’ implicit messages into unified meta-commands mHt ,mt, respectively, and broadcasts them to all agents and humans. At Stage II, the MCC framework estimates the values of all received meta-commands Ct and selects the optimal one ct ∈ Ct for each agent to execute. The selected meta-command will remain unchanged between each two communication steps (e.g. within [t, t+ Tmc) time steps). At stage III, the MCC framework predicts a sequence of actions for each agent to perform based on its selected meta-command ct. In each game, humans and agents collaborate multiple times, that is, execute the three stages multiple times, to win the game.
3.2 META-COMMAND
We divide a macro-strategy into three components: where to go, what to do, and how long. For example, a macro-strategy can be Come And Kill The Dragon, which consists of Come To The Dragon Location (where to go), Attack The Dragon (what to do), and Until The Dragon Is Killed (how long). Thus, a general representation of macro-strategies, i.e., the Meta-Command, can be formulated as a tuple < L,E, Tmc >, as shown in Figure 1(b), where L is the Location to go, E is the Event to do after reaching L, and Tmc is the Time Limit for executing the meta-command.
Meta-Command Conversion. To realize bidirectional interpretable human-agent communication, the MCC framework converts humans’ explicit messages and agents’ implicit messages into unified metacommands. To achieve the former, we use the Command Converter function f cc (Appendix B.6.1) to extract corresponding location LH and event EH from explicit messages sent by humans in the in-game signaling system. To achieve the latter, we use a command encoder network (CEN) πϕ(m|o) to generate L and E based on the agent’s observation o. The CEN is trained via supervised learning (SL) with the goal of learning the distribution of meta-commands sent by humans (Appendix B.6.2). In MOBA game settings, we use a common location description, i.e., divide L of meta-commands in the map into 144 grids. Since the macro-strategy space is enormous (Gao et al., 2021), customizing corresponding rewards for each specific event to train the agent is not conducive to generalization and is even impossible. Instead, we train a micro-action network to learn to do optimal event E∗ at location L, just as humans do optimal micro-operations at location L based on their own value systems. We also do not specify a precise Tmc for the execution of each specific meta-command. Instead, we set Tmc to how long it takes a human to complete a macro-strategy in MOBA games. Usually, 20 seconds corresponds to an 80% completion rate, based on our statistics (Appendix B.6.2). Thus, the MCC framework converts humans’ explicit messages into meta-commands mH =< LH , EH , Tmc >, generates meta-commands m =< L,E∗, Tmc > for agents based on their observations (Figure 2(I)), and then broadcasts them to all agents and humans.
Meta-Command Execution. After receiving a meta-command candidate set, agents can select one meta-command from it to execute. Note that the MCC framework will replace EH with E∗ when the agent selects a meta-command from humans. We adopt the MCCAN πθ(a|o,m) for agents to perform actions based on the selected meta-command, as shown in Figure 3(a)(II). The MCCAN is trained via
self-play RL with the goal of achieving a high completion rate for the meta-commands while ensuring that the win rate is not reduced. To achieve this, we introduce extrinsic rewards r (including individual and team rewards) and intrinsic rewards rintt (st,mt, st+1) = ∣∣f ce(st)−mt∣∣ −∣∣f ce(st+1)−mt∣∣, where f ce extracts the agent’s location from state st, and
∣∣f ce(st)−mt∣∣ is the distance between the agent’s location and the meta-command’s location at time step t. Intuitively, the intrinsic rewards are adopted to guide the agent to reach L of the meta-command and stay at L to do some event E. The extrinsic rewards are adopted to guide the agent to perform optimal actions to reach L and do optimal event E∗ at L. Overall, the optimization objective is maximizing the expectation over extrinsic and intrinsic discounted total rewards Gt = Es∼dπθ ,a∼πθ [∑∞ i=0 γ irt+i + α ∑Tmc j=0 γ jrintt+j ] , where
dπ(s) = limt→∞ P ( st = s | s0, π ) is the probability when following π for t time steps from s0. We use α to weigh the intrinsic and extrinsic rewards.
After training the CEN and the MCCAN, we can achieve HAC by simply setting an agent to randomly select a meta-command derived from humans to execute. However, such collaboration is nonintelligent and can even be a disaster for game victory because agents have no mechanism to model meta-commands’ values and cannot choose the optimal meta-command to execute. While humans usually choose the optimal one based on their value systems for achieving effective collaboration to win the game. Thus, we propose a meta-command value estimator to model the agent’s value systems for meta-commands, as described in the following subsection.
3.3 META-COMMAND SELECTOR
In MOBA games, the same macro-strategy often has different values for different humans in different situations. For example, a macro-strategy can be Come And Kill The Dragon, as shown in Figure 1(b). It is more valuable for humans A and B and agent D to collaborate. However, another macro-strategy Clean Up Top-Lane Minions is more valuable for human C and agent E than the others. Therefore, agents must select the most valuable meta-command from the received meta-command candidate set C to achieve effective human-agent collaboration. We propose a meta-command value estimator, i.e., the Meta-Command Selector (CS) πω(o, C), to estimate the values of all received meta-commands and select the most valuable one for each agent to execute.
CS Optimization Objective. Typically, executing a meta-command involves reaching location L and doing event E, of which the latter is more important to the value of the meta-command. For example, for the meta-command Come And Kill The Dragon, if the event Kill The Dragon cannot be done within Tmc time steps, then it is pointless to Come To The Dragon. Thus, the optimization objective of CS is to select the optimal meta-command m∗t = πω(ot, Ct) for each agent to maximize the expected discounted meta-command execution return Gmct ,
$$G^{mc}_t = \mathbb{E}_{s \sim d^{\pi_\theta},\, m \sim \pi_\omega,\, a \sim \pi_\theta}\Big[\sum_{i=0}^{\infty} \gamma_{mc}^{i}\, R^{mc}_{t+i\cdot T_{mc}}\Big], \qquad R^{mc}_t = \underbrace{\sum_{i=0}^{T_L} r_{t+i}}_{\text{(I)}} + \beta \underbrace{\sum_{j=T_L}^{T_{mc}} r_{t+j}}_{\text{(II)}},$$
where ot ∈ O, Ct is the meta-command candidate set in state st, γmc ∈ [0, 1) is the discount factor, and Rmct is a generalized meta-command execution reward function. For R mc t , (I) and (II) are the total extrinsic rewards r before reaching location L and doing event E, respectively. TL ≤ Tmc is the time for reaching L, and β > 1 is a trade-off parameter representing the relative importance of E.
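To spell out the two-part execution reward $R^{mc}_t$, the sketch below sums the extrinsic rewards collected before and after the agent reaches location L and weights the post-arrival part by β > 1. Variable names and the example numbers are illustrative; the overlap of the two sums at step T_L mirrors the formula as written.

```python
def meta_command_reward(rewards, t_reach, beta=2.0):
    """R_mc = sum_{i<=T_L} r_i + beta * sum_{T_L<=j<=T_mc} r_j.

    `rewards` holds the extrinsic rewards over the T_mc execution window and
    `t_reach` (T_L) is the step at which the agent reached location L.
    """
    before = sum(rewards[: t_reach + 1])          # part (I): getting to L
    after = sum(rewards[t_reach:])                # part (II): doing event E at L
    return before + beta * after

# Example: the agent reaches L at step 3 of a 6-step window.
print(meta_command_reward([0.1, 0.0, 0.2, 0.0, 1.0, 0.5], t_reach=3, beta=2.0))
```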
CS Training Process. We construct a self-play training environment for CS, where agents can send messages to each other, as shown in Figure 3(a)(III). Specifically, three tricks are adopted to increase the sample efficiency while ensuring efficient exploration. First, each meta-command m is sampled with the argmax rule from the results outputted by the pre-trained CEN. Second, each agent sends its meta-command with a probability p every Tmc time steps. Finally, each agent selects the final meta-command c sampled with the softmax rule from its CS output results and hands it over to the pre-trained MCCAN for execution. We use the multi-head value mechanism (Ye et al., 2020a) to model the value of the meta-command execution, which can be formulated as:
$$L_V(\omega) = \mathbb{E}_{O,C}\Big[\sum_{\mathrm{head}_k} \|G^{mc}_k - V^k_\omega(O,C)\|^2\Big],$$
where $V^k_\omega(S,C)$ is the value of the $k$-th head. For DQN-based methods (Mnih et al., 2015; Van Hasselt et al., 2016; Wang et al., 2016), the Q loss is:
$$L_Q(\omega) = \mathbb{E}_{O,C,M}\big[\|G_{total} - Q^k_\omega(O,C,M)\|^2\big], \qquad G_{total} = \sum_{\mathrm{head}_k} w_k G^{mc}_k,$$
where $w_k$ is the weight of the $k$-th head and $G^{mc}_k$ is the Temporal Difference (TD) estimated value error $R^{mc}_k + \gamma_{mc} V^k_\omega(O', C') - V^k_\omega(O,C)$.
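A minimal PyTorch-style sketch of the multi-head value loss and the bootstrapped target described above is given below. The head count, tensor shapes, and the exact way the bootstrap value is detached are assumptions about details the text leaves open.

```python
import torch

def multi_head_value_loss(value_heads, targets):
    """L_V = E[ sum_k || G_mc_k - V_k ||^2 ] over a batch.

    value_heads, targets: tensors of shape (batch, num_heads).
    """
    return ((targets - value_heads) ** 2).sum(dim=1).mean()

def td_target(reward_mc, next_value, gamma_mc=0.95):
    """One-step target R_mc + gamma_mc * V(O', C') (no gradient through the bootstrap)."""
    return reward_mc + gamma_mc * next_value.detach()

# Toy example with 4 value heads and a batch of 2 decision points.
v = torch.randn(2, 4, requires_grad=True)
g = td_target(torch.randn(2, 4), torch.randn(2, 4))
loss = multi_head_value_loss(v, g)
loss.backward()
print(float(loss))
```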
CS Model Structure. We design a general network structure for CS, as shown in Figure 3(b). In MOBA games, the meta-commands in adjacent regions have similar values. Thus, we divide the meta-commands in the map into grids, a common location description for MOBA games, and use the shared Convolutional Neural Network (CNN) to extract region-related information to improve the generalization of CS to adjacent meta-commands. Then, the map embeddings of all received meta-commands are integrated into a map set embedding by max-pooling. Besides, we use the gating mechanism (Liu et al., 2021) to fuse the map set embedding and the state embedding of the observation information. Finally, to directly construct the relationship between the observation information and each meta-command, we introduce a target attention module, where the query is the fused embedding and the key is the map embedding of each meta-command. The fused embedding is fed into the subsequent state-action value network Q(o, C,m) and state value network V (o, C) of CS. In this way, we can also easily convert the state-action value network Q(o, C,m) to the policy network π(m|o, C). Thus, the CS model structure can be easily applied to the most popular RL algorithms, such as PPO (Schulman et al., 2017), DQN (Mnih et al., 2015), etc.
4 EXPERIMENTS
In this section, we evaluate the proposed MCC framework by performing both agent-only and humanagent experiments in Honor of Kings. All experiments were conducted in the 5v5 mode with a full hero pool (over 100 heroes, see Appendix A.4).
4.1 EXPERIMENTAL SETUP
Due to the complexity of MOBA games and limited resources, we train the CEN, the MCCAN, and the CS sequentially instead of training the MCC framework jointly. Specifically, we first train the CEN via SL until it converges for 26 hours using 8 NVIDIA P40 GPUs. The batch size of each GPU is set to 512. Then, we train the MCCAN by fine-tuning the pre-trained WuKong model (Ye et al., 2020a) conditioned on the meta-command sampled from the pre-trained CEN. The MCCAN is trained until it converges for 48 hours using a physical computer cluster with 63,000 CPUs and 560 NVIDIA V100 GPUs. The batch size of each GPU is set to 256. The parameter α is set to 16. After that, we train the CS via self-play until it converges for 24 hours using a physical computer cluster with 70,000 CPUs and 680 NVIDIA V100 GPUs. The batch size of each GPU is set to 256. The parameter β is set to 2. Each agent sends a meta-command with a probability p of 0.8 and an interval Tmc of 20s, as shown in Figure 4(a). For the entire training process of the MCC framework, the location L of meta-commands in the game map is divided into 144 grids, and the time limit Tmc for the meta-command execution is set to 20s. Finally, we obtain the trained MCC agent that can receive meta-commands from other agents and humans and select the most valuable one to execute.
To evaluate the performance of the MCC agent, we conduct both agent-only and human-agent experiments and compare the MCC agent with three different types of agents: the MC-Base agent (agent only executes its own meta-command without communication), the MC-Rand agent (agent randomly selects a meta-command to execute), and the MC-Rule agent (agent selects the nearest meta-command to execute). Note that the MC-Base agent can be considered as a State-Of-The-Art (SOTA) in Honor of Kings since it maintains the same capabilities as the WuKong agent (Ye et al., 2020a) (Appendix B.6.3). Results are reported over five random seeds.
4.2 AGENT-ONLY COLLABORATION
Directly evaluating agents with humans is expensive, which is not conducive to model selection and iteration. Instead, we built two agent-only testing environments, Test I and Test II, to evaluate agents, as shown in Figure 4(b). Test I is a complex environment where all agent teammates can send and receive meta-commands simultaneously with an interval of 20s. Test I evaluates the agents’ performance under complex situations. Test II is a simple environment to simulate most practical game scenarios, where at most one human can send his/her macro-strategy at a time step. Thus, in Test II, only one agent is randomly selected to send its meta-command with an interval of 20s, and the other agents only receive meta-commands. See the detailed experimental results of the CEN and MCCAN in Appendixes B.6.2 and B.6.3, respectively.
Finding 1: MCC outperforms all baselines.
We first evaluate the capabilities of the MCC agent and baselines to examine the effectiveness of CS. Figure 5(a) and (b) show the win rates (WRs) of four types of agent teams who play against each other for 600 matches in Test I and Test II, respectively. Figure 5(c) demonstrates the final Elo scores (Coulom, 2008) of these agents. We see that the MCC agent team significantly outperforms the MC-Rand and MC-Rule agent teams. This indicates that compared to selecting meta-commands randomly or by specified rules, the CS can select valuable meta-commands for agents to execute, resulting in effective collaboration. Such collaboration manners in the MCC agent team can even be conducive to winning the game, bringing about 10% WR improvement against the MC-Base agent team. On the contrary, the unreasonable collaboration manners in the MC-Rand and MC-Rule agent teams can hurt performance, leading to significant decreases in the WR against the MC-Base agent team. Note that the MC-Base agent has the same capabilities as the WuKong agent (Ye et al., 2020a), the SOTA in Honor of Kings. Overall, the MCC agent achieves the highest Elo scores compared to all baselines in both testing environments, validating the effectiveness of CS. Notably, we also find that the WRs of the MCC agent in Test I and Test II are close, suggesting that the MCC agent can generalize to different numbers of meta-commands. We also investigate the influence of different components, including CNN feature extraction with the gating mechanism (Liu et al., 2021), target attention module, and optimization algorithms on the performance of CS (Appendix B.6.4).
4.3 HUMAN-AGENT COLLABORATION
In this section, we conduct an online experiment to evaluate the MCC agent and baselines in collaborating with humans, as shown in Figure 4(c). We contacted the game provider and got a
test authorization. The game provider helped us recruit 30 experienced participants with personal information stripped, including 15 high-level (top1%) and 15 general-level (top30%) participants. We used a within-participant design: m Human + n Agent (mH + nA) team mode to evaluate the performance of agents teaming up with different numbers of participants, where m+ n = 5. This design allowed us to evaluate both objective performances as well as subjective preferences.
All participants read detailed guidelines and provided informed consent before the testing. Participants tested 20 matches for the 1H + 4A team mode. High-level participants tested additional 10 matches for the 2H + 3A and the 3H + 2A team modes, respectively. After each test, participants reported their preference over the agent teammates. For fair comparisons, participants were not told the type of their agent teammates. The MC-Base agent team was adopted as the fixed opponent for all tests. To eliminate the effects of collaboration between agents, we prohibit communication between agents. Thus the agents can only communicate with their human teammates. See Appendix C for additional experimental details, including experimental design, result analysis, and ethical review.
Finding 1: Human-MCC team achieves the highest WR across team modes and human levels.
We first compare the human-agent team objective performance metrics supported by the MCC agent and baselines, as shown in Table 1. We see that the human-MCC team significantly outperforms all other human-agent teams across different team modes and human levels. This indicates that the MCC agent can generalize to different levels and numbers of human teammates. Note that the SOTA agent can easily beat the high-level human players (Nair, 2019; Chen, 2021). So as the number of participants increases, the WRs of all human-agent teams decrease. Surprisingly, the WR increased significantly when the participants teamed up with the MCC agent. We suspect that the human-MCC team has also achieved effective communication and collaboration on macro-strategies. To verify this, we count the Response Rates (RRs) of agents and participants to the meta-commands sent from their teammates, as shown in Table 2. We find that the RRs of the MCC agents to high-level participants (73.05%) and the high-level participants to the MCC agents (78.5%) are close to the RR of high-level participants themselves (74.91%). This suggests that the CS is close to the value system of high-level humans. Besides, the RRs of participants to the MCC agents (73.43% and 78.5%) are higher than those of the MC-Rand agents (41.07% and 35.69%), indicating that participants collaborated with the MCC agents more often and more effectively.
Finding 2: Participants prefer MCC over all baselines.
We then compare the subjective preference metrics, i.e., the Reasonableness of H2A, the Reasonableness of A2H, and the Overall Preference, reported by participants over their agent teammates, as
shown in Figure 6. Participants believed that the MCC agent responded more reasonably and gave the highest score in the Reasonableness of the H2A metric (Figure 6(a)). Besides, participants also believed that the meta-commands sent by the MCC agent are more aligned with their own value system and rated the MCC agent much better than the MC-Rand agent in the Reasonableness of A2H metric (Figure 6(b)). In general, participants were satisfied with the MCC agent over the other agents and gave the highest score in the Overall Preference metric (Figure 6(c)). The results of these subjective preference metrics are also consistent with the results of objective performance metrics.
4.4 COLLABORATIVE INTERPRETABILITY ANALYSIS
To better understand how the MCC agents and humans can collaborate effectively and interpretably, we visualize the comparison of CS and high-level participants’ value systems on a game scene with three meta-commands present, as shown in Figure 7. We see that the CS selects the meta-command B for the two heroes in the red dashed box to collaborate, selects the meta-command C for the two heroes in the purple dashed box to collaborate, and selects the meta-command A for the remaining hero to execute alone. The CS selection results are consistent with the ranking results of high-level participants, confirming the effectiveness of the collaboration behaviors between the MCC agents and humans. Such collaboration enhances the human interpretability of agent behavior.
5 CONCLUSION
In this work, we proposed an efficient and interpretable Meta-Command Communication-based framework, dubbed MCC, to achieve effective human-agent collaboration in MOBA games. To bridge the communication gap between humans and agents, we designed the Meta-Command - a common ground between humans and agents for bidirectional communication. To achieve effective collaboration, we constructed the Meta-Command Selector - a value estimator for agents to select valuable meta-commands to collaborate with humans. Experimental results in Honor of Kings demonstrate that MCC significantly outperforms all baselines in both objective performance and subjective preferences, and it could even generalize to different levels and numbers of humans.
A ENVIRONMENT DETAILS
A.1 GAME INTRODUCTION
Honor of Kings is one of the most popular MOBA games worldwide. The gameplay is to divide ten players into two camps to compete on the same symmetrical map. Players of each camp compete for resources through online confrontation, team collaboration, etc., and finally win the game by destroying the enemy’s crystal. The behaviors performed by players in the game can be divided into two categories: macro-strategies and micro-operations. Macro-strategy is long-distance scheduling or collaborating with teammates for quick resource competition, such as long-distance support for teammates, collaborating to compete for monster resources, etc. Micro-operation is the real-time behavior adopted by each player in various scenarios, such as skill combo release, evading enemy skills, etc. Complicated game maps, diverse hero combinations, diverse equipment combinations, and diverse player tactics make MOBA games extremely complex and exploratory.
A.2 GAME ENVIRONMENT
Figure 8 shows the UI interface of Honor of Kings. For fair comparisons, all experiments in this paper were carried out using a fixed released game engine version (Version 3.73 series) of Honor of Kings.
A.3 IN-GAME SIGNALING SYSTEM
Figure 9 demonstrates the in-game signaling system of Honor of Kings. Players can communicate and collaborate with teammates through the in-game signaling system. In the Human-Agent Collaboration Test, humans can send macro-strategies to agents through signals like A in Figure 9, and these signals are displayed to teammates in the form of D. The MCC framework converts these explicit messages, i.e., signals, into meta-commands by the hand-crafted command converter function f cc and broadcasts them to all agent teammates. Moreover, the MCC framework can also convert the meta-commands sent by agents into signals by the inverse of f cc and broadcast them to all human teammates.
Voice (B.2) and text (B.1 and B.3) are two other forms of communication. In the future, we will consider introducing a general meta-command encoding model that can handle all forms of explicit messages (signals, voice, and text).
A.4 HERO POOL
Table 3 shows the full hero pool and 20 hero pool used in Experiments. Each match involves two camps playing against each other, and each camp consists of five randomly picked heroes.
B FRAMEWORK DETAILS
B.1 INFRASTRUCTURE DESIGN
Figure 10 shows the infrastructure of the training system (Ye et al., 2020a), which consists of four key components: AI Server, Inference Server, RL Learner, and Memory Pool. The AI Server (the actor) covers the interaction logic between the agents and the environment. The Inference Server is used for the centralized batch inference on the GPU side. The RL Learner (the learner) is a distributed training environment for RL models. And the Memory Pool is used for storing the experience, implemented as a memory-efficient circular queue.
Training complex game AI systems often require a large number of computing resources, such as AlphaGo Lee Sedol (280 GPUs), OpenAI Five Final (1920 GPUs), and AlphaStar Final (3072 TPUv3
cores). We also use hundreds of GPUs for training the agents. A worthy research direction is to improve resource utilization using fewer computing resources.
B.2 REWARD DESIGN
Table 4 demonstrates the details of the designed environment reward.
Table 4: The details of the environment reward.
Head | Reward Item | Weight | Type | Description
Farming Related | Gold | 0.005 | Dense | The gold gained.
Farming Related | Experience | 0.001 | Dense | The experience gained.
Farming Related | Mana | 0.05 | Dense | The rate of mana (to the fourth power).
Farming Related | No-op | -0.00001 | Dense | Stop and do nothing.
Farming Related | Attack monster | 0.1 | Sparse | Attack monster.
KDA Related | Kill | 1 | Sparse | Kill an enemy hero.
KDA Related | Death | -1 | Sparse | Being killed.
KDA Related | Assist | 1 | Sparse | Assists.
KDA Related | Tyrant buff | 1 | Sparse | Get buff of killing tyrant, dark tyrant, storm tyrant.
KDA Related | Overlord buff | 1.5 | Sparse | Get buff of killing the overlord.
KDA Related | Expose invisible enemy | 0.3 | Sparse | Get visions of enemy heroes.
KDA Related | Last hit | 0.2 | Sparse | Last hitting an enemy minion.
Damage Related | Health point | 3 | Dense | The health point of the hero (to the fourth power).
Damage Related | Hurt to hero | 0.3 | Sparse | Attack enemy heroes.
Pushing Related | Attack turrets | 1 | Sparse | Attack turrets.
Pushing Related | Attack crystal | 1 | Sparse | Attack enemy home base.
Win/Lose Related | Destroy home base | 2.5 | Sparse | Destroy enemy home base.
B.3 FEATURE DESIGN
B.3.1 CEN
See Table 5.
B.3.2 MCCAN
See Table 6.
B.3.3 CS
See Table 7.
B.4 AGENT ACTION
Table 8 shows the action space of agents.
B.5 NETWORK ARCHITECTURE
B.5.1 CEN
Figure 11 shows the detailed model structure of CEN. The CEN network extracts game stats features and several unit features from the observation o. Each unit feature is encoded by shared MLP layers to obtain several unit embeddings, and the game stats features are encoded by MLP layers to obtain the game stats embedding. Finally, we concatenate the unit embeddings and the game stats embedding, and output the probability distribution of the meta-commands. The outputted meta-command indicates the macro-strategy for the future Tmc time steps.
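A compact PyTorch-style sketch of the CEN structure described above (a shared MLP over unit features, concatenation with the game-stats embedding, and a distribution over the 144 location grids) is given below. All layer sizes and the number of units are illustrative placeholders, not the paper's actual hyper-parameters.

```python
import torch
import torch.nn as nn

class CEN(nn.Module):
    """Command Encoder Network: observation -> distribution over meta-command locations."""

    def __init__(self, unit_dim=32, stats_dim=64, hidden=128, num_units=10, num_grids=144):
        super().__init__()
        self.unit_mlp = nn.Sequential(nn.Linear(unit_dim, hidden), nn.ReLU())   # shared over units
        self.stats_mlp = nn.Sequential(nn.Linear(stats_dim, hidden), nn.ReLU())
        self.head = nn.Linear(hidden * (num_units + 1), num_grids)

    def forward(self, unit_feats, game_stats):
        # unit_feats: (batch, num_units, unit_dim); game_stats: (batch, stats_dim)
        units = self.unit_mlp(unit_feats).flatten(1)          # concatenated unit embeddings
        stats = self.stats_mlp(game_stats)
        logits = self.head(torch.cat([units, stats], dim=1))
        return torch.softmax(logits, dim=-1)                  # probability over 144 grids

probs = CEN()(torch.randn(2, 10, 32), torch.randn(2, 64))
print(probs.shape)  # torch.Size([2, 144])
```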
B.5.2 MCCAN
Figure 12 shows the detailed model structure of MCCAN. The MCCAN predicts a sequence of actions for each agent based on its observation and the meta-command sampled from the Top-k Softmax distribution of CEN. The observations are processed by a deep LSTM, which maintains memory among steps. We use the target attention mechanism to improve the model prediction accuracy, and we design the action mask module to eliminate unnecessary actions for efficient exploration. To manage the uncertain value of state-action in the game, we introduce the multi-head value estimation (Ye et al., 2020a) into the MCCAN by grouping the extrinsic rewards in Table 4. We also treat the intrinsic rewards as a value head and estimate it using a separate value network. Besides, we introduce a value mixer module (Rashid et al., 2018) to model team value to improve the accuracy of the value estimation. All value networks consist of an FC layer with LSTM outputs as input and output a single value, respectively. Finally, following Ye et al. (2020a) and Gao et al. (2021), we adopt hierarchical heads of actions, including three parts: 1) What action to take; 2) who to target; 3) how to act.
Network Parameter Details. We develop 6 channels of spatial features read from the game engine with resolution 6*17*17, and then use 5*5 and 3*3 CNN to sequentially extract features. The LSTM unit sizes are 1024 and the time steps are 16. The k value in the CEN sampling is set to 20.
B.5.3 CS
Figure 13 shows the detailed model structure of CS. For each agent, the CS predicts the value Q(o, C,m) or probability π(m|o, C) of each meta-command m based on its observations o and all received meta-commands C. First, all received meta-commands (mH from human and m from
the CEN) are reshaped to a 2D image (12*12*1). Then, we use a shared CNN to extract the region-related information from meta-commands, i.e., $z^H_m = \mathrm{CNN}(m^H)$ and $z_m = \mathrm{CNN}(m)$. Besides, the map embeddings of all received meta-commands are integrated into a map set embedding by max-pooling, i.e., $z_{ms} = \text{Max-pooling}(z^H_m, z_m)$. After that, we use the Gating Mechanism to fuse the map set embedding and the state embedding $z_o = \mathrm{MLP}(o)$ of the observation information $o$. For the Gating Mechanism, we use the state embedding $z_o$ and the map set embedding $z_{ms}$ to calculate the attention to each other, that is, the gating $g_{ms} = \mathrm{MLP}(z_o)$ for $z_{ms}$ and the gating $g_o = \mathrm{MLP}(z_{ms})$ for $z_o$. The gating is used to retain attentional information and to fuse the two embeddings, i.e., $z_f = \mathrm{concat}[z_o \cdot g_o + z_o;\ z_{ms} \cdot g_{ms} + z_{ms}]$. Finally, the fused embedding $z_f$ and the map embeddings $\{z^H_m, z_m\}$ are used as the key $k_f$ and the queries $\{q^H_m, q_m\}$, respectively, and input into the Target Attention module to predict the state-action value (Q-value) or the probability. We also use the fused embedding to estimate the state value (V-value) from observations $o$ and all meta-commands $C$.
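The fusion and target-attention steps above can be written compactly as below. This is a minimal sketch following the stated formulas (g_ms = MLP(z_o), g_o = MLP(z_ms), z_f = concat[z_o·g_o + z_o; z_ms·g_ms + z_ms], then an attention score between the fused embedding and each command's map embedding). All dimensions, the dot-product scoring, and the choice of which side plays query versus key are assumptions, since the main text and the appendix describe that pairing differently.

```python
import torch
import torch.nn as nn

class GatedTargetAttention(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        self.gate_for_maps = nn.Sequential(nn.Linear(dim, dim), nn.Sigmoid())  # g_ms = MLP(z_o)
        self.gate_for_obs = nn.Sequential(nn.Linear(dim, dim), nn.Sigmoid())   # g_o  = MLP(z_ms)
        self.query_proj = nn.Linear(2 * dim, dim)

    def forward(self, z_o, z_maps):
        # z_o: (batch, dim) state embedding; z_maps: (batch, n_cmds, dim) map embeddings.
        z_ms = z_maps.max(dim=1).values                      # map set embedding (max-pooling)
        g_ms, g_o = self.gate_for_maps(z_o), self.gate_for_obs(z_ms)
        z_f = torch.cat([z_o * g_o + z_o, z_ms * g_ms + z_ms], dim=-1)  # fused embedding
        q = self.query_proj(z_f)                             # projected fused embedding
        scores = torch.einsum("bd,bnd->bn", q, z_maps)       # one score per meta-command
        return scores                                        # feed into Q(o, C, m) or a softmax policy

scores = GatedTargetAttention()(torch.randn(2, 128), torch.randn(2, 3, 128))
print(scores.shape)  # torch.Size([2, 3])
```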
B.6 DETAILS AND ANALYSIS OF COMPONENTS
B.6.1 META-COMMAND
Command Converter Function f cc. The f cc extracts the corresponding location and event information from human messages. Specifically, the game engine converts rough messages from the in-game signaling system into predefined protocol buffers (PB), and then the MCC framework uses the f cc to extract the location and event information from the received PB to construct the corresponding meta-command.
Command Extraction Function f ce. The f ce directly extracts the agent’s location from the current state (the position feature in Table 6), which is used to obtain the intrinsic rewards by calculating the "distance" from the agent to the currently executed meta-command.
B.6.2 CEN
Training Data. We extract meta-commands from expert game replay authorized by the game provider, which consist of high-level (top 1% player) license game data without identity information. The input features of CEN are shown in Table 5. The game replay consists of multiple frames, and the information of each frame is shown in Figure 8. We divide the location L of meta-commands in the map into 144 grids. And we set the event E to be all units (103 in total) in the game. For setting Tmc, we counted the player’s completion time for meta-commands from expert game replay, and the
results are shown in Figure 14. We can see that 80% of meta-commands can be completed within 20 seconds in Honor of Kings. Thus, Tmc is set to 300 time steps (20 seconds).
Given a state st in the trajectory, we first extract the observation ot for each hero. Then, we use a hand-crafted command extraction function f ce to extract the meta-command mt = f ce(st+Tmc) corresponding to the future state st+Tmc . By setting up labels in this way, we expect the CEN πϕ(m|o) to learn the mapping from the observation ot to its corresponding meta-command mt. The detailed training data extraction process is as follows:
• First, extract the trajectory τ = (s0, . . . , st, . . . , st+Tmc , . . . , sN ) from the game replay, where N is the total number of frames.
• Second, randomly sample some frames $\{t \mid t \in \{0, 1, \dots, N\}\}$ from the trajectory $\tau$.
• Third, extract feature $o_t$ from state $s_t$ for each sampled frame $t$.
• Fourth, extract the location $L_t$ and the event $E_t$ from the state $s_{t+T_{mc}}$ in frame $t + T_{mc}$.
• Fifth, use $L_t$ and $E_t$ to construct the label $m_t = \langle L_t, E_t, T_{mc} \rangle$.
• Finally, $\langle o_t, m_t \rangle$ is formed into a pair as a sample in the training data.
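The label-construction steps above can be summarized by the sketch below, which pairs each sampled observation with the location/event extracted from the state Tmc frames later. The frame structure and the extraction functions are placeholders standing in for the game-specific feature extraction and f_ce code; the 300-frame horizon follows the text.

```python
import random

T_MC_FRAMES = 300  # 20 seconds at 15 frames per second

def build_cen_dataset(trajectory, extract_obs, extract_command, num_samples=64):
    """Pair observations o_t with the meta-command realized at frame t + T_mc.

    `trajectory` is a list of states; `extract_obs` and `extract_command` stand in
    for the feature-extraction and f_ce functions described in the appendix.
    """
    max_t = len(trajectory) - T_MC_FRAMES - 1
    samples = []
    for t in random.sample(range(max_t), min(num_samples, max_t)):
        o_t = extract_obs(trajectory[t])
        m_t = extract_command(trajectory[t + T_MC_FRAMES])   # <L_t, E_t, T_mc> label
        samples.append((o_t, m_t))
    return samples

# Usage with trivial placeholder extractors over a dummy trajectory of 1000 frames.
data = build_cen_dataset(list(range(1000)), extract_obs=lambda s: s,
                         extract_command=lambda s: ("L", "E", T_MC_FRAMES))
print(len(data), data[0])
```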
Optimization Objective. After obtaining the dataset {< o,m >}, we train the CEN πϕ(m|o) via supervised learning (SL). Due to the imbalance of samples at different locations of the metacommands, we use the focal loss (Lin et al., 2017) to alleviate this problem. Thus, the optimization objective is:
$$L_{SL}(\phi) = \mathbb{E}_{O,M}\big[-\alpha\, m\, (1 - \pi_\phi(o))^{\gamma} \log(\pi_\phi(o)) - (1-\alpha)(1-m)\, \pi_\phi(o)^{\gamma} \log(1 - \pi_\phi(o))\big],$$
where $\alpha = 0.75$ is the balanced weighting factor for the positive class ($m = 1$) and $\gamma = 2$ is the tunable focusing parameter. Adam (Kingma & Ba, 2014) is adopted as the optimizer with an initial learning rate of 0.0001. In particular, we compute the focal loss for $L$ and $E$ separately.
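A minimal PyTorch sketch of the binary focal loss used above, applied to the CEN's per-grid location predictions, is shown below. It follows the standard Lin et al. (2017) form with α = 0.75 and γ = 2 and is not the authors' exact code; tensor shapes are assumed.

```python
import torch

def focal_loss(pred, target, alpha=0.75, gamma=2.0, eps=1e-8):
    """Binary focal loss over per-grid predictions.

    pred:   probabilities in (0, 1), shape (batch, 144)
    target: one-hot location labels, shape (batch, 144)
    """
    pos = -alpha * target * (1 - pred) ** gamma * torch.log(pred + eps)
    neg = -(1 - alpha) * (1 - target) * pred ** gamma * torch.log(1 - pred + eps)
    return (pos + neg).sum(dim=1).mean()

pred = torch.rand(4, 144)
target = torch.zeros(4, 144)
target[torch.arange(4), torch.randint(0, 144, (4,))] = 1.0
print(float(focal_loss(pred, target)))
```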
Experimental Results. Figure 15 shows the meta-command distributions of the initial CEN, the converged CEN, and high-level humans. We see that the meta-commands predicted by the CEN gradually converge from chaos to the meta-commands with important positions. The Kullback-Leibler (KL) divergence of the meta-command distribution between the CEN and high-level humans decreases from 4.96 to 0.44 as training converges. The distribution of the converged CEN in Figure 15(b) is close to the distribution of high-level humans in Figure 15(c), suggesting that the CEN can simulate the generation of human meta-commands in real games.
B.6.3 MCCAN
Training Environment. We use a similar training environment as the WuKong agent and the OpenAI Five agent, i.e., the agent interacts with the environment in a self-play manner. Specifically, for
the MCCAN training, each of the 10 agents always executes its own meta-command generated by the CEN. And for each agent, every Tmc time steps, the pre-trained CEN πϕ(m|o) generates a meta-command mt based on its observation ot. The meta-command mt is then kept for Tmc time steps and sent to the MCCAN continuously. During the interval [t, t+ Tmc], the MCCAN predicts a sequence of actions based on ot and mt for the agent to perform.
Optimization Objective. The MCCAN is trained with the goal of achieving a higher completion rate for the meta-commands generated by the pre-trained CEN while ensuring that the win rate is not reduced. To achieve this, we introduce extrinsic rewards (including individual and team rewards) and intrinsic rewards
$$r^{int}_t(s_t, m_t, s_{t+1}) = \|f^{ce}(s_t) - m_t\| - \|f^{ce}(s_{t+1}) - m_t\|,$$
where $f^{ce}$ extracts the agent’s location from state $s_t$, and $\|f^{ce}(s_t) - m_t\|$ is the distance between the agent’s location and the meta-command’s location in time step $t$. Intuitively, the intrinsic rewards are adopted to guide the agent to the location $L$ of the meta-command and stay at $L$ to do some event $E$. The extrinsic rewards are adopted to guide the agent to perform optimal actions to reach $L$ and do the optimal event at $L$. Overall, the optimization objective is maximizing the expectation over extrinsic and intrinsic discounted total rewards $G_t = \mathbb{E}_{s \sim d^{\pi_\theta}, a \sim \pi_\theta}\big[\sum_{i=0}^{\infty} \gamma^i r_{t+i} + \alpha \sum_{j=0}^{T_{mc}} \gamma^j r^{int}_{t+j}\big]$, where $d^{\pi}(s) = \lim_{t\to\infty} P(s_t = s \mid s_0, \pi)$ is the probability when following $\pi$ for $t$ steps from $s_0$. We use $\alpha$ to weigh the intrinsic rewards and the extrinsic rewards. The experimental results about the influence of $\alpha$ on the performance of the MCCAN are shown in Figure 16.
Training Process. The MCCAN is trained by fine-tuning a pre-trained WuKong model (Ye et al., 2020a) conditioned on the meta-command sampled from the pre-trained CEN. Note that the WuKong model is the State-Of-The-Art (SOTA) model in Honor of Kings, which can easily beat the high-level human players 1 2.
We also modified the Dual-clip PPO algorithm (Ye et al., 2020a) to introduce the meta-command $m$ into the policy $\pi_\theta(a_t|o_t, m_t)$ and the advantage estimation $A_t = A(a_t, o_t, m_t)$. The Dual-clip PPO algorithm introduces another clipping parameter $c$ to construct a lower bound for $r_t(\theta) = \frac{\pi_\theta(a_t|o_t, m_t)}{\pi_{\theta_{old}}(a_t|o_t, m_t)}$ when $A_t < 0$ and $r_t(\theta) \gg 0$. Thus, the policy loss is:
$$L_\pi(\theta) = \mathbb{E}_{s,m,a}\big[\max\big(cA_t,\ \min(\mathrm{clip}(r_t(\theta), 1-\tau, 1+\tau)A_t,\ r_t(\theta)A_t)\big)\big],$$
where $\tau$ is the original clip parameter in PPO. And the multi-head value loss is:
$$L_V(\theta) = \mathbb{E}_{s,m}\Big[\sum_{\mathrm{head}_k} (G^k_t - V^k_\theta(o_t, m_t))^2\Big], \qquad V_{total} = \sum_{\mathrm{head}_k} w_k V^k_\theta(o_t, m_t),$$
where $w_k$ is the weight of the $k$-th head and $V^k_\theta(o_t, m_t)$ is the $k$-th value.
1 Wukong AI beats human players to win Honour Of Kings mobile game, https://www.thestar.com.my/tech/tech-news/2019/08/07/tencent039s-ai-beats-human-players-to-win-honour-of-kings-mobile-game
2 Honor of Kings Wukong AI 3:1 defeated the human team, https://www.sportsbusinessjournal.com/Esports/Sections/Technology/2021/07/HoK-AI-Battle.aspx?hl=KPL&sc=0
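The dual-clip policy loss above can be sketched as follows; maximizing the surrogate objective corresponds to minimizing its negation. Batching details, advantage normalization, and the particular values of c and τ are illustrative assumptions, and this is the standard dual-clip form rather than the authors' exact code.

```python
import torch

def dual_clip_ppo_loss(ratio, advantage, tau=0.2, c=3.0):
    """Dual-clip PPO surrogate: standard PPO clipping, plus a lower bound c*A when A < 0.

    ratio:     pi_theta(a|o,m) / pi_theta_old(a|o,m), shape (batch,)
    advantage: estimated advantages A_t, shape (batch,)
    Returns a loss to minimize (the negative surrogate objective).
    """
    clipped = torch.clamp(ratio, 1.0 - tau, 1.0 + tau)
    surrogate = torch.min(ratio * advantage, clipped * advantage)  # standard PPO term
    dual = torch.max(surrogate, c * advantage)                     # extra bound; only binds when A < 0
    objective = torch.where(advantage < 0, dual, surrogate)
    return -objective.mean()

loss = dual_clip_ppo_loss(torch.tensor([0.5, 2.0, 5.0]), torch.tensor([1.0, -1.0, -1.0]))
print(float(loss))
```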
Experimental Results. We conducted experiments to explore the influence of the extrinsic and intrinsic reward trade-off parameter α on the performance of MCCAN. The win rate and completion rate results are shown in Figure 16. As α increases, the completion rate of MCCAN gradually increases, and the win rate of MCCAN first increases and then decreases rapidly. Notably, we can train an agent with a completion rate close to 100% by increasing α, but this will significantly reduce the win rate because the meta-command executed is not necessarily optimal and may result in the death of agents.
When α = 16, the completion rate of the trained agent for meta-commands is 82%, which is close to the completion rate of humans (80%). And the win rate of the trained agent against the SOTA agents (Ye et al., 2020a; Gao et al., 2021) is close to 50%. Thus, we finally set α = 16 .
B.6.4 CS
Additional Results in Agent-Only Collaboration Testing. To eliminate the transitivity of WR, we added additional experiments to compare the MCC agent and baseline agents to the WuKong (SOTA) agent for each 600 matches in Test I Environment. The WRs (P1 versus P2) are shown in Table 9. We see that the effective CS mechanism in the MCC agent enables the agents to communicate and collaborate effectively, which improves the WR compared to the WuKong agent (SOTA).
Ablation Studies. We further investigate the influence of different components on the performance of CS, including CNN feature extraction with the gating mechanism (w/o CNN-GM), target attention module (w/o TA), and PPO optimization algorithm (MCC-PPO). We conduct ablation studies in Test I with a 20 hero pool. In practical games, meta-commands with adjacent regions often have similar intentions and values. Thus the response rate of the agent to adjacent meta-commands should be as close as possible. Besides, the higher the agent’s response rate to meta-commands, the more collaborative the agent’s behaviors. Thus we expect the response rate of CS to be as high as possible. Generally, we expect CS’s Response Rate (RR) to be as high as possible while ensuring that the Win Rate (WR) maintains.
Figure 17(a) demonstrates the WR of different CS ablation versions during the training process, and Figure 17(b) shows the converged WR-RR results. We see that after ablating the TA module, the WR and RR of CS reduce significantly, indicating that the TA module can improve the accuracy of CS to meta-commands. Besides, after ablating the CNN-GM module, the RR of CS is most affected, which is reduced by 20%. It indicates that without the CNN-GM module, the value estimation of CS to adjacent meta-commands is not accurate enough, resulting in missing some highly valuable meta-commands. We notice that the MCC and MCC-PPO in both metrics are close, confirming the versatility of the CS model structure.
C DETAILS OF HUMAN-AGENT COLLABORATION TEST
C.1 ETHICAL REVIEW
The ethics committee of a third-party organization, Tencent (the parent company of Honor of Kings), conducted an ethical review of our project. They reviewed our experimental procedures and risk avoidance methods (see Appendix C.1.2). They believed that our project complies with the "New Generation of AI Ethics Code" 3 of the country to which the participants belonged (China), so they approved our study. In addition, all participants consented to the experiment and provided informed consent (see Appendix C.1.1) for the study.
C.1.1 INFORMED CONSENT
All participants were told the following experiment guidelines before testing:
• This experiment is to study human-agent collaboration technology in MOBA games.
• Your identity information will not be disclosed to anyone.
• All game statistics are only used for academic research.
• You will be invited into matches where your opponents and teammates are agents.
• Your goal is to win the game as much as possible by collaborating with agent teammates.
• You can communicate and collaborate with agent teammates through the in-game signaling system.
• Agent teammates will also send you signals representing their macro-strategies, and you can judge whether to execute them based on your value system.
• After each test, you can report your preference over the agent teammates.
• Each game lasts 10-20 minutes.
• You may voluntarily choose whether to take the test. You can terminate the test at any time if you feel unwell during the test.
• After all tests are complete, you may voluntarily fill out a debrief questionnaire to tell us your open-ended feedback on the experiment.
• At any time, if you want to delete your data, you can contact the game provider directly to delete it.
If participants volunteer to take the test, they will first provide written informed consent, then we will provide them with the equipment and game account, and the test will begin.
3China: MOST issues New Generation of AI Ethics Code, https://www.dataguidance.com/n ews/china-most-issues-new-generation-ai-ethics-code
C.1.2 POTENTIAL PARTICIPANT RISKS
First, we analyze the risks of this experiment to the participants. The potential participant risks of the experiment mainly include the leakage of identity information and the time cost. And we have taken a series of measures to minimize these risks.
Identity Information. A series of measures have been taken to avoid this risk:
• All participants will be recruited with the help of a third party (the game provider of Honor of Kings), and we do not have access to participants’ identities.
• We make a risk statement for participants and sign an identity information confidentiality agreement under the supervision of a third party.
• We only use unidentifiable game statistics in our research, which are obtained from third parties.
• Special equipment and game accounts are provided to the participants to prevent equipment and account information leakage.
• The identity information of all participants is not disclosed to the public.
Time Cost. We will pay participants to compensate for their time costs. Participants receive $5 at the end of each game test, and the winner will receive an additional $2. Each game test takes approximately 10 to 20 minutes, and participants can get about an average of $20 an hour.
C.2 EXPERIMENTAL DETAILS
C.2.1 PARTICIPANT DETAILS
We contacted the game provider and got a test authorization. The game provider helped us recruit 30 experienced participants with personal information stripped, including 15 high-level (top1%) and 15 general-level (top30%) participants. All participants have more than three years of experience in Honor of Kings and promise to be familiar with all mechanics in the game, including the in-game signaling system in Figure 9.
And special equipment and game accounts are provided to each participant to prevent equipment and account information leakage. The game statistics we collect are only for experimental purposes and are not disclosed to the public.
C.2.2 EXPERIMENTAL DESIGN
We used a within-participant design: m Human + n Agent (mH + nA) team mode to evaluate the performance of agents teaming up with different numbers of participants, where m+ n = 5. Each participant is asked to randomly team up with three different types of agents, including the MC-Base agents, the MC-Rand agents, and the MCC agents. For fair comparisons, participants were not told the type of their agent teammates. The MC-Base agent team was adopted as the fixed opponent for all tests. Participants tested 20 matches for the 1H + 4A team mode. High-level participants tested additional 10 matches for the 2H + 3A and the 3H + 2A team modes, respectively. After each game test, participants reported their preference over the agent teammates.
We prohibit communication between agents to eliminate the effects of agent-agent collaboration. Thus the agents can only communicate with their human teammates. In each game test, humans can send the converted meta-commands whenever they think their macro-strategies are important. Furthermore, to make the agents behave like humans (at most one human sends his/her meta-command at a time step), we restrict agents from sending their meta-commands, i.e., only one agent sends a valuable meta-command to human teammates from the agents’ current meta-commands. Specifically, this process consists of several steps: (1) The MCC framework randomly chooses a human teammate (note that human and agent observations are shared); (2) The MCC framework uses his/her observation and all agents’ meta-commands as the input of the CS, and obtains the estimated values of these meta-commands; (3) The MCC framework selects the meta-command with the highest value; (4) The MCC framework selects the corresponding agent to send the meta-command. The above process is executed at an interval of 20 seconds.
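The four-step broadcast procedure above can be condensed into the following sketch. The `cs_values` callable stands in for the trained CS value estimates and is an assumed interface; the command strings and toy scoring rule are purely illustrative.

```python
import random

def pick_agent_command_to_send(agents_commands, human_observations, cs_values):
    """Choose one agent's meta-command to broadcast to the human teammates.

    agents_commands:     {agent_id: meta_command} for all agent teammates
    human_observations:  {human_id: observation}; observations are shared
    cs_values(obs, cmd): value of `cmd` under observation `obs` (pre-trained CS)
    """
    human_id = random.choice(list(human_observations))                            # step (1)
    obs = human_observations[human_id]
    values = {aid: cs_values(obs, cmd) for aid, cmd in agents_commands.items()}   # step (2)
    best_agent = max(values, key=values.get)                                      # steps (3)-(4)
    return best_agent, agents_commands[best_agent]

agent, cmd = pick_agent_command_to_send(
    {"A": "push_top", "B": "kill_dragon"}, {"H1": "obs"}, cs_values=lambda o, c: len(c))
print(agent, cmd)
```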
In addition, as mentioned in Ye et al. (2020a); Gao et al. (2021), the response time of agents is usually set to 193ms, including observation delay (133ms) and response delay (60ms). The average APM of
agents and top e-sport players are usually comparable (80.5 and 80.3, respectively). To make our test results more accurate, we adjusted the agents’ capability to match high-level humans’ performance by increasing the observation delay (from 133ms to 200ms) and response delay (from 60ms to 120 ms).
C.2.3 PREFERENCE DESCRIPTION
After each test, participants gave scores on several subjective preference metrics to evaluate their agent teammates, including the Reasonableness of H2A (how well agents respond to the meta-commands sent by participants), the Reasonableness of A2H (how reasonable the meta-commands sent by agents are), and the Overall Preference for agent teammates.
For each metric, we provide a detailed problem description and a description of the reference scale for the score. Participants rated their agent teammates based on how well their subjective feelings matched the descriptions in the test. The different metrics are described as follows:
• For the Reasonableness of H2A, "Do you think the agent teammates respond reasonably to your commands? Please evaluate the reasonableness according to the following scales."
1) Terrible: No response, or totally unreasonable. 2) Poor: Little response, or mostly unreasonable. 3) Normal: Response, but some unreasonable. 4) Good: Response, mostly reasonable. 5) Perfect: Response, and perfect.
• For the Reasonableness of A2H, "Do you think the commands sent by the agent teammates are reasonable? Please evaluate the reasonableness according to the following scales. Note that if you don’t receive any commands, please ignore this question."
1) Terrible: Totally unreasonable. 2) Poor: Low reasonable. 3) Normal: Some reasonable. 4) Good: Mostly reasonable. 5) Perfect: Totally reasonable.
• For the Overall Preference, "What is your overall preference for the agent teammates collaborating with you? Please rate the following words according to your subjective feelings: 1) Terrible; 2) Poor; 3) Normal; 4) Good; 5) Perfect".
C.2.4 ADDITIONAL SUBJECTIVE PREFERENCE RESULTS
Detailed subjective preference statistics are presented in Table 10. Compared with no collaboration (MC-Base) and random collaboration (MC-Rand), the participants preferred reasonable collaboration (MCC).
Reasonableness of H2A. Participants prefer MC-Rand over MC-Base, suggesting that they expect the agent to respond more to their commands. Nonetheless, the score of MCC is much higher than MC-Rand, indicating that participants prefer their commands to get reasonable rather than incorrect responses.
Reasonableness of A2H. Participants express a significant preference for MCC over MC-Rand, demonstrating that they agree with MCC’s commands and are more willing to collaborate. The results are consistent with Figure 7, where participants believe the commands sent from MCC align more with their own value system. Note that the non-collaborative setting prohibits MC-Base from sending any commands, so we made no statistics.
Overall Preference. Participants are satisfied with the MCC agent over other agents and give the highest score. By comparing MC-Base and MCC, we can observe that human-agent collaboration based on meta-command communication can bring a better impression of the human-agent team in the MOBA game. However, while MC-Rand is higher than MC-Base in the Reasonableness of H2A metric, it is lower than MC-Base in the Overall Preference metric. Therefore, collaboration is important but not as necessary as winning. This metric further confirms the reasonableness of MCC in human-agent collaboration.
D DISCUSSION
Limitations and future work. First, the training process of the MCC agent consumes vast computing resources like other SOTA MOBA agents. Thus, we will optimize the training process of existing MOBA agents, aiming to lower the threshold for researchers to study and reproduce work in MOBA games. Second, the meta-commands we proposed are generic to MOBA games and cannot be directly extended to other types of games, such as FPS and Massively Multiplayer Online (MMO). We will design a more general meta-command representation, such as natural language, and extend the MCC framework to other games. Third, we will apply the MCC agents to the friendly bots in the teaching mode of Honor of Kings. All in all, we would hope that this work can not only offer new ideas to researchers but also bring a better human-agent collaboration experience in practical applications. | 1. What is the main contribution of the paper regarding human-agent collaboration in MOBA games?
2. What are the strengths and weaknesses of the proposed framework, particularly in its conceptual simplicity and potential extension beyond MOBA or gaming?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content, especially regarding implementation details, missing reproducibility details, and inconsistent notation?
4. What are the specific questions raised by the reviewer regarding the paper, such as win rates, baselines, command representation, message dependence, training details, model used for cloning expert data, and others? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
The paper proposes a framework for human-agent collaboration in MOBA games. The framework consists of two main parts: i) a protocol of commands that humans and agents use to communicate with each other, and that can be expressed as a tuple of "what" the sender asks the receiver to do, for "how long" it has to be done, and "where" the action has to happen; and ii) a hierarchical RL approach the agents follow to learn what commands to send, and to which commands they should respond - the approach is hierarchical because the learning process is done sequentially in three independent stages: first, message generation is learnt by cloning the behaviour of expert data; second, the low-level actions required to accomplish a command are learnt by fine-tuning a pretrained model, such that the agents learn to complete the tasks described by the commands without decreasing the win rate; third, the agents learn to select which message to execute among all the messages received (e.g., one per human player). Experiments show the proposed framework results in increased win rates with respect to agents that do not communicate with other players (i.e., they only execute their own commands), and that human players prefer the proposed agents over agents that do not communicate.
Strengths And Weaknesses
Strengths
The proposed framework is conceptually simple and might be extended to other scenarios (beyond MOBA or even beyond gaming) where humans and agents communicate at the strategic level.
The whole design is full of interesting design choices that will likely inspire future research, like the reward function for fine-tuning the low-level actions network, the command representation as a grid that can be learnt with a CNN, or the command selector architecture that combines the gating mechanism with attention.
The paper establishes a baseline for a challenging and relevant problem.
Weaknesses
Many implementation details are missing (see comments on reproducibility below).
Clarity could be improved (see comments below)
Clarity, Quality, Novelty And Reproducibility
Quality and technical details
The baselines used for the experiments are rather poor:
In order to put the win rates in context, what are the win rates for 5A teams?
MC Rand seems useful as a sanity check only, and MC Base agents do not communicate at all. Hence, the relative results in the subjective experiments are not very illustrative. Preference of collaborating with other human (general and high) players is required, in other words, do human players prefer MCC over other human players?
Shouldn't $G_t$ include the expectation over meta-commands?
Why is the message $m_{t_i+1}$ dependent only on the observation, $o_{t_i}$, and not on the selected message, $c_{t_i}$?
It is said "MC-Base agent has the same capabilities as the WuKong agent". However, the MC-Base agents have been fine-tuned; could fine-tuning have decreased the performance w.r.t. the original WuKong agent?
Missing reproducibility details
Please include details on the hand-crafted command converter function $f^{cc}$ and the hand-crafted command extraction function $f^{ce}$. Could you please be more explicit about how these functions relate to Tables 4, 5 and 6?
Tables 5, 6 and 7 describe the data type and the dimensionality, could you be more specific on the values (for example, scalar value refers to the number of elements, is it a normalised value, etc.)?
Please provide details on the value networks $V^{int}_t(o_t, m_t)$ and $V^{ext}_t(o_t)$ used to train the meta-command conditioned action network.
Appendix B.5 explains high-level details of the networks, but lower-level details are missing, like the number of hidden layers, their size and activation functions, and the value of $k$ for the Top-$k$ SoftMax activation. In addition, explanations on the rationale for choosing these parameters would be much appreciated.
Please provide details on the model used for cloning the expert data (Sec. B.6.1 describes the loss function but not the model). Also, what is the total number of frames, $N$, per trajectory?
Which RL algorithm is used to train the Command Selector network? (PPO is mentioned for the low-level actions (MCCAN) but haven't found a similar discussion for the CS network).
Training details are also missing, for example B.6.1 mentions the initial learning rate, but says nothing about the learning rate schedule or the other parameters.
According to Figure 3, $V^{mc}_t$ and $Q^i_t$ seem to refer to multiple heads. Does $V^{mc}_t$ propagate to the shared CNN? Do the $h$ $Q^i_t$ heads propagate to the gating and observation MLPs?
In Figure 4(b), Test II, there is a simulated human. How has this agent been trained?
Is the number of steps at the human-agent collaboration stage, $n$, fixed? Or is it dependent on the meta-command? Can a command be interrupted by another meta-command (e.g., when an agent executing "kill the dragon", which could take longer than 20 seconds, receives other messages every 20 secs)?
How are the resources in each cluster used? Are the 63000 CPUs used to, e.g., run about 2.3 episodes each to feed the 560 GPUs?
Clarity
Notation is sometimes inconsistent. For example, is $V(h) = V^{mc}_t(o_t, C_t)$? Do we have one state-action value network per human and agent sending messages, right? Does $Q(h, m')$ refer to each $Q^i_t(o_t, C_t, m^i_t)$ for $i = 1 \dots M+N$, or to a single loss after combining the different attention heads?
Over which distribution is the expected value taken in the definition of the value of the meta-command, $L_V$? In other words, what are $S$ and $C$ in $V^k_\omega(S, C)$? I understand $C$ is the meta-command candidate set; is $S$ the current state?
Minor comments
What does “physical” in a "physical computer cluster" mean?
Could you give some insight on how the gating mechanism works for completeness? Does the attention mechanism establish which meta-commands are more relevant for the current observation? |
ICLR | Title
Rethinking learning rate schedules for stochastic optimization
Abstract
There is a stark disparity between the learning rate schedules used in the practice of large scale machine learning and what are considered admissible learning rate schedules prescribed in the theory of stochastic approximation. Recent results, such as in the ’super-convergence’ methods which use oscillating learning rates, serve to emphasize this point even more. One plausible explanation is that non-convex neural network training procedures are better suited to the use of fundamentally different learning rate schedules, such as the “cut the learning rate every constant number of epochs” method (which more closely resembles an exponentially decaying learning rate schedule); note that this widely used schedule is in stark contrast to the polynomial decay schemes prescribed in the stochastic approximation literature, which are indeed shown to be (worst case) optimal for classes of convex optimization problems. The main contribution of this work shows that the picture is far more nuanced, where we do not even need to move to non-convex optimization to show other learning rate schemes can be far more effective. In fact, even for the simple case of stochastic linear regression with a fixed time horizon, the rate achieved by any polynomial decay scheme is suboptimal compared to the statistical minimax rate (by a factor of condition number); in contrast the “cut the learning rate every constant number of epochs” provides an exponential improvement (depending only logarithmically on the condition number) compared to any polynomial decay scheme. Finally, it is important to ask if our theoretical insights are somehow fundamentally tied to quadratic loss minimization (where we have circumvented minimax lower bounds for more general convex optimization problems)? Here, we conjecture that recent results which make the gradient norm small at a near optimal rate, for both convex and non-convex optimization, may also provide more insights into learning rate schedules used in practice.
1 INTRODUCTION
The recent advances in machine learning and deep learning rely almost exclusively on stochastic optimization methods, primarily SGD and its variants. Here, these large scale stochastic optimization methods are manually (and often painstakingly) tuned to the problem at hand (often with parallelized hyper-parameter searches), where there is, as of yet, no class of “universal methods” which uniformly work well on a wide range of problems with little to no hyper-parameter tuning. This is in stark contrast to non-stochastic numerical optimization methods, where it is not an overstatement to argue that the l-BFGS and non-linear conjugate gradient methods (with no hyper-parameter tuning whatsoever) have provided nearly unbeatable procedures (for a number of decades) on nearly every unconstrained convex and non-convex problem. In the land of stochastic optimization, there are two dominant (and somewhat compatible) approaches: those methods which often manually tune learning rate schedules to achieve the best performance (Krizhevsky et al., 2012; Sutskever et al., 2013; Kingma & Ba, 2014; Kidambi et al., 2018) and those methods which rely on various forms of approximate preconditioning (Duchi et al., 2011; Tieleman & Hinton, 2012; Kingma & Ba, 2014). This work examines the former class of methods, where we seek a more refined understanding of the issues of learning rate scheduling, through both theoretical analysis and empirical studies.
Learning rate schedules for SGD is a rather enigmatic topic since there is a stark disparity between what is considered admissible in theory and what is employed in practice to achieve the best re-
sults. Let us elaborate on this distinction more clearly. In theory, a vast majority of works starting with Robbins & Monro (1951); Polyak & Juditsky (1992) consider learning rates that have the form of $\eta_t = \frac{a}{b + t^{\alpha}}$ for some $a, b \geq 0$ and $1/2 < \alpha \leq 1$ – we call these polynomial decay schemes. The key property enjoyed by these polynomial decay schemes is that they are not summable but are square summable. A number of works obtain bounds on the asymptotic convergence rates of such schemes. Note that the focus of these works is to design learning rate schemes that work well for all large values of t. In contrast, practitioners are interested in achieving the best performance given a computational budget or equivalently a fixed time horizon T e.g., 100 passes on training dataset with a batch size of 128.
The corresponding practically best performing learning rate scheme is often one where the step size is cut by a constant factor once every few epochs, or, equivalently, when no progress is made on a validation set (Krizhevsky et al., 2012; He et al., 2016b) (often called a dev set based decay scheme). Such schemes are widely popular to the extent that they are available as schemes in deep learning libraries such as PyTorch 1 and several such useful tools of the trade are taught on popular deep learning courses 2. Furthermore, what is (often) puzzling (from a theory perspective) is the emphasis that is laid on “babysitting” the learning rates 3 to achieve the best performance. Why do practitioners use constant and cut learning rate schemes while most of the theory work routinely works with polynomial decaying schemes? Of course, implicit to this question is the view that both of these schemes are not equivalent. Indeed if both of these were equivalent, one could parameterize the learning rate as $\frac{a}{b + t^{\alpha}}$ and do hyperparameter search over $a$, $b$ and $\alpha$. In practice, this simply does not give results comparable to the constant and cut schemes.4 One potential explanation for this could be that, in the context of neural network training, local minima found by constant and cut schemes are of much better quality than those found by polynomial decay schemes, while for convex problems, polynomial decay schemes are indeed optimal.
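To make the two families of schedules concrete, the snippet below contrasts a polynomial decay schedule $\eta_t = a/(b + t^{\alpha})$ with a "constant and cut" (step decay) schedule that divides the rate by a fixed factor every few epochs. The particular constants, cut interval, and decay factor are illustrative, not values prescribed by the paper.

```python
def polynomial_decay(t, a=0.1, b=1.0, alpha=0.75):
    """eta_t = a / (b + t^alpha), the classical stochastic-approximation schedule."""
    return a / (b + t ** alpha)

def constant_and_cut(t, eta0=0.1, cut_every=30, factor=10.0):
    """Hold the rate constant and divide it by `factor` every `cut_every` epochs."""
    return eta0 / (factor ** (t // cut_every))

for epoch in (0, 10, 30, 60, 90):
    print(epoch, round(polynomial_decay(epoch), 5), constant_and_cut(epoch))
```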
The primary contribution of this work is to show that this is simply not the case. We concretely show how minimax optimal theoretical learning rates (i.e. polynomial decay schemes for wide classes of convex optimization problems) may be misleading (and sub-optimal for locally quadratic problems), and the story in practice is more nuanced. Three important issues are at play with regard to this suboptimality. First, even for the simple case of stochastic linear regression, with a fixed time horizon, the rate achieved by any polynomial decay scheme (i.e., any choice of a, b and α) is suboptimal compared to the statistical minimax rate (i.e., information theoretically best possible rate achievable by any algorithm) by a factor of condition number κ (see Section 3 for definitions), while there exist constant and cut schemes that are suboptimal only by a factor of log κ.
Second, this work shows that a factor of κ suboptimality is unavoidable if we wish to bound the error of each iterate of SGD. In other words, we show that the convergence rate of lim sup of the error, as t→∞, has to be necessarily suboptimal by a factor of Ω̃(κ) compared to the statistical minimax rate, for any learning rate sequence (polynomial or not). In fact, at least an Ω̃(1/κ) fraction of the iterates have this suboptimality. With this result, things become quite clear – all the works in stochastic approximation try to bound the error of each iterate of SGD asymptotically (or lim sup of the error in other words). Since this necessarily has to be suboptimal by a factor of Ω̃(κ) compared to the statistical minimax rates, the suboptimality of polynomial decay rates is not an issue. However, with a fixed time horizon, there exist learning rate schemes with much better convergence rates, while polynomial decay schemes fail to get better rates in this simpler setting (of known time horizon).
Thirdly, the work shows that, for stochastic linear regression, if we consider lim inf (rather than lim sup) of the error, it is possible to design schemes that are suboptimal by only a factor of log κ compared to the minimax rates. Variants of the constant and cut schemes achieve this guarantee.
In summary, the contributions of this paper are to show how widely used practical learning rate schedules are, in fact, highly effective even in the convex case. In particular, our theory and empirical results demonstrate this by showing that:
1 https://pytorch.org/docs/stable/optim.html#torch.optim.lr_scheduler.ReduceLROnPlateau
2 http://cs231n.github.io/
3 http://cs231n.github.io/neural-networks-3/
4 In fact, this work shows an instance where there is a significant (provable) difference between the performance of these two schemes.
• For a fixed time horizon, constant and cut schemes are provably, significantly better than polynomial decay schemes.
• There is a fundamental difference between fixed time horizon and infinite time horizon.
• The above difference can be mitigated by considering lim inf of error instead of lim sup.
• In addition to our theoretical contributions, we empirically verify the above claims for neural network training on cifar-10.
Extending results on the performance of constant and cut schemes to more general convex optimization problems, beyond stochastic linear regression, is an important future direction. However, the fact that the suboptimality of polynomial decay schemes, even for the simple case of stochastic linear regression, has not been realized after decades of research on stochastic approximation is striking.
In summary, the results of this paper show that, even for stochastic linear regression, the constant and cut learning rate schedules popular in practice are provably better than the polynomial decay schemes popular in theory, and that there is a need to rethink learning rate schemes and convergence guarantees for stochastic approximation. Our results also suggest that current approaches to hyperparameter tuning of learning rate schedules might be misguided, and further suggest potential ways of improving them.
Paper organization: The paper is organized as follows. We review related work in Section 2. Section 3 describes the notation and problem setup. Section 4 presents our results on the suboptimality of both polynomial decay schemes and constant and cut schemes. Section 5 presents results on infinite horizon setting. Section 6 presents experimental results and Section 7 concludes the paper.
2 RELATED WORK
We will split related work into two parts, one based on theory and the other based on practice.
Related efforts in theory: SGD and the problem of stochastic approximation were introduced in the seminal work of Robbins & Monro (1951); this work also elaborates on stepsize schemes under which stochastic gradient methods converge asymptotically: we refer to these schemes as “convergent” stepsize sequences. The (asymptotic) statistical optimality of iterate averaged SGD with larger stepsize schemes of O(1/n^α) with α ∈ (0.5, 1) was proven in the seminal works of Ruppert (1988); Polyak & Juditsky (1992). The notions of convergent learning rate schemes in the stochastic approximation literature have been studied in great detail (Ljung et al., 1992; Kushner & Yin, 2003; Bharath & Borkar, 1999; Lai, 2003). Nearly all of the aforementioned works rely on function value sub-optimality to measure convergence and rely on the notion of asymptotic convergence (i.e. in the limit of the number of updates of SGD tending to infinity) to derive related “convergent stepsize schedules”. Along this line of thought, there are several efforts that prove (minimax) optimality of the aforementioned rates (in a worst case sense and not per problem sense), e.g., Nemirovsky & Yudin (1983); Raginsky & Rakhlin (2011); Agarwal et al. (2012).
An alternative viewpoint is to consider gradient norm as a means to measure the progress of an algorithm. Along this line of thought are several works including the stochastic process viewpoint considered by Polyak & Juditsky (1992) and more recently, the work of Nesterov (2012) (working with deterministic (exact) gradients). The work of Allen-Zhu (2018) considers questions relating to making the gradient norm small when working with stochastic gradients, and provides an improved rate. We return to this criterion in Section 7.
In terms of oracle models, note that both this paper, as well as other results (Lacoste-Julien et al., 2012; Rakhlin et al., 2012; Bubeck, 2014), work in an oracle model that assumes bounded variance of stochastic gradients or similar assumptions. There is an alternative oracle model for analyzing SGD as followed in papers including Bach & Moulines (2013); Bach (2014); Jain et al. (2017), which is arguably more reflective of SGD’s behavior in practice. For more details, refer to Jain et al. (2017). It is an important direction to prove the results of this paper working in the alternative, practically more applicable oracle model.
Efforts in practice: As highlighted in the introduction, practical efforts in stochastic optimization have diverged from the classical theory of stochastic approximation, with several deep learning
libraries like pytorch 5 providing unconventional alternatives such as cosine/sawtooth/dev set decay schemes, or even exponentially decaying learning rate schemes. In fact, a natural scheme used in training convolutional neural networks for vision is where the learning rate is cut by a constant factor after a certain number of epochs. Such schemes are essentially discretized variants of exponentially decaying learning rate schedules. We note that there are other learning rate schedules that have been recently proposed such as sgd with warm restarts (Loshchilov & Hutter, 2016), oscillating learning rates (Smith & Topin, 2017) etc., that are unconventional and have attracted a fair bit of attention. Furthermore, exponential learning rates appear to be considered in more recent NLP papers (see for e.g., Krishnamurthy et al. (2017)) 6.
3 PROBLEM SETUP
Notation: We represent scalars with normal font a, b, L etc., vectors with boldface lowercase characters a, b etc. and matrices with boldface uppercase characters A, B etc. We represent the positive semidefinite (PSD) ordering between two matrices using ⪰. The symbol ≳ denotes that the corresponding inequality holds up to a universal constant.
Our theoretical results focus on the following additive noise stochastic linear regression problem. We present the setup and associated notation in this section. We wish to solve:
\min_{w \in \mathbb{R}^d} f(w) \quad \text{where} \quad f(w) := \frac{1}{2} w^\top H w - w^\top b
for some positive definite matrix H and vector b.7 We denote the smallest and largest eigenvalues of H by µ > 0 and L > 0. κ := L/µ denotes the condition number of H. We have access to a stochastic gradient oracle which gives us ∇̂f(w) = ∇f(w) + e, where e is a random vector satisfying8 E[e] = 0 and E[ee^⊤] = σ²H.
Given an initial point w0 and step size sequence ηt, the SGD algorithm proceeds with the update
wt = wt−1 − ηt∇̂f(wt−1) = wt−1 − ηt (∇f(wt−1) + et) , where et are independent for various t and satisfy the above mean and variance conditions.
Let w^* := \arg\min_{w \in \mathbb{R}^d} f(w). The suboptimality of a point w is given by f(w) − f(w^*). It is well known that given t accesses to the stochastic gradient oracle above, any algorithm that uses these stochastic gradients and outputs ŵ_t has suboptimality that is lower bounded by σ²d/t. More concretely (Van der Vaart, 2000), we have that
\lim_{t \to \infty} \frac{\mathbb{E}[f(\hat{w}_t)] - f(w^*)}{\sigma^2 d / t} \geq 1.
Moreover, there exist schemes that achieve this rate of (1 + o(1))·σ²d/t, e.g., constant step size SGD with averaging (Polyak & Juditsky, 1992). This rate of σ²d/t is called the statistical minimax rate.
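To make the oracle model concrete, the following is a minimal NumPy sketch of this setup — our own illustration, not code from the paper. It takes b = 0 (so w* = 0), draws gradient noise with covariance σ²H, and runs SGD with an arbitrary step-size sequence; all names and constants are assumptions made for the example.

```python
import numpy as np

def stochastic_grad(H, sigma, w, rng):
    # gradient of f(w) = 0.5 * w^T H w (with b = 0, so w* = 0) plus noise e
    # with E[e] = 0 and E[e e^T] = sigma^2 * H, as in the oracle model above
    e = rng.multivariate_normal(np.zeros(len(w)), sigma ** 2 * H)
    return H @ w + e

def run_sgd(H, sigma, w0, stepsizes, rng):
    w = w0.copy()
    for eta in stepsizes:
        w = w - eta * stochastic_grad(H, sigma, w, rng)
    return w

def excess_risk(H, w):
    return 0.5 * w @ H @ w          # f(w) - f(w*) since w* = 0

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    kappa, d, sigma = 50.0, 4, 1.0
    H = np.diag([kappa] * (d // 2) + [1.0] * (d // 2))
    T = 5000
    etas = [1.0 / (kappa + t) for t in range(T)]        # one polynomial decay scheme
    wT = run_sgd(H, sigma, np.ones(d), etas, rng)
    print("excess risk:", excess_risk(H, wT))
    print("statistical minimax level sigma^2 d / T:", sigma ** 2 * d / T)
```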
4 COMPARISON BETWEEN POLYNOMIAL DECAY SCHEMES VS CONSTANT AND CUT SCHEMES
In this section, we will show that polynomial decay schemes are suboptimal compared to the statistical minimax rate by at least a factor of κ while constant and cut schemes are suboptimal by at most a factor of log κ.
5 See https://pytorch.org/docs/stable/optim.html for a complete list of alternatives.
6 Refer to their JSON file https://github.com/allenai/allennlp/blob/master/training_config/wikitables_parser.jsonnet
7 Any linear least squares objective \frac{1}{2n}\sum_{i=1}^n (x_i^\top w - y_i)^2 can be written as above with H := \frac{1}{n}\sum_i x_i x_i^\top and b := \frac{1}{n}\sum_i y_i x_i.
8 While this might seem very special, this is indeed a fairly natural scenario. For instance, in stochastic linear regression with independent additive noise, i.e., y_t = x_t^\top w^* + \epsilon_t where \epsilon_t is a random variable independent of x_t with E[\epsilon_t] = 0 and E[\epsilon_t^2] = \sigma^2, the noise in the gradient has this property. On the other hand, the results in this paper can also be generalized to the setting where E[ee^\top] = V for some arbitrary matrix V. However, an error covariance of \sigma^2 H significantly simplifies exposition.
4.1 SUBOPTIMALITY OF POLYNOMIAL DECAY SCHEMES
Our first result shows that there exist problem instances where all polynomial decay schemes, i.e., those of the form a/(b+t^α), for any choice of a, b and α, are suboptimal by at least a factor of Ω(κ) compared to the statistical minimax rate.
Theorem 1. There exists a problem instance such that the initial function value f(w_0) ≤ σ²d, and for any fixed time T satisfying T ≥ κ², for all a, b ≥ 0 and 0.5 ≤ α ≤ 1, and for the learning rate scheme η_t = a/(b+t^α), we have E[f(w_T)] − f(w^*) ≥ \frac{\kappa}{32} \cdot \frac{\sigma^2 d}{T}.
4.2 SUBOPTIMALITY OF CONSTANT AND CUT SCHEME
Our next result shows that there exist constant and cut schemes that achieve the statistical minimax rate up to a multiplicative factor of only log κ · log² T.
Theorem 2. For any problem and fixed time horizon T > κ log(κ), there exists a constant and cut learning rate scheme that achieves E[f(w_T)] − f(w^*) ≤ \frac{f(w_0) - f(w^*)}{T^3} + 2\log\kappa \cdot \log^2 T \cdot \frac{\sigma^2 d}{T}.
We will now consider an exponential decay scheme (in contrast to the polynomial ones from Section 4.1), which is a smoother version of the constant and cut scheme. We show that the result above for the constant and cut scheme can also be extended to the exponential decay scheme.
Theorem 3. For any problem and fixed horizon T, there exist constants a and b such that the learning rate scheme η_t = b·exp(−at) achieves E[f(w_T)] − f(w^*) ≤ \frac{f(w_0) - f(w^*)}{T^{2-(1/100)}} + \log\kappa \cdot \log T \cdot \frac{\sigma^2 d}{T}.
The above results show that constant and cut as well as exponential decay schemes, which depend on the time horizon, are much better than polynomial decay schemes. Between these, exponential decay schemes are smoother versions of constant and cut schemes, and so one would hope that they might have better performance than constant and cut schemes – we do see a log T difference in our bounds. One unsatisfying aspect of the above results is that the rate behaves as log T/T, which is asymptotically worse than the statistical rate of 1/T. It turns out that it is indeed possible to improve the rate to 1/T using a more sophisticated scheme. The main idea is to use constant and polynomial schemes in the beginning and then switch to a constant and cut (or exponential decay) scheme later. To the best of our knowledge, these kinds of schemes have never been considered in the stochastic optimization literature before. Using this learning rate sequence successively for increasing time horizons would lead to oscillating learning rates. We leave a complete analysis of oscillating learning rates (for moving time horizon) to future work.
Theorem 4. Fix κ ≥ 2. For any problem and fixed time horizon T/log T > 5κ, there exists a learning rate scheme that achieves E[f(w_T)] − f(w^*) ≤ \frac{f(w_0) - f(w^*)}{T^3} + 50\log_2\kappa \cdot \frac{\sigma^2 d}{T}.
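For concreteness, the sketch below writes the three families of schedules compared in this section — polynomial decay, a constant and cut scheme with roughly log κ phases over a fixed horizon T (mirroring the construction used in the proof of Theorem 2), and exponential decay — as plain Python step-size generators. The constants are schematic stand-ins for those in the theorem statements, not the exact proof constants.

```python
import math

def polynomial_decay(a, b, alpha, T):
    # eta_t = a / (b + t^alpha), t = 1, ..., T
    return [a / (b + t ** alpha) for t in range(1, T + 1)]

def constant_and_cut(mu, kappa, T):
    # log(kappa) equal phases; in phase l the step is log(T)*log(kappa)/(2^l * mu * T),
    # a schematic version of the scheme used in the proof of Theorem 2
    n_phases = max(1, int(math.ceil(math.log2(kappa))))
    phase_len = max(1, T // n_phases)
    etas = []
    for l in range(1, n_phases + 1):
        etas.extend([math.log(T) * math.log(kappa) / (2 ** l * mu * T)] * phase_len)
    etas = etas[:T]
    return etas + [etas[-1]] * (T - len(etas))

def exponential_decay(eta0, a, T):
    # eta_t = eta0 * exp(-a*t), as in Theorem 3
    return [eta0 * math.exp(-a * t) for t in range(1, T + 1)]

if __name__ == "__main__":
    T, mu, kappa = 10000, 1.0, 100.0
    print(polynomial_decay(1.0, kappa, 1.0, T)[:3])
    print(constant_and_cut(mu, kappa, T)[:3], constant_and_cut(mu, kappa, T)[-1])
    print(exponential_decay(0.01, math.log(kappa) / T, T)[:3])
```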
5 INFINITE HORIZON SETTING
In this section we show a fundamental limitation of the SGD algorithm. First we will prove that the SGD algorithm, for any learning rate sequence, needs to query a point with suboptimality more than Ω(κ/log κ)·σ²d/T for infinitely many time steps T.
Theorem 5. There exists a universal constant C > 0 such that for any SGD algorithm with η_t ≤ 1/(2κ) for all t9, we have \limsup_{T\to\infty} \frac{E[f(w_T)] - f(w^*)}{\sigma^2 d/T} \geq \frac{\kappa}{C\log(\kappa+1)}.
Next we will show that, in some sense, the “fraction” of query points that have value more than τσ²d/T is at least Ω(1/τ) when τ is smaller than the threshold in Theorem 5.
Theorem 6. There exist universal constants C_1, C_2 > 0 such that for any τ ≤ \frac{\kappa}{C\,C_1\log(\kappa+1)}, where C is the constant in Theorem 5, for any SGD algorithm and any number of iterations T > 0, there exists a T' ≥ T such that for any T̃ ∈ [T', (1 + \frac{1}{C_2\tau})T'] we have \frac{E[f(w_{T̃})] - f(w^*)}{\sigma^2 d/T̃} \geq \tau.
Finally, we now show that there are constant and cut or exponentially decaying schemes that achieve the statistical minimax rate up to a factor of log κ · log² T in the lim inf sense.
9 A learning rate larger than 2/κ will make the algorithm diverge.
Theorem 7. There exists an absolute constant C and a constant and cut learning rate scheme that obtains \liminf_{T\to\infty} \frac{E[f(w_T)] - f(w^*)}{\sigma^2 d\,\log^2 T / T} \leq C\log\kappa.
Similar results can be obtained for the exponential decay scheme of Theorems 3 and 4 with moving time horizon. However the resultant learning rates might have oscillatory behavior. This might partly explain the benefits of oscillating learning rates observed in practice (Smith & Topin, 2017).
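One way to visualize the infinite-horizon phenomenon is to track the normalized error E[f(w_t)]·t/(σ²d) along a single step-size sequence. Since H is diagonal in our instances, this quantity can be computed exactly from the per-coordinate variance recursion used in the appendix; the sketch below is our own illustration, with arbitrary constants.

```python
def normalized_errors(eigs, v0, sigma, etas):
    # exact E[f(w_t)] - f(w*) via the per-coordinate recursion
    # v <- (1 - eta*lam)^2 * v + lam * sigma^2 * eta^2   (diagonal H),
    # reported as (error * t) / (sigma^2 * d), i.e. relative to the minimax level
    v, d, out = list(v0), len(eigs), []
    for t, eta in enumerate(etas, start=1):
        v = [(1 - eta * lam) ** 2 * vi + lam * sigma ** 2 * eta ** 2
             for lam, vi in zip(eigs, v)]
        err = 0.5 * sum(lam * vi for lam, vi in zip(eigs, v))
        out.append(err * t / (sigma ** 2 * d))
    return out

if __name__ == "__main__":
    kappa, sigma, T = 100.0, 1.0, 20000
    eigs = [kappa, 1.0]
    v0 = [sigma ** 2 / kappa, sigma ** 2]
    etas = [1.0 / (kappa + t / 2.0) for t in range(T)]   # a polynomial decay sequence
    trace = normalized_errors(eigs, v0, sigma, etas)
    print("normalized error at T/2 and at T:", trace[T // 2 - 1], trace[-1])
```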
6 EXPERIMENTAL RESULTS
We present experimental validation of our claims through controlled synthetic experiments on a two-dimensional quadratic objective and on a real world non-convex optimization problem of training a residual network on the cifar-10 dataset, to illustrate the shortcomings of the traditional stochastic approximation perspective (and the advantages of non-convergent exponentially decaying and oscillating learning rate schemes) for a realistic problem encountered in practice. Complete details of the experimental setup are given in Appendix D.
6.1 SYNTHETIC EXPERIMENTS: TWO-DIMENSIONAL QUADRATIC OBJECTIVE
We consider the problem of optimizing a two-dimensional quadratic objective, similar in spirit as what is considered in the theoretical results of this paper. In particular, for a two-dimensional quadratic, we have two eigenvalues, one of magnitude κ and the other being 1. We vary our condition number κ ∈ {50, 100, 200} and use a total of 200κ iterations for optimization. The results expressed in this section are obtained by averaging over two random seeds. The learning rate schemes we search over are:
\eta_t = \frac{\eta_0}{1 + b \cdot t} \quad (1) \qquad \eta_t = \frac{\eta_0}{1 + b\sqrt{t}} \quad (2) \qquad \eta_t = \eta_0 \cdot \exp(-b \cdot t). \quad (3)
For the schemes detailed above, there are two parameters that need to be searched over: (i) the starting learning rate η0 and, (ii) the decay factor b. We perform a grid search over both these parameters and choose ones that yield the best possible final error at a given end time (i.e. 200κ). We also make sure to extend the grid should a best performing grid search parameter fall at the edge of the grid so that all presented results lie in the interior of our final grid searched parameters.
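A hedged sketch of this grid search is given below. Instead of averaging sampled runs, it evaluates the final excess risk exactly via the per-coordinate variance recursion for the two-eigenvalue quadratic; the grid endpoints loosely follow Appendix D.1 and the helper names are ours, not the paper's.

```python
import math

def final_error(etas, kappa, sigma=1.0):
    # exact excess risk after running the eta sequence on diag(kappa, 1)
    eigs = [kappa, 1.0]
    v = [sigma ** 2 / kappa, sigma ** 2]
    for eta in etas:
        v = [(1 - eta * lam) ** 2 * vi + lam * sigma ** 2 * eta ** 2
             for lam, vi in zip(eigs, v)]
    return 0.5 * sum(lam * vi for lam, vi in zip(eigs, v))

def log_grid(lo, hi, n=10):
    return [lo * (hi / lo) ** (i / (n - 1)) for i in range(n)]

if __name__ == "__main__":
    kappa = 50.0
    T = int(200 * kappa)
    best = None
    for eta0 in log_grid(1 / (20 * kappa), 1000 / kappa):
        for b in log_grid(1 / (100 * kappa), 3000 / kappa):
            etas = [eta0 / (1 + b * t) for t in range(T)]      # scheme (1)
            err = final_error(etas, kappa)
            if best is None or err < best[0]:
                best = (err, eta0, b)
    print("best O(1/t) scheme (final error, eta0, decay):", best)
```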
We will present results for the following experiments: (i) behavior of the error of the final iterate of the SGD method with the three learning rate schemes (1),(2), and (3) as we vary the condition number, and (ii) how the exponentially decaying learning rate scheme (3) optimized for a shorter time horizon behaves for a longer horizon.
For the variation of the final iterate’s excess risk when considered with respect to the condition number (Figure 1), we note that polynomially decaying schemes have excess risk that scales linearly with condition number, corroborating Theorem 1. In contrast, exponentially decaying learning rate scheme admits excess risk that nearly appears to be a constant and corroborates Theorem 3. Finally, we note that the learning rate schedule that offers the best possible error in 50κ or 100κ steps does not offer the best error at 200κ steps (Table 1).
6.2 NON-CONVEX OPTIMIZATION: TRAINING A RESIDUAL NET ON CIFAR-10
We consider here the task of training a 44-layer deep residual network (He et al., 2016b) with pre-activation blocks (He et al., 2016a) (dubbed preresnet-44) for classifying images in the cifar-10 dataset. The code for implementing the network employed in this paper can be found here10. For all the experiments, we use Nesterov’s accelerated gradient method (Nesterov, 1983) implemented in pytorch 11 with momentum set to 0.9, batchsize set to 128, total number of training epochs set to 100, and ℓ2 regularization set to 0.0005.
Our experiments are based on grid searching for the best learning rate decay scheme over the three parametric families of learning rate schemes described above in (1), (2), and (3); all gridsearches are performed on a separate validation set (obtained by setting aside one-tenth of the training dataset = 5000 images) and with models trained on the remaining 45000 images. For presenting the final numbers in the plots/tables, we employ the best hyperparameters from the validation stage, train on the entire 50,000 images, and average results run with 10 different random seeds. The parameters for gridsearches and related details are presented in Appendix D. Furthermore, just as with the synthetic experiments, we always extend the grid so that the best performing grid search parameter lies in the interior of our grid search.
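The following is a minimal PyTorch sketch of this training setup. The model is a stand-in (the actual experiments use the preresnet-44 implementation linked in the footnote), and the scheduler lines show standard PyTorch analogues of schemes (1)–(3); the decay constants are placeholders, not the grid-searched values.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))  # stand-in for preresnet-44
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9,
                            weight_decay=5e-4, nesterov=True)

# Scheme (3): eta_t = eta_0 * exp(-b*t)  <=>  multiply by gamma = exp(-b) each epoch.
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.95)
# Constant and cut alternative: cut by 10x every 30 epochs.
# scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=30, gamma=0.1)
# Polynomial decay, scheme (1): eta_t = eta_0 / (1 + b*t).
# scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=lambda t: 1.0 / (1.0 + 0.05 * t))

for epoch in range(100):
    # ... one pass over the cifar-10 training set would go here ...
    scheduler.step()
```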
Comparison between different schemes: Figure 2 and Table 2 present a comparison of the performance of the three schemes (1)-(3). They clearly demonstrate that the best exponential scheme outperforms the best polynomial schemes.
Hyperparameter selection using truncated runs: Figure 3 and Tables 3 and 4 present a comparison of the performance of three exponential decay schemes each of which has the best performance at 33, 66 and 100 epochs respectively. The key point to note is that best performing hyperparameters at 33 and 66 epochs are not the best performing at 100 epochs (which is made stark from the perspective of the validation error). This demonstrates that selecting hyper parameters using truncated runs, which has been proposed in some recent efforts such as hyperband (Li et al., 2017), might necessitate rethinking.
10 https://github.com/D-X-Y/ResNeXt-DenseNet
11 https://github.com/pytorch
7 CONCLUSIONS AND DISCUSSION
The main contribution of this work shows that the picture of learning rate scheduling is far more nuanced than suggested by prior theoretical results, where we do not even need to move to nonconvex optimization to show other learning rate schemes can be far more effective than the standard polynomially decaying rates considered in theory.
Is quadratic loss minimization special? One may ask if there is something particularly special about why the minimax rates are different for quadratic loss minimization as opposed to more general convex (and non-convex) optimization problems? Ideally, we would hope that our theoretical insights (and improvements) can be formally established in more general cases. Here, an alternative viewpoint is to consider gradient norm as a means to measure the progress of an algorithm. The recent work of Allen-Zhu (2018) shows marked improvements for making the gradient norm small (when working with stochastic gradients) for both convex and non-convex, in comparison to prior results. In particular, for the strongly convex case, Allen-Zhu (2018) provides results which have only a logarithmic dependency on κ, an exponential improvement over what is implied by standard analyses for the gradient norm (Lacoste-Julien et al., 2012; Rakhlin et al., 2012; Bubeck, 2014); Allen-Zhu (2018) also provides improvements for the smooth and non-convex cases. Thus, for the case of making the gradient norm small, there does not appear to be a notable discrepancy between the minimax rate of quadratic loss minimization in comparison to more general strongly convex (or smooth) convex optimization problems. Interestingly, the algorithm of Allen-Zhu (2018) provides a recursive regularization procedure that obtains an SGD procedure, where the doubling regularization can be viewed as being analogous to an exponentially decaying learning rate schedule. Further work in this direction may be promising in providing improved algorithms.
A PROOFS OF RESULTS IN SECTION 4.1
Proof of Theorem 1. The problem instance is simple. Let H be the diagonal matrix whose first d/2 diagonal entries are equal to κ and whose remaining d/2 diagonal entries are equal to 1 (all off-diagonal entries are zero). Let us denote by
v_t^{(i)} := E[(w_t^{(i)} - (w^*)^{(i)})^2]
the variance in the i-th coordinate at time step t. Let the initialization be such that v_0^{(i)} = σ²/κ for i = 1, 2, ..., d/2 and v_0^{(i)} = σ² for i = d/2+1, ..., d. This means that the variances for all directions with eigenvalue κ remain equal as t progresses, and similarly for all directions with eigenvalue 1. We have
v_T^{(1)} := E[(w_T^{(1)} - (w^*)^{(1)})^2] = \prod_{j=1}^{T}(1 - \eta_j\kappa)^2 v_0^{(1)} + \kappa\sigma^2\sum_{j=1}^{T}\eta_j^2\prod_{i=j+1}^{T}(1 - \eta_i\kappa)^2, and
v_T^{(d)} := E[(w_T^{(d)} - (w^*)^{(d)})^2] = \prod_{j=1}^{T}(1 - \eta_j)^2 v_0^{(d)} + \sigma^2\sum_{j=1}^{T}\eta_j^2\prod_{i=j+1}^{T}(1 - \eta_i)^2.
We consider a recursion for v(i)t with eigenvalue λi (1 or κ). By the design of the algorithm, we know
v_{t+1}^{(i)} = (1 - \eta_t\lambda_i)^2 v_t^{(i)} + \lambda_i\sigma^2\eta_t^2.
Let s(\eta, \lambda) = \frac{\lambda\sigma^2\eta^2}{1 - (1 - \eta\lambda)^2} be the solution to the stationary point equation x = (1 - \eta\lambda)^2 x + \lambda\sigma^2\eta^2. Intuitively, if we keep using the same learning rate η, then v_t^{(i)} is going to converge to s(η, λ_i). Also note that s(η, λ) ≈ σ²η/2 when ηλ ≪ 1. We first prove the following claim, showing that eventually the variance in direction i is going to be at least s(η_T, λ_i).
Claim 1. Suppose s(η_t, λ_i) ≤ v_0^{(i)}. Then v_t^{(i)} ≥ s(η_t, λ_i).
Proof. We can rewrite the recursion as
v_{t+1}^{(i)} - s(\eta_t, \lambda_i) = (1 - \eta_t\lambda_i)^2\,(v_t^{(i)} - s(\eta_t, \lambda_i)).
In this form, it is easy to see that the iteration is a contraction towards s(η_t, λ_i). Further, v_{t+1}^{(i)} − s(η_t, λ_i) and v_t^{(i)} − s(η_t, λ_i) have the same sign. In particular, let t_0 be the first time such that s(η_t, λ_i) ≤ v_0^{(i)} (note that η_t is monotone and so is s(η_t, λ_i)); it is easy to see that v_t^{(i)} ≥ v_0^{(i)} when t ≤ t_0. Therefore we know v_{t_0}^{(i)} ≥ s(η_{t_0}, λ_i), and by the recursion this implies v_{t_0+1}^{(i)} ≥ s(η_{t_0}, λ_i) ≥ s(η_{t_0+1}, λ_i). The claim then follows from a simple induction.
If s(η_T, λ_i) ≥ v_0^{(i)} for i = 1 or i = d, then the error is at least σ²d/2 ≥ κσ²d/T and we are done. Therefore we must have s(η_T, κ) ≤ v_0^{(1)} = σ²/κ, and by Claim 1 we know v_T^{(1)} ≥ s(η_T, κ) ≥ σ²η_T/2. The function value is at least
E[f(w_T)] \geq \frac{d}{2}\cdot\kappa v_T^{(1)} \geq \frac{d\kappa\sigma^2\eta_T}{4}.
To allow E[f(w_T)] ≤ \frac{\kappa\sigma^2 d}{32T}, we must therefore have η_T ≤ \frac{1}{8T}. Next we will show that when this happens, v_T^{(d)} must be large, so the function value is still large.
We will consider two cases. In the first case, b ≥ T^α. Since \frac{1}{8T} \geq \eta_T = \frac{a}{b+T^\alpha} \geq \frac{a}{2b}, we have \frac{a}{b} \leq \frac{1}{4T}. Therefore v_T^{(d)} \geq (1 - \frac{a}{b})^{2T} v_0^{(d)} \geq \sigma^2/2, so the function value is at least E[f(w_T)] \geq \frac{d}{2}v_T^{(d)} \geq \frac{d\sigma^2}{4} \geq \frac{\kappa d\sigma^2}{T}, and we are done.
In the second case, b < T^α. Since \frac{1}{8T} \geq \eta_T = \frac{a}{b+T^\alpha} \geq \frac{a}{2T^\alpha}, we have a ≤ 0.25\,T^{\alpha-1}. The sum of the learning rates satisfies
\sum_{i=1}^{T}\eta_i \leq \sum_{i=1}^{T}\frac{a}{i^\alpha} \leq \sum_{i=1}^{T} 0.25\,i^{-1} \approx 0.25\log T.
Here the second inequality uses the fact that T^{\alpha-1} i^{-\alpha} \leq i^{-1} when i ≤ T. Similarly, we also know \sum_{i=1}^{T}\eta_i^2 \leq \sum_{i=1}^{T} 0.25\,i^{-2} \leq \pi^2/24. Using the approximation (1 - \eta)^2 \geq \exp(-2\eta - 4\eta^2) for η < 1/4, we get v_T^{(d)} \geq \exp(-2\sum_{i=1}^{T}\eta_i - 4\sum_{i=1}^{T}\eta_i^2)\,v_0^{(d)} \geq \sigma^2/(5\sqrt{T}), so the function value is at least E[f(w_T)] \geq \frac{d}{2}v_T^{(d)} \geq \frac{d\sigma^2}{20\sqrt{T}} \geq \frac{\kappa d\sigma^2}{32T}. This concludes the second case and proves the theorem.
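A quick numeric sanity check of the stationary value s(η, λ) used in the proof above: iterating the variance recursion with a fixed step size converges to s(η, λ) = λσ²η²/(1 − (1 − ηλ)²), which is about σ²η/2 when ηλ ≪ 1. The snippet below is our own illustration with arbitrary constants.

```python
def stationary(eta, lam, sigma):
    # fixed point of v <- (1 - eta*lam)^2 * v + lam*sigma^2*eta^2
    return lam * sigma ** 2 * eta ** 2 / (1.0 - (1.0 - eta * lam) ** 2)

def iterate_variance(v0, eta, lam, sigma, steps):
    v = v0
    for _ in range(steps):
        v = (1.0 - eta * lam) ** 2 * v + lam * sigma ** 2 * eta ** 2
    return v

if __name__ == "__main__":
    eta, lam, sigma = 1e-3, 1.0, 1.0
    print(stationary(eta, lam, sigma), sigma ** 2 * eta / 2)   # close when eta*lam is small
    print(iterate_variance(1.0, eta, lam, sigma, 20000))        # approaches s(eta, lam)
```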
B PROOFS OF RESULTS IN SECTION 4.2
Proof of Theorem 2. The learning rate scheme is as follows. Divide the total time horizon into log(κ) equal sized phases. In the ℓ-th phase, the learning rate to be used is \frac{\log T \cdot \log\kappa}{2^{\ell}\cdot\mu\cdot T}. Note that the learning rate in the first phase depends on strong convexity and that in the last phase depends on smoothness (since the last phase has ℓ = log κ). Recall the variance in the k-th coordinate can be upper bounded by
v_T^{(k)} := E[(w_T^{(k)} - (w^*)^{(k)})^2] \leq \prod_{j=1}^{T}(1 - \eta_j\lambda^{(k)})^2 v_0^{(k)} + \lambda^{(k)}\sigma^2\sum_{j=1}^{T}\eta_j^2\prod_{i=j+1}^{T}(1 - \eta_i\lambda^{(k)})^2
\leq \exp\Big(-2\sum_{j=1}^{T}\eta_j\lambda^{(k)}\Big) v_0^{(k)} + \lambda^{(k)}\sigma^2\sum_{j=1}^{T}\eta_j^2\exp\Big(-2\sum_{i=j+1}^{T}\eta_i\lambda^{(k)}\Big).
We will show that for every k, we have
v_T^{(k)} \leq \frac{v_0^{(k)}}{T^3} + \frac{2\log\kappa\cdot\log^2 T}{\lambda^{(k)} T}\cdot\sigma^2,
which directly implies the theorem. Now choose any k. Let ℓ^* denote the number satisfying 2^{\ell^*}\cdot\mu \leq \lambda^{(k)} < 2^{\ell^*+1}\cdot\mu. Note that ℓ^* depends on k, but we suppress the dependence for notational simplicity.
v_T^{(k)} \leq \exp\Big(-2\sum_{j=1}^{T}\eta_j\lambda^{(k)}\Big) v_0^{(k)} + \lambda^{(k)}\sigma^2\sum_{j=1}^{T}\eta_j^2\exp\Big(-2\sum_{i=j+1}^{T}\eta_i\lambda^{(k)}\Big)
\leq \exp\Big(-2\,\frac{\log T\log\kappa}{T}\cdot\lambda^{(k)}\cdot\frac{T}{\log\kappa}\cdot\frac{1}{2^{\ell^*}\mu}\Big)\Big(v_0^{(k)} + \lambda^{(k)}\sigma^2\cdot\frac{T}{\log\kappa}\cdot\sum_{\ell=1}^{\ell^*-1}\eta_\ell^2\Big)
\quad + \lambda^{(k)}\sigma^2\Big(\eta_{\ell^*}^2\cdot\frac{1}{1 - \exp(-\eta_{\ell^*}\lambda^{(k)})} + \sum_{\ell=\ell^*+1}^{\log\kappa}\frac{T}{\log\kappa}\cdot\Big(\frac{\log\kappa\,\log T}{2^{\ell}\mu T}\Big)^2\Big)
\leq \frac{v_0^{(k)}}{T^3} + \kappa\sigma^2\cdot\frac{\log^2\kappa\,\log^2 T}{\mu T^3} + \eta_{\ell^*}\sigma^2 + \sigma^2\cdot\frac{\log\kappa\,\log^2 T}{T}\sum_{\ell=\ell^*+1}^{\log\kappa}\frac{1}{2^{\ell}\mu}
\leq \frac{v_0^{(k)}}{T^3} + \sigma^2\cdot\frac{\log\kappa\,\log^2 T}{T}\Big(\frac{1}{\lambda^{(k)}} + \frac{1}{2^{\ell^*}\mu}\Big)
\leq \frac{v_0^{(k)}}{T^3} + \frac{2\log\kappa\cdot\log^2 T}{\lambda^{(k)} T}\cdot\sigma^2.
This finishes the proof.
Proof of Theorem 3. The learning rate scheme we consider is γ_t = γ_0·c^{t-1} with γ_0 = \log T/(\mu T_e), T_e = T/\log\kappa and c = 1 - 1/T_e. Further, just as in previous lemmas, we consider a specific eigendirection λ^{(k)} and write out the progress made along this direction by some iteration T̂ ≤ T:
err_{T̂}^{(k)} = \prod_{t=1}^{T̂}(1 - \gamma_t\lambda^{(k)})^2\,err_0^{(k)} + (\lambda^{(k)})^2\sigma^2\sum_{\tau=1}^{T̂}\gamma_\tau^2\prod_{t=\tau+1}^{T̂}(1 - \gamma_t\lambda^{(k)})^2
\leq \exp\Big(-2\lambda^{(k)}\gamma_0\sum_{t=1}^{T̂}c^{t-1}\Big)\,err_0^{(k)} + (\lambda^{(k)})^2\sigma^2\frac{\gamma_0^2}{c^2}\sum_{\tau=1}^{T̂}c^{2\tau}\exp\Big(-2\lambda^{(k)}\gamma_0\sum_{t=\tau+1}^{T̂}c^{t-1}\Big)
= \exp\Big(-\frac{2\lambda^{(k)}\gamma_0}{1-c}(1 - c^{T̂})\Big)\,err_0^{(k)} + (\lambda^{(k)})^2\sigma^2\frac{\gamma_0^2}{c^2}\sum_{\tau=1}^{T̂}c^{2\tau}\exp\Big(-\frac{2\lambda^{(k)}\gamma_0}{1-c}(c^{\tau} - c^{T̂})\Big)
= \exp\Big(\frac{2\lambda^{(k)}\gamma_0}{1-c}c^{T̂}\Big)\cdot\Big[\exp\Big(-\frac{2\lambda^{(k)}\gamma_0}{1-c}\Big)\,err_0^{(k)} + (\lambda^{(k)})^2\sigma^2\frac{\gamma_0^2}{c^2}\sum_{\tau=1}^{T̂}c^{2\tau}\exp\Big(-\frac{2\lambda^{(k)}\gamma_0}{1-c}c^{\tau}\Big)\Big].
Represent c^τ = x and 2\lambda^{(k)}\gamma_0/(1-c) = \alpha. Substituting the values of these quantities, we have \alpha = 2\log T\cdot\lambda^{(k)}/\mu \geq 2. Now, the second term is upper bounded using the corresponding integral, which is the following:
\sum_{x \in \{c,\,c^2,\,\dots,\,c^{T̂}\}} x^2\exp(-\alpha x) \leq \int_{c^{T̂}}^{1} x^2\exp(-\alpha x)\,dx \leq \frac{1}{\alpha}\Big(1 + \frac{2}{\alpha} + \frac{2}{\alpha^2}\Big)\exp(-\alpha).
Substituting this in the previous bound, we have:
err_{T̂}^{(k)} \leq \exp(-\alpha(1 - c^{T̂}))\cdot err_0^{(k)} + \frac{(\lambda^{(k)})^2\sigma^2\gamma_0^2}{\alpha c^2}\Big(1 + \frac{\sqrt{2}}{\alpha}\Big)^2
\leq \exp(-\alpha(1 - c^{T̂}))\cdot\Big(err_0^{(k)} + 16\,\frac{(\lambda^{(k)})^2\sigma^2\gamma_0^2}{\alpha}\Big)
\leq \exp(-\alpha(1 - c^{T̂}))\cdot\Big(err_0^{(k)} + 16\,\frac{(\lambda^{(k)})^2\sigma^2\log T}{\mu^2 T_e^2\cdot 2(\lambda^{(k)}/\mu)}\Big)
= \exp(-\alpha(1 - c^{T̂}))\cdot\Big(err_0^{(k)} + 8\,\frac{\lambda^{(k)}\sigma^2\log T}{\mu T_e^2}\Big).
Now, setting T̂ = T_e\log(C\lambda^{(k)}/\mu) and using 1 - a \leq \exp(-a), with C > 1 being some (large) universal constant, we have:
err_{T̂}^{(k)} \leq \exp\Big(-2(\lambda^{(k)}/\mu)\log T\cdot\Big(1 - \frac{\mu}{C\lambda^{(k)}}\Big)\Big)\cdot\Big(err_0^{(k)} + 8\,\frac{\lambda^{(k)}\sigma^2\log T}{\mu T_e^2}\Big) \leq \frac{1}{T^{2-(1/C)}}\cdot\Big(err_0^{(k)} + 8\,\frac{\lambda^{(k)}\sigma^2\log T}{\mu T_e^2}\Big). \quad (4)
Now, in order to argue the progress of the algorithm from T̂ + 1 to T , we can literally view the algorithm as starting with the iterate obtained from running for the first T̂ steps (thus satisfying the excess risk guarantee in equation 4) and then adding in variance by running for the duration of time between T̂ + 1 to T . For this part, we basically upper bound this behavior by first assuming that there is no contraction of the bias and then consider the variance introduced by running the algorithm from T̂ + 1 to T . This can be written as:
err_T^{(k)} \leq \prod_{t=T̂+1}^{T}(1 - \gamma_t\lambda^{(k)})^2\,err_{T̂}^{(k)} + (\lambda^{(k)})^2\sigma^2\sum_{\tau=1}^{T-T̂}\gamma_{\tau+T̂}^2\prod_{t=\tau+1}^{T-T̂}(1 - \gamma_{t+T̂}\lambda^{(k)})^2
\leq err_{T̂}^{(k)} + (\lambda^{(k)})^2\sigma^2\sum_{\tau=1}^{T-T̂}\gamma_{\tau+T̂}^2\prod_{t=\tau+1}^{T-T̂}(1 - \gamma_{t+T̂}\lambda^{(k)})^2.
Now, to bound the variance of the process with a decreasing sequence of learning rates, we will instead work with a constant learning rate γ and understand its variance:
(\lambda^{(k)})^2\sigma^2\sum_{\tau=1}^{T-T̂}\gamma^2(1 - \gamma\lambda^{(k)})^{2(T-T̂-\tau)} \leq \frac{\gamma^2\sigma^2(\lambda^{(k)})^2}{(2 - \lambda^{(k)}\gamma)\gamma\lambda^{(k)}} \leq \gamma\sigma^2\lambda^{(k)}.
What this implies in particular is that the variance is a monotonic function of the learning rate, and thus the overall variance can be bounded using the variance of the process run with a learning rate of γ_{T̂}:
(\lambda^{(k)})^2\sigma^2\sum_{\tau=1}^{T-T̂}\gamma_{\tau+T̂}^2\prod_{t=\tau+1}^{T-T̂}(1 - \gamma_{t+T̂}\lambda^{(k)})^2 \leq (\lambda^{(k)})^2\sigma^2\sum_{\tau=1}^{T-T̂}\gamma_{T̂}^2(1 - \gamma_{T̂}\lambda^{(k)})^{2(T-T̂-\tau)} \leq \gamma_{T̂}\sigma^2\lambda^{(k)} \leq \frac{\log T}{\mu T_e}\cdot\frac{\mu}{C\lambda^{(k)}}\cdot\sigma^2\lambda^{(k)} \leq \frac{\sigma^2\log T\log\kappa}{T}.
Plugging this into equation (4) and summing over all directions, we have the desired result.
Proof of Theorem 4. The learning rate scheme is as follows.
We first break T into three equal sized parts. Let A = T/3 and B = 2T/3. In the first T/3 steps, we use a constant learning rate of 1/L. In the second T/3 steps, we use a polynomially decaying learning rate η_{A+t} = \frac{1}{\mu(\kappa + t/2)}. In the third T/3 steps, we break the steps into \log_2(\kappa) equal sized phases. In the ℓ-th phase, the learning rate to be used is \frac{5\log_2\kappa}{2^{\ell}\cdot\mu\cdot T}. Note that the learning rate in the first phase depends on strong convexity and that in the last phase depends on smoothness (since the last phase has ℓ = \log_2\kappa).
Recall the variance in the kth coordinate can be upper bounded by
v_T^{(k)} := E[(w_T^{(k)} - (w^*)^{(k)})^2] \leq \prod_{j=1}^{T}(1 - \eta_j\lambda^{(k)})^2 v_0^{(k)} + \lambda^{(k)}\sigma^2\sum_{j=1}^{T}\eta_j^2\prod_{i=j+1}^{T}(1 - \eta_i\lambda^{(k)})^2
\leq \exp\Big(-2\sum_{j=1}^{T}\eta_j\lambda^{(k)}\Big) v_0^{(k)} + \lambda^{(k)}\sigma^2\sum_{j=1}^{T}\eta_j^2\exp\Big(-2\sum_{i=j+1}^{T}\eta_i\lambda^{(k)}\Big).
We will show that for every k, we have
v_T^{(k)} \leq \frac{v_0^{(k)}}{T^3} + \frac{50\log_2\kappa}{\lambda^{(k)} T}\cdot\sigma^2, \quad (5)
which directly implies the theorem.
We will consider the first T/3 steps. The guarantee that we will prove for these iterations is: for any t ≤ A, v_t^{(k)} \leq (1 - \lambda^{(k)}/L)^{2t} v_0^{(k)} + \frac{\sigma^2}{L}.
This can be proved easily by induction. Clearly this is true when t = 0. Suppose it is true for t − 1, and consider step t. By the recursion for v_t^{(k)} we know
v_t^{(k)} = (1 - \lambda^{(k)}/L)^2 v_{t-1}^{(k)} + \lambda^{(k)}\sigma^2/L^2
\leq (1 - \lambda^{(k)}/L)^{2t} v_0^{(k)} + \frac{\sigma^2}{L}\big((1 - \lambda^{(k)}/L)^2 + \lambda^{(k)}/L\big)
\leq (1 - \lambda^{(k)}/L)^{2t} v_0^{(k)} + \frac{\sigma^2}{L}.
Here the second step uses the induction hypothesis and the third step uses the fact that (1 - x)^2 + x \leq 1 when x ∈ [0, 1]. In particular, since (1 - \lambda^{(k)}/L)^{2T/3} \leq (1 - 1/\kappa)^{2T/3} \leq (1 - 1/\kappa)^{3\kappa\log T} \leq 1/T^3, we know that at the end of the first phase, v_A^{(k)} \leq v_0^{(k)}/T^3 + \frac{\sigma^2}{L}.
In the second T/3 steps, the guarantee is: for any t ≤ T/3, v_{A+t}^{(k)} \leq v_0^{(k)}/T^3 + 2\eta_{A+t}\sigma^2.
We will again prove this by induction. The base case (t = 0) follows immediately from the guarantee for the first part. Suppose this is true for A + t − 1, and consider A + t. Again by the recursion we know
v_{A+t}^{(k)} = (1 - \lambda^{(k)}\eta_{A+t-1})^2 v_{A+t-1}^{(k)} + \lambda^{(k)}\sigma^2\eta_{A+t-1}^2
\leq v_0^{(k)}/T^3 + 2\eta_{A+t-1}\sigma^2\big((1 - \lambda^{(k)}\eta_{A+t-1})^2 + \tfrac{1}{2}\lambda^{(k)}\eta_{A+t-1}\big)
\leq v_0^{(k)}/T^3 + 2\eta_{A+t-1}\sigma^2(1 - \tfrac{1}{2}\mu\eta_{A+t-1})
\leq v_0^{(k)}/T^3 + 2\eta_{A+t}\sigma^2.
Here the last line uses the fact that 2\eta_{A+t-1}(1 - \tfrac{1}{2}\mu\eta_{A+t-1}) \leq 2\eta_{A+t}, which is easy to verify from our choice of η. Therefore, at the end of the second part, we have v_B^{(k)} \leq v_0^{(k)}/T^3 + \frac{2\sigma^2}{\mu(\kappa + T/6)}.
Finally we will analyze the third part. Let T̂ = T/(3\log_2\kappa); we will consider the variance v_{B+\ell T̂}^{(k)} at the end of each phase. We will make the following claim, proved by induction:
Claim 2. Suppose 2^{\ell}\cdot\mu \leq \lambda^{(k)}. Then
v_{B+\ell T̂}^{(k)} \leq v_B^{(k)}\exp(-3\ell) + 2T̂\eta_\ell^2\lambda^{(k)}\sigma^2.
Proof. We will prove this by induction. When ℓ = 0, clearly we have v_B^{(k)} \leq v_B^{(k)}, so the claim is true. Suppose the claim is true for ℓ − 1; we will consider what happens after the algorithm uses η_ℓ for T̂ steps. By the recursion of the variance we have
v_{B+\ell T̂}^{(k)} \leq v_{B+(\ell-1)T̂}^{(k)}\cdot\exp(-2\eta_\ell\cdot\lambda^{(k)}T̂) + T̂\eta_\ell^2\lambda^{(k)}\sigma^2.
Since 2^{\ell}\cdot\mu \leq \lambda^{(k)}, we know \exp(-2\eta_\ell\cdot\lambda^{(k)}T̂) \leq \exp(-3). Therefore, by the induction hypothesis, we have
v_{B+\ell T̂}^{(k)} \leq v_B^{(k)}\exp(-3\ell) + \exp(-3)\cdot 2T̂\eta_{\ell-1}^2\lambda^{(k)}\sigma^2 + T̂\eta_\ell^2\lambda^{(k)}\sigma^2 \leq v_B^{(k)}\exp(-3\ell) + 2T̂\eta_\ell^2\lambda^{(k)}\sigma^2.
This finishes the induction.
By Claim 2, letting ℓ^* denote the number satisfying 2^{\ell^*}\cdot\mu \leq \lambda^{(k)} < 2^{\ell^*+1}\cdot\mu (by this choice we know \mu/\lambda^{(k)} \geq \tfrac{1}{2}\exp(-3\ell^*)), we have
v_T^{(k)} \leq v_{B+\ell^* T̂}^{(k)} \leq v_B^{(k)}\exp(-3\ell^*) + 2T̂\eta_{\ell^*}^2\lambda^{(k)}\sigma^2
\leq \frac{v_0^{(k)}}{T^3} + \frac{24\sigma^2}{\lambda^{(k)}T} + \frac{50\log_2\kappa}{3\lambda^{(k)}T}\cdot\sigma^2
\leq \frac{v_0^{(k)}}{T^3} + \frac{50\log_2\kappa}{\lambda^{(k)}T}\cdot\sigma^2.
Therefore, the function value is bounded by E[f(w_T)] = \sum_{k=1}^{d}\lambda^{(k)}v_T^{(k)} \leq \frac{f(w_0)}{T^3} + \frac{50\log_2\kappa}{T}\cdot\sigma^2 d.
C PROOFS OF RESULTS IN SECTION 5
All of our counter-examples in this section are going to be the same simple function. Let H be a diagonal matrix with d/2 eigenvalues equal to κ and the other d/2 eigenvalues equal to 1. Intuitively, we will show that in order to have a small error in the first eigendirection (with eigenvalue κ), one needs to set a small learning rate η_t, which would be too small to achieve a small error in the second eigendirection (with eigenvalue 1). As a useful tool, we will decompose the variance in the two directions corresponding to the κ eigenvalue and the 1 eigenvalue respectively, as follows:
v_T^{(1)} := E[(w_T^{(1)} - (w^*)^{(1)})^2] = \prod_{j=1}^{T}(1 - \eta_j\kappa)^2 v_0^{(1)} + \kappa\sigma^2\sum_{j=1}^{T}\eta_j^2\prod_{i=j+1}^{T}(1 - \eta_i\kappa)^2
\geq \exp\Big(-2\sum_{j=1}^{T}\eta_j\kappa\Big) v_0^{(1)} + \kappa\sigma^2\sum_{j=1}^{T}\eta_j^2\exp\Big(-2\sum_{i=j+1}^{T}\eta_i\kappa\Big), \quad (6)
v_T^{(2)} := E[(w_T^{(2)} - (w^*)^{(2)})^2] = \prod_{j=1}^{T}(1 - \eta_j)^2 v_0^{(2)} + \sigma^2\sum_{j=1}^{T}\eta_j^2\prod_{i=j+1}^{T}(1 - \eta_i)^2
\geq \exp\Big(-2\sum_{j=1}^{T}\eta_j\Big) v_0^{(2)} + \sigma^2\sum_{j=1}^{T}\eta_j^2\exp\Big(-2\sum_{i=j+1}^{T}\eta_i\Big). \quad (7)
Proof of Theorem 5. Fix τ = κ/(C\log(κ+1)), where C is a universal constant that we choose later. We need to exhibit that the lim sup is larger than τ. For simplicity we will also round κ up to the nearest integer.
Let T be a given number. Our goal is to exhibit a T̃ > T such that \frac{E[f(w_{T̃})]}{\sigma^2 d/T̃} \geq \tau. Given the step size sequence η_t, consider the sequence of numbers T_0 = T, T_1, \dots, T_\kappa such that T_i is the first number for which
\frac{1}{\kappa} \leq \sum_{t=T_{i-1}+1}^{T_i}\eta_t \leq \frac{3}{\kappa}.
Note that such a number always exists because all the step sizes are at most 2/κ. We will also let Δ_i be T_i − T_{i−1}. Firstly, by (6) and (7), we may assume that \sum_t\eta_t = \infty; otherwise, the bias never decays to zero and the claim holds trivially. If E[f(w_{T_{i-1}+\Delta_i})] > \tau\sigma^2 d/(T_{i-1}+\Delta_i) for some i = 1, \dots, \kappa, we are done. If not, we obtain the following relations:
\frac{\sigma^2}{\Delta_1} \leq \sigma^2\sum_{t=1}^{\Delta_1}\eta_{T_0+t}^2 \leq \frac{\exp(3)}{\kappa}\cdot E[(w_{T_0+\Delta_1}^{(1)} - (w^*)^{(1)})^2] \leq \exp(3)\,E[f(w_{T_0+\Delta_1})] \leq \frac{\exp(3)\tau\sigma^2}{T_0 + \Delta_1}
\Rightarrow T_0 \leq (\exp(3)\tau - 1)\,\Delta_1.
Here the second inequality is based on (6). We will use C_1 to denote exp(3). Similarly, we have
\frac{\sigma^2}{\Delta_2} \leq \sigma^2\sum_{t=1}^{\Delta_2}\eta_{T_1+t}^2 \leq \frac{C_1}{\kappa}\,E[(w_{T_1+\Delta_2}^{(1)} - (w^*)^{(1)})^2] \leq C_1\,E[f(w_{T_1+\Delta_2})] \leq \frac{C_1\tau\sigma^2}{T_1 + \Delta_2}
\Rightarrow T_1 \leq (C_1\tau - 1)\,\Delta_2 \Rightarrow T_0 \leq \frac{(C_1\tau - 1)^2}{C_1\tau}\,\Delta_2.
Repeating this argument, we can show that
T = T_0 \leq \frac{(C_1\tau - 1)^i}{(C_1\tau)^{i-1}}\,\Delta_i \quad\text{and}\quad T_i \leq \frac{(C_1\tau - 1)^{j-i}}{(C_1\tau)^{j-i-1}}\,\Delta_j \quad \forall\, i < j.
We will use i = 1 in particular, which specializes to
T_1 \leq \frac{(C_1\tau - 1)^{j-1}}{(C_1\tau)^{j-2}}\,\Delta_j \quad \forall\, j \geq 2.
Using the above inequality, we can lower bound the sum of the Δ_j as
\sum_{j=2}^{\kappa}\Delta_j \geq T_1\cdot\sum_{j=2}^{\kappa}\frac{(C_1\tau)^{j-2}}{(C_1\tau - 1)^{j-1}} \geq T_1\cdot\frac{1}{C_1\tau}\cdot\sum_{j=2}^{\kappa}\Big(1 + \frac{1}{C_1\tau}\Big)^{j-2} \geq T_1\cdot\frac{1}{C_1\tau}\cdot\exp(\kappa/(C_1\tau)). \quad (8)
This means that
E[f(w_{T_\kappa})] \geq \frac{d}{2}\cdot E[(w_{T_\kappa}^{(2)} - (w^*)^{(2)})^2] \geq \exp(-6)\sigma^2 d\cdot\sum_{i=1}^{\Delta_1}\eta_{T+i}^2 \geq \exp(-6)\frac{\sigma^2 d}{\Delta_1} \geq \exp(-6)\frac{\sigma^2 d}{T_1} \geq \frac{\exp(\kappa/(C_1\tau) - 3)}{C_1\tau}\cdot\frac{\sigma^2 d}{\sum_{j=2}^{\kappa}\Delta_j},
where we used (8) in the last step. Rearranging, we obtain
\frac{E[f(w_{T_\kappa})]}{\sigma^2 d/T_\kappa} \geq \frac{\exp(\kappa/(C_1\tau) - 3)}{C_1\tau}.
If we choose a large enough C (e.g., 3C_1), the right hand side is at least \frac{\exp((C/C_1)\log(\kappa+1) - 3)}{\kappa} \geq \kappa.
To prove Theorem 6, we rely on the following key lemma, which says that if a query point w_T is bad (in the sense that its expected value is more than 10τσ²d/T), then it takes at least Ω(T/τ) steps to bring the error back down.
Lemma 8. There exist universal constants C_1, C_2 > 0 such that for any τ ≤ \frac{\kappa}{C\,C_1\log(\kappa+1)}, where C is the constant in Theorem 5, the following holds: suppose at step T the query point w_T satisfies f(w_T) ≥ C_1τσ²d/T; then for all T̃ ∈ [T, (1 + \frac{1}{C_2\tau})T] we have E[f(w_{T̃})] ≥ τσ²d/T ≥ τσ²d/T̃.
Proof of Lemma 8. Since f(w_T) ≥ C_1τσ²d/T and f(w_T) = \frac{d}{2}\big(\kappa(w_T^{(1)} - (w^*)^{(1)})^2 + (w_T^{(2)} - (w^*)^{(2)})^2\big), we know that either (w_T^{(1)} - (w^*)^{(1)})^2 \geq C_1\tau\sigma^2/(2\kappa T) or (w_T^{(2)} - (w^*)^{(2)})^2 \geq C_1\tau\sigma^2/(2T). Either way, we have a coordinate i with eigenvalue λ_i (κ or 1) such that (w_T^{(i)} - (w^*)^{(i)})^2 \geq C_1\tau\sigma^2/(2T\lambda_i).
As before, choose Δ to be the first point such that
\eta_{T+1} + \eta_{T+2} + \dots + \eta_{T+\Delta} \in [1/\lambda_i, 3/\lambda_i].
First, by (6) or (7), we know that for any T ≤ T̃ ≤ T + Δ, E[(w_{T̃}^{(i)} - (w^*)^{(i)})^2] \geq \exp(-6)C_1\tau\sigma^2/(2\lambda_i T), just from the first term. When we choose C_1 to be large enough, the contribution to the function value by this direction alone is larger than τσ²/T. Therefore every query in [T, T+Δ] is still bad.
We will consider two cases based on the value of S_2 := \sum_{T̃=T+1}^{T+\Delta}\eta_{T̃}^2.
If S_2 \leq C_2\tau/(\lambda_i^2 T) (where C_2 is a large enough universal constant chosen later), then by Cauchy–Schwarz we know
S_2\cdot\Delta \geq \Big(\sum_{T̃=T+1}^{T+\Delta}\eta_{T̃}\Big)^2 \geq 1/\lambda_i^2.
Therefore Δ ≥ T/(C_2τ), and we are done. If S_2 > C_2\tau/(\lambda_i^2 T), by Equations (6) and (7) we know
E[(w_{T+\Delta}^{(i)} - (w^*)^{(i)})^2] \geq \sigma^2\sum_{T̃=T+1}^{T+\Delta}\eta_{T̃}^2\exp\Big(-2\lambda_i\sum_{j=T̃+1}^{T+\Delta}\eta_j\Big) \geq \exp(-6)\sigma^2\sum_{T̃=T+1}^{T+\Delta}\eta_{T̃}^2 \geq \exp(-6)\cdot C_2\tau\sigma^2/(\lambda_i^2 T).
Here the first inequality just uses the second term in Equation (6) or (7), the second inequality is because \sum_{j=T̃+1}^{T+\Delta}\eta_j \leq \sum_{j=T+1}^{T+\Delta}\eta_j \leq 3/\lambda_i, and the last inequality is just based on the value of S_2. In this case, as long as C_2 is large enough, T + Δ is also a point with E[f(w_{T+\Delta})] \geq \lambda_i E[(w_{T+\Delta}^{(i)} - (w^*)^{(i)})^2] \geq C_1\tau\sigma^2/(T + \Delta), so we can repeat the argument there. Eventually we either stop because we hit case 1 (S_2 \leq C_2\tau/(\lambda_i^2 T)), or case 2 (S_2 > C_2\tau/(\lambda_i^2 T)) happens more than T/(C_2τ) times. In either case we know that for any T̃ ∈ [T, (1 + 1/(C_2\tau))T], E[f(w_{T̃})] ≥ τσ²/T ≥ τσ²/T̃, as the lemma claimed.
Theorem 6 is an immediate corollary of Theorem 5 and Lemma 8.
Proof of Theorem 7. This result follows by running the constant and cut scheme for a fixed time horizon T and then increasing the time horizon to κ·T. The learning rate of the initial phase for the new horizon T' = κ·T is 1/(µT') = 1/(µκT) = 1/(LT), which is the final learning rate for time horizon T. Theorem 2 will then directly imply the current theorem.
D DETAILS OF EXPERIMENTAL SETUP
D.1 SYNTHETIC 2-D QUADRATIC EXPERIMENTS
As mentioned in the main paper, we consider three condition numbers namely κ ∈ {50, 100, 200}. We run all experiments for a total of 200κ iterations. The two eigenvalues of the Hessian are κ and 1 respectively, and noise level is σ2 = 1 and we average our results with two random seeds. All our grid search results are conducted on a 10× 10 grid of learning rates × decay factor and whenever a best run lands at the edge of the grid, the grid is extended so that we have the best run in the interior of the gridsearch.
For the O(1/t) learning rate, we search for the decay parameter over 10 points logarithmically spaced between {1/(100κ), 3000/κ}. The starting learning rate is searched over 10 points logarithmically spaced between {1/(20κ), 1000/κ}.
For the O(1/√t) learning rate, the decay parameter is searched over 10 logarithmically spaced points between {100/κ, 200000.0/κ}. The starting learning rate is searched between {0.01, 2}. For the exponential learning rate schemes, the decay parameter is searched between {exp(−2/N), exp(−10⁶/N)}. The learning rate is searched between {1/5000, 1/10}.
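As a concrete illustration, the grids described above can be generated with np.geomspace; the sketch below mirrors the stated endpoints, with N taken to be the total number of iterations (200κ) — an assumption on our part, since N is not defined explicitly in the text.

```python
import numpy as np

kappa = 100.0
N = 200 * kappa   # assumed: total number of iterations in the synthetic runs

grid_1_over_t = {
    "eta0": np.geomspace(1 / (20 * kappa), 1000 / kappa, 10),
    "decay": np.geomspace(1 / (100 * kappa), 3000 / kappa, 10),
}
grid_1_over_sqrt_t = {
    "eta0": np.geomspace(0.01, 2.0, 10),
    "decay": np.geomspace(100 / kappa, 200000.0 / kappa, 10),
}
grid_exponential = {
    "eta0": np.geomspace(1 / 5000, 1 / 10, 10),
    "decay": np.geomspace(np.exp(-2 / N), np.exp(-1e6 / N), 10),
}
print({k: v[:3] for k, v in grid_exponential.items()})
```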
D.2 NON-CONVEX EXPERIMENTS ON CIFAR-10 DATASET WITH A 44-LAYER RESIDUAL NET
As mentioned in the main paper, for all the experiments, we use Nesterov’s accelerated gradient method (Nesterov, 1983) implemented in pytorch 12 with momentum set to 0.9, batchsize set to 128, total number of training epochs set to 100, and ℓ2 regularization set to 0.0005.
With regards to learning rates, we consider 10 values geometrically spaced as {1, 0.6, · · · , 0.01}. To set the decay factor for any of the schemes (1), (2), and (3), we use the following rule. Suppose we have a desired learning rate that we wish to use towards the end of the optimization (say, something that is 100 times lower than the starting learning rate, which is a reasonable estimate of what is typically employed in practice); this can be used to obtain a decay factor for the corresponding decay scheme. In our case, we found it advantageous to use an additively spaced grid for the learning rate γt, i.e., one which is searched over a range {0.0001, 0.0002, · · · , 0.0009, 0.001, · · · , 0.009} at the 80th epoch, and we cap off the minimum possible learning rate to be used at 0.0001 to ensure that there is progress made by the optimization routine. For any of the experiments that yield the best performing gridsearch parameter that falls at the edge of the grid, we extend the grid to ensure that the finally chosen hyperparameter lies in the interior of the grid. All our gridsearches are run such that we separate a tenth of the training dataset as a validation set and train on the remaining 9/10th dataset. Once the best grid search parameter is chosen, we train on the entire training dataset and
12https://github.com/pytorch
evaluate on the test dataset and present the result of the final model (instead of choosing the best possible model found during the course of optimization).

1. What are the main contributions and strengths of the paper regarding theoretical analysis?
2. How realistic is the noise model used by the authors compared to related literature?
3. Do the results hold for an arbitrary noise covariance matrix, and how does this impact the proofs?
4. Should the authors compare or discuss constant step size with averaging, and how would this affect the experiments?
5. Are there any presentation issues, such as unclear expectations, missing equation numbers, and minor typos?

Review
This paper presents a theoretical study of different learning rate schedules. Its main results are statistical minimax lower bounds for both polynomial and constant-and-cut schemes.
I enjoyed reading the paper and I think the contributions in it shed some light on step size schedules that have been shown to be useful in practice. I do however have some concerns that I hope the authors can address in their rebuttal. My initial rating is marginally below acceptance but I will gladly increase this rating if my concerns are addressed.
# Pros
* The paper is written in a way that's both clear and accessible.
* The theoretical contributions are important, as they address the choice of step size in one of the most used optimization methods in machine learning, and are novel to the best of my knowledge.
* Due to time constraints, I only skimmed through the proofs, but results seem correct.
# Concerns
My biggest concern is that it is unclear how realistic their noise model is. The authors assume that the noise in the stochastic gradients e verifies E[e e^T] = \sigma H. While they claim that this is verified for problems like least squares, it is not clear to me that this is indeed the case. Related work like (Moulines and Bach, 2013) and (Flammarion and Bach, 2015) take the same setting but can only assume that the covariance of the noise is _bounded_ by a matrix of sigma times H. How do the authors obtain a much stronger condition on the noise covariance with the same assumption? I would be much more convinced with a proof in the appendix clearly showing that the assumptions in footnote 8 imply the aforementioned covariance of the noise, and a paragraph comparing their noise model with that of related literature like the aforementioned references (I'm not affiliated with any of that work).
Also, the authors claim that their results hold for an arbitrary noise covariance matrix, but the proofs are all done with the specific \sigma H matrix. I don't think it's OK to say "our results hold for a more general setting" without proof. If they do hold for a more general setting, then the proofs should be done in the general setting. If not, it should only be mentioned as future work. Please edit that remark accordingly.
* The paper does not compare or discuss against constant step size with averaging, which has been shown to be theoretically optimal in some scenarios (see aforementioned papers). This should at least be mentioned, and ideally also included in experiments.
# Presentation issues
Clarity of the proofs can be improved. For example, in Theorem 1, the formula for v_T(1) and v_T(d) follow from a recurrence that is stated _below_ the formula, needing several passes to understand. The proofs could benefit from a pass on them to improve the flow.
It is never clear whether expectations are taken with respect to the full randomness of the algorithm or conditioned on previous randomness. The E in Eq. just-before-section-4 (please add equation numbers) is a full expectation while the E[e] should be conditioned on previous randomness. The expectation in footnote 8 is also unclear if its wrt to the stochasticity of the algorithm or the randomness in the data generating process.
No equation numbers makes it difficult to reference equations. Please add equation numbers so that reviewing is not more difficult than it should (and others can reference your work more precisely).
Other minor presentation issues include:
* Page 1: Why l-BFGS and not L-BFGS? the lowercase l makes it look like a 1.
* Page 2: There important -> There *are* important.
* Page 2: In fact, at least ... (missing parenthesis around Omega tilde).
* Page 4: "a stochastic gradient oracle which gives us" the second w should also be boldface.
* Page 4: I would have appreciated
* Page 11: "variance in the i-th" direction. It would be more correct to say in the i-th coordinate as otherwise it can be mistaken with the i-th update direction.
Update:
I am satisfied with the answers and have upgraded my rating.
The corresponding practically best performing learning rate scheme is often one where the step size is cut by a constant factor once every few epochs, or, equivalently, when no progress is made on a validation set (Krizhevsky et al., 2012; He et al., 2016b) (often called a dev set based decay scheme). Such schemes are widely popular to the extent that they are available as schemes in deep learning libraries such as PyTorch 1 and several such useful tools of the trade are taught on popular deep learning courses 2. Furthermore, what is (often) puzzling (from a theory perspective) is the emphasis that is laid on “babysitting” the learning rates 3 to achieve the best performance. Why do practitioners use constant and cut learning rate schemes while most of the theory work routinely works with polynomial decaying schemes? Of course, implicit to this question is the view that both of these schemes are not equivalent. Indeed if both of these were equivalent, one could parameterize the learning rate as ab+tα and do hyperparameter search over a, b and α. In practice, this simply does not give results comparable to the constant and cut schemes.4 One potential explanation for this could be that, in the context of neural network training, local minima found by constant and cut schemes are of much better quality than those found by polynomial decay schemes, while for convex problems, polynomial decay schemes are indeed optimal.
The primary contribution of this work is to show that this is simply not the case. We concretely show how minimax optimal theoretical learning rates (i.e. polynomial decay schemes for wide classes of convex optimization problems) may be misleading (and sub-optimal for locally quadratic problems), and the story in practice is more nuanced. There important issues at play with regards to this suboptimality. First, even for the simple case of stochastic linear regression, with a fixed time horizon, the rate achieved by any polynomial decay scheme (i.e., any choice of a, b and α) is suboptimal compared to the statistical minimax rate (i.e., information theoretically best possible rate achievable by any algorithm) by a factor of condition number κ (see Section 3 for definitions), while there exist constant and cut schemes that are suboptimal only by a factor of log κ.
Second, this work shows that a factor of κ suboptimality is unavoidable if we wish to bound the error of each iterate of SGD. In other words, we show that the convergence rate of lim sup of the error, as t→∞, has to be necessarily suboptimal by a factor of Ω̃(κ) compared to the statistical minimax rate, for any learning rate sequence (polynomial or not). In fact, at least Ω̃1/κ fraction of the iterates have this suboptimality. With this result, things become quite clear – all the works in stochastic approximation try to bound the error of each iterate of SGD asymptotically (or lim sup of the error in other words). Since this necessarily has to be suboptimal by a factor of Ω̃(κ) compared to the statistical minimax rates, the suboptimality of polynomial decay rates is not an issue. However, with a fixed time horizon, there exist learning rate schemes with much better convergence rates, while polynomial decay schemes fail to get better rates in this simpler setting (of known time horizon).
Thirdly, the work shows that, for stochastic linear regression, if we consider lim inf (rather than lim sup) of the error, it is possible to design schemes that are suboptimal by only a factor of log κ compared to the minimax rates. Variants of the constant and cut schemes achieve this guarantee.
In summary, the contributions of this paper are showing how widely used pratical learning rate schedules are, in fact, highly effective even in the convex case. In particular, our theory and empirical results demonstrate this showing that:
1https://pytorch.org/docs/stable/optim.html#torch.optim.lr_scheduler. ReduceLROnPlateau
2http://cs231n.github.io/ 3http://cs231n.github.io/neural-networks-3/ 4In fact, this work shows an instance where there is a significant (provable) difference between the perfor-
mance of these two schemes.
• For a fixed time horizon, constant and cut schemes are provably, significantly better than polynomial decay schemes. • There is a fundamental difference between fixed time horizon and infinite time horizon. • The above difference can be mitigated by considering lim inf of error instead of lim sup. • In addition to our theoretical contributions, we empirically verify the above claims for
neural network training on cifar-10.
Extending results on the performance of constant and cut schemes to more general convex optimization problems, beyond stochastic linear regression, is an important future direction. However, the fact that the suboptimality of polynomial decay schemes even for the simple case of stochastic linear regression, has not been realized after decades of research on stochastic approximation is striking.
In summary, the results of this paper show that, even for stochastic linear regression, the popular in practice, constant and cut learning rate schedules are provably better than polynomial decay schemes popular in theory and that there is a need to rethink learning rate schemes and convergence guarantees for stochastic approximation. Our results also suggest that current approaches to hyperparameter tuning of learning rate schedules might not be right headed and further suggest potential ways of improving them.
Paper organization: The paper is organized as follows. We review related work in Section 2. Section 3 describes the notation and problem setup. Section 4 presents our results on the suboptimality of both polynomial decay schemes and constant and cut schemes. Section 5 presents results on infinite horizon setting. Section 6 presents experimental results and Section 7 concludes the paper.
2 RELATED WORK
We will split related work into two parts, one based on theory and the other based on practice.
Related efforts in theory: SGD and the problem of stochastic approximation was introduced in the seminal work of Robbins & Monro (1951); this work also elaborates on stepsize schemes that are satisfied by asymptotically convergent stochastic gradient methods: we refer to these schemes as “convergent” stepsize sequences. The (asymptotic) statistical optimality of iterate averaged SGD with larger stepsize schemes ofO(1/nα) with α ∈ (0.5, 1) was proven in the seminal works of Ruppert (1988); Polyak & Juditsky (1992). The notions of convergent learning rate schemes in stochastic approximation literature has been studied in great detail (Ljung et al., 1992; Kushner & Yin, 2003; Bharath & Borkar, 1999; Lai, 2003). Nearly all of the aforementioned works rely on function value sub-optimality to measure convergence and rely on the notion of asymptotic convergence (i.e. in the limit of the number of updates of SGD tending to infinity) to derive related “convergent stepsize schedules”. Along this line of thought, there are several efforts that prove (minimax) optimality of the aforementioned rates (in a worst case sense and not per problem sense) e.g., Nemirovsky & Yudin (1983); Raginsky & Rakhlin (2011); Agarwal et al. (2012).
An alternative viewpoint is to consider gradient norm as a means to measure the progress of an algorithm. Along this line of thought are several works including the stochastic process viewpoint considered by Polyak & Juditsky (1992) and more recently, the work of Nesterov (2012) (working with deterministic (exact) gradients). The work of Allen-Zhu (2018) considers questions relating to making the gradient norm small when working with stochastic gradients, and provides an improved rate. We return to this criterion in Section 7.
In terms of oracle models, note that both this paper, as well as other results (Lacoste-Julien et al., 2012; Rakhlin et al., 2012; Bubeck, 2014), work in an oracle model that assumes bounded variance of stochastic gradients or similar assumptions. There is an alternative oracle model for analyzing SGD as followed in papers includingBach & Moulines (2013); Bach (2014); Jain et al. (2017) which is arguably more reflective of SGD’s behavior in practice. For more details, refer to Jain et al. (2017). It is an important direction to prove the results of this paper working in the alternative practically more applicable oracle model.
Efforts in practice: As highlighted in the introduction, practical efforts in stochastic optimization have diverged from the classical theory of stochastic approximation, with several deep learning
libraries like pytorch 5 providing unconventional alternatives such as cosine/sawtooth/dev set decay schemes, or even exponentially decaying learning rate schemes. In fact, a natural scheme used in training convolutional neural networks for vision is where the learning rate is cut by a constant factor after a certain number of epochs. Such schemes are essentially discretized variants of exponentially decaying learning rate schedules. We note that there are other learning rate schedules that have been recently proposed such as sgd with warm restarts (Loshchilov & Hutter, 2016), oscillating learning rates (Smith & Topin, 2017) etc., that are unconventional and have attracted a fair bit of attention. Furthermore, exponential learning rates appear to be considered in more recent NLP papers (see for e.g., Krishnamurthy et al. (2017)) 6.
3 PROBLEM SETUP
Notation: We represent scalars with normal font a, b, L etc., vectors with boldface lowercase characters a, b etc. and matrices with boldface uppercase characters A, B etc. We represent positive semidefinite (PSD) ordering between two matrices using ⪯. The symbol ≳ indicates that the stated direction of the inequality holds up to a universal constant.
Our theoretical results focus on the following additive noise stochastic linear regression problem. We present the setup and associated notation in this section. We wish to solve:
min_{w ∈ R^d} f(w),   where   f(w) := (1/2) w^⊤ H w − w^⊤ b
for some positive definite matrix H and vector b.⁷ We denote the smallest and largest eigenvalues of H by µ > 0 and L > 0, and κ := L/µ denotes the condition number of H. We have access to a stochastic gradient oracle which gives us ∇̂f(w) = ∇f(w) + e, where e is a random vector satisfying⁸ E[e] = 0 and E[ee^⊤] = σ²H.
Given an initial point w0 and step size sequence ηt, the SGD algorithm proceeds with the update
w_t = w_{t−1} − η_t ∇̂f(w_{t−1}) = w_{t−1} − η_t (∇f(w_{t−1}) + e_t),
where the e_t are independent across t and satisfy the above mean and variance conditions.
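For concreteness, the update above can be simulated in a few lines of Python. The sketch below is purely illustrative: it assumes a diagonal H and Gaussian noise with covariance σ²H, whereas the oracle model only requires the stated first and second moments.

```python
import numpy as np

def sgd_quadratic(H_diag, b, w0, etas, sigma, rng):
    """Run w_t = w_{t-1} - eta_t * (H w_{t-1} - b + e_t) with E[e e^T] = sigma^2 * H (H diagonal)."""
    w = w0.copy()
    noise_scale = sigma * np.sqrt(H_diag)          # coordinate-wise std of e for diagonal H
    for eta in etas:
        grad = H_diag * w - b                      # exact gradient of (1/2) w^T H w - b^T w
        e = noise_scale * rng.standard_normal(len(w))
        w = w - eta * (grad + e)
    return w

def excess_risk(H_diag, b, w):
    """f(w) - f(w*) = (1/2) (w - w*)^T H (w - w*) for the quadratic objective."""
    w_star = b / H_diag
    return 0.5 * np.sum(H_diag * (w - w_star) ** 2)
```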
Let w* := argmin_{w ∈ R^d} f(w). The suboptimality of a point w is given by f(w) − f(w*). It is well known that given t accesses to the stochastic gradient oracle above, any algorithm that uses these stochastic gradients and outputs ŵ_t has suboptimality lower bounded by σ²d/t. More concretely (Van der Vaart, 2000), we have that
lim_{t→∞} (E[f(ŵ_t)] − f(w*)) / (σ²d/t) ≥ 1.
Moreover there exist schemes that achieve this rate of (1 + o(1)) σ²d/t, e.g., constant step size SGD with averaging (Polyak & Juditsky, 1992). This rate of σ²d/t is called the statistical minimax rate.
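The averaging scheme mentioned above is easy to simulate as a reference point; a minimal sketch building on the sgd_quadratic example earlier in this section (the constant stepsize is left as a free parameter and should be chosen small enough for stability, e.g. below 2/L):

```python
import numpy as np

def averaged_sgd(H_diag, b, w0, T, eta, sigma, rng):
    """Constant-stepsize SGD with Polyak-Ruppert averaging of the iterates."""
    w = w0.copy()
    w_bar = np.zeros_like(w)
    noise_scale = sigma * np.sqrt(H_diag)
    for t in range(T):
        e = noise_scale * rng.standard_normal(len(w))
        w = w - eta * (H_diag * w - b + e)
        w_bar += (w - w_bar) / (t + 1)             # running average of the iterates
    return w_bar
```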
4 COMPARISON BETWEEN POLYNOMIAL DECAY SCHEMES VS CONSTANT AND CUT SCHEMES
In this section, we will show that polynomial decay schemes are suboptimal compared to the statistical minimax rate by at least a factor of κ while constant and cut schemes are suboptimal by at most a factor of log κ.
5 See https://pytorch.org/docs/stable/optim.html for a complete list of alternatives.
6 Refer to their JSON file https://github.com/allenai/allennlp/blob/master/training_config/wikitables_parser.jsonnet
7 Any linear least squares objective (1/2n) ∑_{i=1}^n (x_i^⊤w − y_i)² can be written as above with H := (1/n) ∑_i x_i x_i^⊤ and b := (1/n) ∑_i y_i x_i.
8 While this might seem very special, this is indeed a fairly natural scenario. For instance, in stochastic linear regression with independent additive noise, i.e., y_t = x_t^⊤w* + ε_t where ε_t is a random variable independent of x_t with E[ε_t] = 0 and E[ε_t²] = σ², the noise in the gradient has this property. On the other hand, the results in this paper can also be generalized to the setting where E[ee^⊤] = V for some arbitrary matrix V. However, an error covariance of σ²H significantly simplifies the exposition.
4.1 SUBOPTIMALITY OF POLYNOMIAL DECAY SCHEMES
Our first result shows that there exist problem instances where all polynomial decay schemes, i.e., those of the form a/(b + t^α), for any choice of a, b and α, are suboptimal by at least a factor of Ω(κ) compared to the statistical minimax rate.
Theorem 1. There exists a problem instance such that the initial function value f(w₀) ≤ σ²d, and for any fixed time T satisfying T ≥ κ², for all a, b ≥ 0 and 0.5 ≤ α ≤ 1, and for the learning rate scheme η_t = a/(b + t^α), we have E[f(w_T)] − f(w*) ≥ (κ/32) · σ²d/T.
4.2 SUBOPTIMALITY OF CONSTANT AND CUT SCHEME
Our next result shows that there exist constant and cut schemes that achieve the statistical minimax rate up to a multiplicative factor of only log κ · log² T.
Theorem 2. For any problem and fixed time horizon T > κ log(κ), there exists a constant and cut learning rate scheme that achieves E[f(w_T)] − f(w*) ≤ (f(w₀) − f(w*))/T³ + 2 log κ · log² T · σ²d/T.
We will now consider an exponential decay scheme (in contrast to the polynomial ones from Section 4.1), which is a smoother version of the constant and cut scheme. We show that the result above for the constant and cut scheme extends to the exponential decay scheme.
Theorem 3. For any problem and fixed horizon T, there exist constants a and b such that the learning rate scheme η_t = b · exp(−at) achieves E[f(w_T)] − f(w*) ≤ (f(w₀) − f(w*))/T^{2−(1/100)} + log κ · log T · σ²d/T.
The above results show that constant and cut as well as exponential decay schemes, which depend on the time horizon, are much better than polynomial decay schemes. Between these, exponential decay schemes are smoother versions of constant and cut schemes, and so one would hope that they might have better performance than constant and cut schemes – we do see a log T difference in our bounds. One unsatisfying aspect of the above results is that the rate behaves as log T/T, which is asymptotically worse than the statistical rate of 1/T. It turns out that it is indeed possible to improve the rate to 1/T using a more sophisticated scheme. The main idea is to use constant and polynomial schemes in the beginning and then switch to a constant and cut (or exponential decay) scheme later. To the best of our knowledge, these kinds of schemes have never been considered in the stochastic optimization literature before. Using this learning rate sequence successively for increasing time horizons would lead to oscillating learning rates. We leave a complete analysis of oscillating learning rates (for a moving time horizon) to future work.
Theorem 4. Fix κ ≥ 2. For any problem and fixed time horizon T/log T > 5κ, there exists a learning rate scheme that achieves E[f(w_T)] − f(w*) ≤ (f(w₀) − f(w*))/T³ + 50 log₂ κ · σ²d/T.
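The fixed-horizon schedules compared in this section can be written down explicitly. The following Python sketch constructs one admissible instance each of the constant and cut and exponential decay sequences, using the constants from the proofs in Appendix B; the choice of logarithm base for the number of phases only affects constants and is an illustrative assumption here, not a prescription.

```python
import numpy as np

def constant_and_cut(T, kappa, mu):
    """Theorem 2 style: ~log(kappa) equal phases; in phase l the rate is log(T)*log(kappa)/(2^l * mu * T)."""
    n_phases = max(1, int(np.ceil(np.log2(kappa))))
    phase_len = max(1, T // n_phases)
    etas = []
    for l in range(1, n_phases + 1):
        etas += [np.log(T) * np.log(kappa) / (2 ** l * mu * T)] * phase_len
    while len(etas) < T:                      # pad with the final-phase rate if rounding left a gap
        etas.append(etas[-1])
    return np.array(etas[:T])

def exponential_decay(T, kappa, mu):
    """Theorem 3 style: eta_t = gamma0 * c^(t-1), gamma0 = log(T)/(mu*T_e), c = 1 - 1/T_e, T_e = T/log(kappa)."""
    T_e = T / np.log(kappa)
    gamma0 = np.log(T) / (mu * T_e)
    c = 1.0 - 1.0 / T_e
    return gamma0 * c ** np.arange(T)
```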
5 INFINITE HORIZON SETTING
In this section we show a fundamental limitation of the SGD algorithm. First we will prove that the SGD algorithm, for any learning rate sequence, needs to query a point with suboptimality more than Ω(κ/log κ) · σ²d/T for infinitely many time steps T.
Theorem 5. There exists a universal constant C > 0 such that for any SGD algorithm with η_t ≤ 1/(2κ) for all t⁹, we have lim sup_{T→∞} (E[f(w_T)] − f(w*)) / (σ²d/T) ≥ κ / (C log(κ+1)).
Next we will show that, in some sense, the “fraction” of query points that have value more than τσ²/T is at least Ω(1/τ) when τ is smaller than the threshold in Theorem 5.
Theorem 6. There exist universal constants C₁, C₂ > 0 such that for any τ ≤ κ/(C·C₁·log(κ+1)), where C is the constant in Theorem 5, for any SGD algorithm and any number of iterations T > 0, there exists a T′ ≥ T such that for any T̃ ∈ [T′, (1 + 1/(C₂τ))T′] we have (E[f(w_T̃)] − f(w*)) / (σ²d/T̃) ≥ τ.
Finally, we now show that there are constant and cut or exponentially decaying schemes that achieve the statistical minimax rate up to a factor of log κ · log² T in the lim inf sense.
9Learning rate more than 2/κ will make the algorithm diverge.
Theorem 7. There exists an absolute constant C and a constant and cut learning rate scheme that obtains lim inf_{T→∞} (E[f(w_T)] − f(w*)) / (σ²d log² T / T) ≤ C log κ.
Similar results can be obtained for the exponential decay scheme of Theorems 3 and 4 with moving time horizon. However the resultant learning rates might have oscillatory behavior. This might partly explain the benefits of oscillating learning rates observed in practice (Smith & Topin, 2017).
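The lim inf guarantee of Theorem 7 comes from restarting the fixed-horizon scheme with a horizon that grows by a factor of κ (see the proof in Appendix C); concatenating the resulting sequences makes the learning rate repeatedly jump up and decay, i.e. an oscillating schedule. A schematic construction, building on the constant_and_cut sketch from Section 4 (the initial horizon T0 is an illustrative free parameter):

```python
import numpy as np

def moving_horizon_schedule(T_total, kappa, mu, T0):
    """Concatenate constant-and-cut schedules for horizons T0, kappa*T0, kappa^2*T0, ...
    Each restart begins again at the largest phase rate, producing an oscillating sequence."""
    etas = []
    T = T0
    while len(etas) < T_total:
        etas.extend(constant_and_cut(T, kappa, mu))
        T *= int(kappa)
    return np.array(etas[:T_total])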
6 EXPERIMENTAL RESULTS
We present experimental validation of our claims through controlled synthetic experiments on a two-dimensional quadratic objective and on a real world non-convex optimization problem of training a residual network on the cifar-10 dataset, to illustrate the shortcomings of the traditional stochastic approximation perspective (and the advantages of non-convergent exponentially decaying and oscillating learning rate schemes) for a realistic problem encountered in practice. Complete details of the experimental setup are given in Appendix D.
6.1 SYNTHETIC EXPERIMENTS: TWO-DIMENSIONAL QUADRATIC OBJECTIVE
We consider the problem of optimizing a two-dimensional quadratic objective, similar in spirit to what is considered in the theoretical results of this paper. In particular, for a two-dimensional quadratic, we have two eigenvalues, one of magnitude κ and the other being 1. We vary our condition number κ ∈ {50, 100, 200} and use a total of 200κ iterations for optimization. The results expressed in this section are obtained by averaging over two random seeds. The learning rate schemes we search over are:
η_t = η₀ / (1 + b·t),   (1)
η_t = η₀ / (1 + b·√t),   (2)
η_t = η₀ · exp(−b·t).   (3)
For the schemes detailed above, there are two parameters that need to be searched over: (i) the starting learning rate η0 and, (ii) the decay factor b. We perform a grid search over both these parameters and choose ones that yield the best possible final error at a given end time (i.e. 200κ). We also make sure to extend the grid should a best performing grid search parameter fall at the edge of the grid so that all presented results lie in the interior of our final grid searched parameters.
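A convenient alternative for this setting (not necessarily what was done for the reported numbers, which average over random seeds) is to track the expected excess risk exactly via the per-coordinate variance recursion from Appendix A, so the grid search needs no sampling. A sketch for the exponential scheme (3); the grid endpoints below are illustrative, not the ones listed in Appendix D.

```python
import numpy as np

def exact_risk(eigs, v0, etas, sigma):
    """Expected excess risk of the final iterate, tracking v_{t+1} = (1 - eta*lam)^2 v_t + lam*sigma^2*eta^2."""
    v = np.array(v0, dtype=float)
    for eta in etas:
        v = (1 - eta * eigs) ** 2 * v + eigs * sigma ** 2 * eta ** 2
    return 0.5 * np.sum(eigs * v)

def grid_search_exponential(kappa, T, sigma=1.0):
    eigs = np.array([kappa, 1.0])
    v0 = np.array([sigma ** 2 / kappa, sigma ** 2])   # initialization used in the Appendix A construction
    best = (np.inf, None)
    for eta0 in np.logspace(-4, 0, 10):
        for b in np.logspace(-6, -2, 10):
            etas = eta0 * np.exp(-b * np.arange(T))
            risk = exact_risk(eigs, v0, etas, sigma)
            if risk < best[0]:
                best = (risk, (eta0, b))
    return best
```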
We will present results for the following experiments: (i) behavior of the error of the final iterate of the SGD method with the three learning rate schemes (1),(2), and (3) as we vary the condition number, and (ii) how the exponentially decaying learning rate scheme (3) optimized for a shorter time horizon behaves for a longer horizon.
For the variation of the final iterate’s excess risk when considered with respect to the condition number (Figure 1), we note that polynomially decaying schemes have excess risk that scales linearly with condition number, corroborating Theorem 1. In contrast, exponentially decaying learning rate scheme admits excess risk that nearly appears to be a constant and corroborates Theorem 3. Finally, we note that the learning rate schedule that offers the best possible error in 50κ or 100κ steps does not offer the best error at 200κ steps (Table 1).
6.2 NON-CONVEX OPTIMIZATION: TRAINING A RESIDUAL NET ON CIFAR-10
We consider here the task of training a 44-layer deep residual network (He et al., 2016b) with pre-activation blocks (He et al., 2016a) (dubbed preresnet-44) for classifying images in the cifar-10 dataset. The code for implementing the network employed in this paper can be found here¹⁰. For all the experiments, we use Nesterov’s accelerated gradient method (Nesterov, 1983) implemented in pytorch¹¹ with momentum set to 0.9, batch size set to 128, total number of training epochs set to 100, and ℓ2 regularization set to 0.0005.
Our experiments are based on grid searching for the best learning rate decay scheme over the parametric families of learning rate schedules described above in (1), (2), and (3); all grid searches are performed on a separate validation set (obtained by setting aside one-tenth of the training dataset = 5000 images) and with models trained on the remaining 45000 images. For presenting the final numbers in the plots/tables, we employ the best hyperparameters from the validation stage, train on the entire 50,000 images, and average results over runs with 10 different random seeds. The parameters for the grid searches and related details are presented in Appendix D. Furthermore, just as with the synthetic experiments, we always extend the grid so that the best performing grid search parameter lies in the interior of our grid search.
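For reference, the optimizer and scheduler configuration described above can be expressed in a few lines of PyTorch. This is a hedged reconstruction of the setup rather than the exact training script: the model constructor, the cut epochs in the MultiStep variant, and the decay factor gamma are placeholders to be filled by the grid search.

```python
import torch

def make_optimizer_and_scheduler(model, lr0, gamma, scheme="exponential"):
    """Nesterov momentum SGD with either an exponential or a constant-and-cut (MultiStep) schedule."""
    optimizer = torch.optim.SGD(model.parameters(), lr=lr0, momentum=0.9,
                                weight_decay=5e-4, nesterov=True)
    if scheme == "exponential":
        scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=gamma)
    else:  # constant-and-cut: multiply the rate by `gamma` at fixed epochs (milestones are illustrative)
        scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[50, 75], gamma=gamma)
    return optimizer, scheduler

# Usage: after each training epoch's minibatch loop, call scheduler.step() to apply the decay.
```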
Comparison between different schemes: Figure 2 and Table 2 present a comparison of the performance of the three schemes (1)-(3). They clearly demonstrate that the best exponential scheme outperforms the best polynomial schemes.
Hyperparameter selection using truncated runs: Figure 3 and Tables 3 and 4 present a comparison of the performance of three exponential decay schemes each of which has the best performance at 33, 66 and 100 epochs respectively. The key point to note is that best performing hyperparameters at 33 and 66 epochs are not the best performing at 100 epochs (which is made stark from the perspective of the validation error). This demonstrates that selecting hyper parameters using truncated runs, which has been proposed in some recent efforts such as hyperband (Li et al., 2017), might necessitate rethinking.
10https://github.com/D-X-Y/ResNeXt-DenseNet 11https://github.com/pytorch
7 CONCLUSIONS AND DISCUSSION
The main contribution of this work shows that the picture of learning rate scheduling is far more nuanced than suggested by prior theoretical results, where we do not even need to move to nonconvex optimization to show other learning rate schemes can be far more effective than the standard polynomially decaying rates considered in theory.
Is quadratic loss minimization special? One may ask if there is something particularly special about why the minimax rates are different for quadratic loss minimization as opposed to more general convex (and non-convex) optimization problems? Ideally, we would hope that our theoretical insights (and improvements) can be formally established in more general cases. Here, an alternative viewpoint is to consider gradient norm as a means to measure the progress of an algorithm. The recent work of Allen-Zhu (2018) shows marked improvements for making the gradient norm small (when working with stochastic gradients) for both convex and non-convex, in comparison to prior results. In particular, for the strongly convex case, Allen-Zhu (2018) provides results which have only a logarithmic dependency on κ, an exponential improvement over what is implied by standard analyses for the gradient norm (Lacoste-Julien et al., 2012; Rakhlin et al., 2012; Bubeck, 2014); Allen-Zhu (2018) also provides improvements for the smooth and non-convex cases. Thus, for the case of making the gradient norm small, there does not appear to be a notable discrepancy between the minimax rate of quadratic loss minimization in comparison to more general strongly convex (or smooth) convex optimization problems. Interestingly, the algorithm of Allen-Zhu (2018) provides a recursive regularization procedure that obtains an SGD procedure, where the doubling regularization can be viewed as being analogous to an exponentially decaying learning rate schedule. Further work in this direction may be promising in providing improved algorithms.
A PROOFS OF RESULTS IN SECTION 4.1
Proof of Theorem 1. The problem instance is simple. Let H = diag(κ, ..., κ, 1, ..., 1), where the first d/2 diagonal entries are equal to κ and the remaining d/2 diagonal entries are equal to 1, and all the off-diagonal entries are equal to zero. Let us denote by v_t^(i) := E[(w_t^(i) − (w*)^(i))²] the variance in the i-th direction at time step t. Let the initialization be such that v_0^(i) = σ²/κ for i = 1, 2, ..., d/2 and v_0^(i) = σ² for i = d/2 + 1, ..., d. This means that the variances for all directions with eigenvalue κ remain equal as t progresses, and similarly for all directions with eigenvalue 1. We have
v_T^(1) := E[(w_T^(1) − (w*)^(1))²] = ∏_{j=1}^T (1 − η_j κ)² v_0^(1) + κσ² ∑_{j=1}^T η_j² ∏_{i=j+1}^T (1 − η_i κ)²,   and
v_T^(d) := E[(w_T^(d) − (w*)^(d))²] = ∏_{j=1}^T (1 − η_j)² v_0^(d) + σ² ∑_{j=1}^T η_j² ∏_{i=j+1}^T (1 − η_i)².
We consider a recursion for v_t^(i) with eigenvalue λ_i (1 or κ). By the design of the algorithm, we know
v_{t+1}^(i) = (1 − η_t λ_i)² v_t^(i) + λ_i σ² η_t².
Let s(η, λ) := λσ²η² / (1 − (1 − ηλ)²) be the solution to the stationary point equation x = (1 − ηλ)² x + λσ²η². Intuitively, if we keep using the same learning rate η, then v_t^(i) is going to converge to s(η, λ_i). Also note that s(η, λ) ≈ σ²η/2 when ηλ ≪ 1. We first prove the following claim showing that eventually the variance in direction i is going to be at least s(η_T, λ_i).
Claim 1. Suppose s(η_t, λ_i) ≤ v_0^(i); then v_t^(i) ≥ s(η_t, λ_i).
Proof. We can rewrite the recursion as
v_{t+1}^(i) − s(η_t, λ_i) = (1 − η_t λ_i)² (v_t^(i) − s(η_t, λ_i)).
In this form, it is easy to see that the iteration is a contraction towards s(η_t, λ_i). Further, v_{t+1}^(i) − s(η_t, λ_i) and v_t^(i) − s(η_t, λ_i) have the same sign. In particular, let t₀ be the first time such that s(η_t, λ_i) ≤ v_0^(i) (note that η_t is monotone and so is s(η_t, λ_i)); it is easy to see that v_t^(i) ≥ v_0^(i) when t ≤ t₀. Therefore we know v_{t₀}^(i) ≥ s(η_{t₀}, λ_i); by the recursion this implies v_{t₀+1}^(i) ≥ s(η_{t₀}, λ_i) ≥ s(η_{t₀+1}, λ_i). The claim then follows from a simple induction.
If s(η_T, λ_i) ≥ v_0^(i) for i = 1 or i = d, then the error is at least σ²d/2 ≥ κσ²d/T and we are done. Therefore we must have s(η_T, κ) ≤ v_0^(1) = σ²/κ, and by Claim 1 we know v_T^(1) ≥ s(η_T, κ) ≥ σ²η_T/2. The function value is then at least
E[f(w_T)] ≥ (d/2) · κ v_T^(1) ≥ dκσ²η_T / 4.
To make sure E[f(w_T)] ≤ dκσ²/(32T) we must have η_T ≤ 1/(8T). Next we will show that when this happens, v_T^(d) must be large, so the function value is still large.
We will consider two cases. In the first case, b ≥ T^α. Since 1/(8T) ≥ η_T = a/(b + T^α) ≥ a/(2b), we have a/b ≤ 1/(4T). Therefore v_T^(d) ≥ (1 − a/b)^{2T} v_0^(d) ≥ σ²/2, so the function value is at least E[f(w_T)] ≥ (d/2) v_T^(d) ≥ dσ²/4 ≥ κdσ²/T, and we are done.
In the second case, b < T^α. Since 1/(8T) ≥ η_T = a/(b + T^α) ≥ a/(2T^α), we have a ≤ 0.25 T^{α−1}. The sum of learning rates satisfies
∑_{i=1}^T η_i ≤ ∑_{i=1}^T a/i^α ≤ ∑_{i=1}^T 0.25 i^{−1} ≈ 0.25 log T.
Here the second inequality uses the fact that T^{α−1} i^{−α} ≤ i^{−1} when i ≤ T. Similarly, we also know ∑_{i=1}^T η_i² ≤ ∑_{i=1}^T 0.25 i^{−2} ≤ π²/24. Using the approximation (1 − η)² ≥ exp(−2η − 4η²) for η < 1/4, we get v_T^(d) ≥ exp(−2∑_{i=1}^T η_i − 4∑_{i=1}^T η_i²) v_0^(d) ≥ σ²/(5√T), so the function value is at least E[f(w_T)] ≥ (d/2) v_T^(d) ≥ dσ²/(20√T) ≥ κdσ²/(32T). This concludes the second case and proves the theorem.
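As a standalone numerical sanity check of the recursion and its fixed point s(η, λ) used in the proof above (not part of the argument; the constants are arbitrary illustrative values):

```python
import numpy as np

def stationary_variance(eta, lam, sigma):
    """Fixed point of v <- (1 - eta*lam)^2 * v + lam * sigma^2 * eta^2."""
    return lam * sigma ** 2 * eta ** 2 / (1 - (1 - eta * lam) ** 2)

eta, lam, sigma = 0.01, 2.0, 1.0
v = 1.0
for _ in range(10000):                      # iterate the recursion with a frozen learning rate
    v = (1 - eta * lam) ** 2 * v + lam * sigma ** 2 * eta ** 2
assert abs(v - stationary_variance(eta, lam, sigma)) < 1e-8   # converges to s(eta, lam) ~ sigma^2*eta/2
```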
B PROOFS OF RESULTS IN SECTION 4.2
Proof of Theorem 2. The learning rate scheme is as follows. Divide the total time horizon into log(κ) equal sized phases. In the ℓ-th phase, the learning rate to be used is (log T · log κ)/(2^ℓ · µ · T). Note that the learning rate in the first phase depends on strong convexity and that in the last phase depends on smoothness (since the last phase has ℓ = log κ). Recall that the variance in the k-th coordinate can be upper bounded by
v_T^(k) := E[(w_T^(k) − (w*)^(k))²] ≤ ∏_{j=1}^T (1 − η_j λ^(k))² v_0^(k) + λ^(k)σ² ∑_{j=1}^T η_j² ∏_{i=j+1}^T (1 − η_i λ^(k))²
≤ exp(−2 ∑_{j=1}^T η_j λ^(k)) v_0^(k) + λ^(k)σ² ∑_{j=1}^T η_j² exp(−2 ∑_{i=j+1}^T η_i λ^(k)).
We will show that for every k, we have
v_T^(k) ≤ v_0^(k)/T³ + 2 log κ · log² T / (λ^(k)T) · σ²,
which directly implies the theorem. Now choose any k. Let ℓ* denote the number satisfying 2^{ℓ*}·µ ≤ λ^(k) < 2^{ℓ*+1}·µ. Note that ℓ* depends on k but we suppress the dependence for notational simplicity.
v_T^(k) ≤ exp(−2 ∑_{j=1}^T η_j λ^(k)) v_0^(k) + λ^(k)σ² ∑_{j=1}^T η_j² exp(−2 ∑_{i=j+1}^T η_i λ^(k))
≤ exp(−2 · (log T log κ / T) · λ^(k) · (T / log κ) · 1/(2^{ℓ*}µ)) · (v_0^(k) + λ^(k)σ² · (T / log κ) · ∑_{ℓ=1}^{ℓ*−1} η_ℓ²)
  + λ^(k)σ² (η_{ℓ*}² · 1/(1 − exp(−η_{ℓ*}λ^(k))) + ∑_{ℓ=ℓ*+1}^{log κ} (T / log κ) · ((log κ · log T)/(2^ℓ µ T))²)
≤ v_0^(k)/T³ + κσ² · (log²κ · log²T)/(µT³) + η_{ℓ*}σ² + σ² · (log κ · log²T / T) · ∑_{ℓ=ℓ*+1}^{log κ} 1/(2^ℓ µ)
≤ v_0^(k)/T³ + σ² · (log κ · log²T / T) · (1/λ^(k) + 1/(2^{ℓ*}µ))
≤ v_0^(k)/T³ + 2 log κ · log²T / (λ^(k)T) · σ².
This finishes the proof.
Proof of Theorem 3. The learning rate scheme we consider is γ_t = γ₀ · c^{t−1} with γ₀ = log T/(µT_e), T_e = T/log κ and c = 1 − 1/T_e. Further, just as in the previous lemmas, we consider a specific eigendirection λ^(k) and write out the progress made along this direction up to some iteration, say T̂ ≤ T:
err_T̂^(k) = ∏_{t=1}^{T̂} (1 − γ_t λ^(k))² err_0^(k) + (λ^(k))²σ² ∑_{τ=1}^{T̂} γ_τ² ∏_{t=τ+1}^{T̂} (1 − γ_t λ^(k))²
≤ exp(−2λ^(k)γ₀ ∑_{t=1}^{T̂} c^{t−1}) err_0^(k) + (λ^(k))²σ² (γ₀²/c²) ∑_{τ=1}^{T̂} c^{2τ} exp(−2λ^(k)γ₀ ∑_{t=τ+1}^{T̂} c^{t−1})
= exp(−(2λ^(k)γ₀/(1−c)) · (1 − c^{T̂})) err_0^(k) + (λ^(k))²σ² (γ₀²/c²) ∑_{τ=1}^{T̂} c^{2τ} exp(−(2λ^(k)γ₀/(1−c)) · (c^τ − c^{T̂}))
= exp((2λ^(k)γ₀/(1−c)) · c^{T̂}) · [ exp(−2λ^(k)γ₀/(1−c)) err_0^(k) + (λ^(k))²σ² (γ₀²/c²) ∑_{τ=1}^{T̂} c^{2τ} exp(−(2λ^(k)γ₀/(1−c)) · c^τ) ].
Represent c^τ = x and 2λ^(k)γ₀/(1 − c) = α. Substituting the values of these quantities, we have α = 2 log T · λ^(k)/µ ≥ 2. Now, the second term is upper bounded using the corresponding integral, which is the following:
∑_{x = c^{T̂}}^{c} x² exp(−αx) ≤ ∫_{c^{T̂}}^{1} x² exp(−αx) dx ≤ (1/α) · (1 + 2/α + 2/α²) · exp(−α).
Substituting this in the previous bound, we have:
err_T̂^(k) ≤ exp(−α(1 − c^{T̂})) · err_0^(k) + (λ^(k))²σ²γ₀²/(αc²) · (1 + √(2/α))²
≤ exp(−α(1 − c^{T̂})) · (err_0^(k) + 16 (λ^(k))²σ²γ₀²/α)
≤ exp(−α(1 − c^{T̂})) · (err_0^(k) + 16 (λ^(k))²σ² log T / (µ²T_e² · 2(λ^(k)/µ)))
= exp(−α(1 − c^{T̂})) · (err_0^(k) + 8 λ^(k)σ² log T / (µT_e²)).
Now, setting T̂ = T_e log(Cλ^(k)/µ), and using 1 − a ≤ exp(−a), with C > 1 being some (large) universal constant, we have:
err_T̂^(k) ≤ exp(−2(λ^(k)/µ) log T · (1 − µ/(Cλ^(k)))) · (err_0^(k) + 8 λ^(k)σ² log T/(µT_e²))
≤ (1/T^{2−(1/C)}) · (err_0^(k) + 8 λ^(k)σ² log T/(µT_e²)).   (4)
Now, in order to argue the progress of the algorithm from T̂ + 1 to T , we can literally view the algorithm as starting with the iterate obtained from running for the first T̂ steps (thus satisfying the excess risk guarantee in equation 4) and then adding in variance by running for the duration of time between T̂ + 1 to T . For this part, we basically upper bound this behavior by first assuming that there is no contraction of the bias and then consider the variance introduced by running the algorithm from T̂ + 1 to T . This can be written as:
err_T^(k) ≤ ∏_{t=T̂+1}^{T} (1 − γ_t λ^(k))² err_T̂^(k) + (λ^(k))²σ² ∑_{τ=1}^{T−T̂} γ_{τ+T̂}² ∏_{t=τ+1}^{T−T̂} (1 − γ_{t+T̂} λ^(k))²
≤ err_T̂^(k) + (λ^(k))²σ² ∑_{τ=1}^{T−T̂} γ_{τ+T̂}² ∏_{t=τ+1}^{T−T̂} (1 − γ_{t+T̂} λ^(k))².
Now, to bound the variance of the process with a decreasing sequence of learning rates, we will instead work with a constant learning rate γ and understand its variance:
(λ^(k))²σ² ∑_{τ=1}^{T−T̂} γ² (1 − γλ^(k))^{2(T−T̂−τ)} ≤ γ²σ²(λ^(k))² / ((2 − λ^(k)γ)γλ^(k)) ≤ γσ²λ^(k).
What this implies in particular is that the variance is a monotonic function of the learning rate, and thus the overall variance can be bounded using the variance of the process run with a learning rate of γ_T̂:
(λ^(k))²σ² ∑_{τ=1}^{T−T̂} γ_{τ+T̂}² ∏_{t=τ+1}^{T−T̂} (1 − γ_{t+T̂} λ^(k))² ≤ (λ^(k))²σ² ∑_{τ=1}^{T−T̂} γ_T̂² (1 − γ_T̂ λ^(k))^{2(T−T̂−τ)} ≤ γ_T̂ σ²λ^(k)
≤ (log T/(µT_e)) · (µ/(Cλ^(k))) · σ²λ^(k) ≤ σ² log T log κ / T.
Plugging this into equation 4 and summing over all directions, we have the desired result.
Proof of Theorem 4. The learning rate scheme is as follows.
We first break T into three equal sized parts. Let A = T/3 and B = 2T/3. In the first T/3 steps, we use a constant learning rate of 1/L. In the second T/3 steps, we use a polynomially decaying learning rate η_{A+t} = 1/(µ(κ + t/2)). In the third T/3 steps, we break the steps into log₂(κ) equal sized phases. In the ℓ-th phase, the learning rate to be used is η_ℓ = 5 log₂κ/(2^ℓ · µ · T). Note that the learning rate in the first phase depends on strong convexity and that in the last phase depends on smoothness (since the last phase has ℓ = log₂κ).
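Written out as a single sequence, this three-part schedule looks as follows; the sketch mirrors the phase rates defined above, with rounding of the logarithms as an illustrative implementation choice rather than part of the proof.

```python
import numpy as np

def hybrid_schedule(T, kappa, mu):
    """Constant (1/L), then polynomial 1/(mu*(kappa + t/2)), then log2(kappa) constant-and-cut phases."""
    L = kappa * mu
    A = T // 3
    etas = [1.0 / L] * A                                                # part 1: constant
    etas += [1.0 / (mu * (kappa + t / 2.0)) for t in range(1, A + 1)]   # part 2: polynomial decay
    n_phases = max(1, int(np.ceil(np.log2(kappa))))
    phase_len = max(1, (T - 2 * A) // n_phases)
    for l in range(1, n_phases + 1):                                    # part 3: constant and cut
        etas += [5 * np.log2(kappa) / (2 ** l * mu * T)] * phase_len
    while len(etas) < T:                                                # pad if integer rounding left a gap
        etas.append(etas[-1])
    return np.array(etas[:T])
```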
Recall that the variance in the k-th coordinate can be upper bounded by
v_T^(k) := E[(w_T^(k) − (w*)^(k))²] ≤ ∏_{j=1}^T (1 − η_j λ^(k))² v_0^(k) + λ^(k)σ² ∑_{j=1}^T η_j² ∏_{i=j+1}^T (1 − η_i λ^(k))²
≤ exp(−2 ∑_{j=1}^T η_j λ^(k)) v_0^(k) + λ^(k)σ² ∑_{j=1}^T η_j² exp(−2 ∑_{i=j+1}^T η_i λ^(k)).
We will show that for every k, we have
v_T^(k) ≤ v_0^(k)/T³ + 50 log₂κ / (λ^(k)T) · σ²,   (5)
which directly implies the theorem.
We will consider the first T/3 steps. The guarantee that we will prove for these iterations is: for any t ≤ A, v_t^(k) ≤ (1 − λ^(k)/L)^{2t} v_0^(k) + σ²/L.
This can be proved easily by induction. Clearly this is true when t = 0. Suppose it is true for t − 1; let us consider step t. By the recursion for v_t^(k) we know
v_t^(k) = (1 − λ^(k)/L)² v_{t−1}^(k) + λ^(k)σ²/L²
≤ (1 − λ^(k)/L)^{2t} v_0^(k) + (σ²/L)((1 − λ^(k)/L)² + λ^(k)/L)
≤ (1 − λ^(k)/L)^{2t} v_0^(k) + σ²/L.
Here the second step uses the induction hypothesis and the third step uses the fact that (1 − x)² + x ≤ 1 when x ∈ [0, 1]. In particular, since (1 − λ^(k)/L)^{2T/3} ≤ (1 − 1/κ)^{2T/3} ≤ (1 − 1/κ)^{3κ log T} ≤ 1/T³, we know that at the end of the first part, v_A^(k) ≤ v_0^(k)/T³ + σ²/L.
In the second T/3 steps, the guarantee will be: for any t ≤ T/3, v_{A+t}^(k) ≤ v_0^(k)/T³ + 2η_{A+t}σ².
We will again prove this by induction. The base case (t = 0) follows immediately from the guarantee for the first part. Suppose this is true for A + t − 1 and let us consider A + t; again by the recursion we know
v_{A+t}^(k) = (1 − λ^(k)η_{A+t−1})² v_{A+t−1}^(k) + λ^(k)σ²η_{A+t−1}²
≤ v_0^(k)/T³ + 2η_{A+t−1}σ²((1 − λ^(k)η_{A+t−1})² + (1/2)λ^(k)η_{A+t−1})
≤ v_0^(k)/T³ + 2η_{A+t−1}σ²(1 − (1/2)µη_{A+t−1})
≤ v_0^(k)/T³ + 2η_{A+t}σ².
Here the last line uses the fact that 2η_{A+t−1}(1 − (1/2)µη_{A+t−1}) ≤ 2η_{A+t}, which is easy to verify from our choice of η. Therefore, at the end of the second part, we have v_B^(k) ≤ v_0^(k)/T³ + 2σ²/(µ(κ + T/6)).
Finally we will analyze the third part. Let T̂ = T/(3 log₂κ); we will consider the variance v_{B+ℓT̂}^(k) at the end of each phase. We will prove the following claim by induction:
Claim 2. Suppose 2^ℓ · µ ≤ λ^(k); then
v_{B+ℓT̂}^(k) ≤ v_B^(k) exp(−3ℓ) + 2T̂ η_ℓ² λ^(k)σ².
Proof. We will prove this by induction. When ℓ = 0, clearly we have v_B^(k) ≤ v_B^(k), so the claim is true. Suppose the claim is true for ℓ − 1; we will consider what happens after the algorithm uses η_ℓ for T̂ steps. By the recursion of the variance we have
v_{B+ℓT̂}^(k) ≤ v_{B+(ℓ−1)T̂}^(k) · exp(−2η_ℓ · λ^(k)T̂) + T̂ η_ℓ² λ^(k)σ².
Since 2^ℓ · µ ≤ λ^(k), we know exp(−2η_ℓ · λ^(k)T̂) ≤ exp(−3). Therefore by the induction hypothesis we have
v_{B+ℓT̂}^(k) ≤ v_B^(k) exp(−3ℓ) + exp(−3) · 2T̂ η_{ℓ−1}² λ^(k)σ² + T̂ η_ℓ² λ^(k)σ² ≤ v_B^(k) exp(−3ℓ) + 2T̂ η_ℓ² λ^(k)σ².
This finishes the induction.
By Claim 2, letting ℓ* denote the number satisfying 2^{ℓ*}·µ ≤ λ^(k) < 2^{ℓ*+1}·µ (by this choice we know µ/λ^(k) ≥ (1/2) exp(−3ℓ*)), we have
v_T^(k) ≤ v_{B+ℓ*T̂}^(k) ≤ v_B^(k) exp(−3ℓ*) + 2T̂ η_{ℓ*}² λ^(k)σ²
≤ v_0^(k)/T³ + 24σ²/(λ^(k)T) + 50 log₂κ/(3λ^(k)T) · σ²
≤ v_0^(k)/T³ + 50 log₂κ/(λ^(k)T) · σ².
Therefore, the function value is bounded by E[f(w_T)] − f(w*) ≤ ∑_{k=1}^d λ^(k) v_T^(k) ≤ f(w₀)/T³ + 50 log₂κ/T · σ²d.
C PROOFS OF RESULTS IN SECTION 5
All of our counter-examples in this section are going to use the same simple function. Let H be a diagonal matrix with d/2 eigenvalues equal to κ and the other d/2 eigenvalues equal to 1. Intuitively, we will show that in order to have a small error in the first eigendirection (with eigenvalue κ), one needs to set a small learning rate η_t, which would be too small to achieve a small error in the second eigendirection (with eigenvalue 1). As a useful tool, we will decompose the variance in the two directions corresponding to the κ eigenvalue and the 1 eigenvalue respectively as follows:
v_T^(1) := E[(w_T^(1) − (w*)^(1))²] = ∏_{j=1}^T (1 − η_j κ)² v_0^(1) + κσ² ∑_{j=1}^T η_j² ∏_{i=j+1}^T (1 − η_i κ)²
≥ exp(−2 ∑_{j=1}^T η_j κ) v_0^(1) + κσ² ∑_{j=1}^T η_j² exp(−2 ∑_{i=j+1}^T η_i κ),   (6)
and
v_T^(2) := E[(w_T^(2) − (w*)^(2))²] = ∏_{j=1}^T (1 − η_j)² v_0^(2) + σ² ∑_{j=1}^T η_j² ∏_{i=j+1}^T (1 − η_i)²
≥ exp(−2 ∑_{j=1}^T η_j) v_0^(2) + σ² ∑_{j=1}^T η_j² exp(−2 ∑_{i=j+1}^T η_i).   (7)
Proof of Theorem 5. Fix τ = κ/(C log(κ+1)), where C is a universal constant that we choose later. We need to exhibit that the lim sup is larger than τ. For simplicity we will also round κ up to the nearest integer.
Let T be a given number. Our goal is to exhibit a T̃ > T such that E[f(w_T̃)]/(σ²d/T̃) ≥ τ. Given the step size sequence η_t, consider the sequence of numbers T₀ = T, T₁, ..., T_κ such that T_i is the first number for which
1/κ ≤ ∑_{t=T_{i−1}+1}^{T_i} η_t ≤ 3/κ.
Note that such a number always exists because all the step sizes are at most 2/κ. We will also let ∆_i be T_i − T_{i−1}. Firstly, from (6) and (7), we see that ∑_t η_t = ∞; otherwise, the bias will never decay to zero. If E[f(w_{T_{i−1}+∆_i})] > τσ²d/(T_{i−1}+∆_i) for some i = 1, ..., κ, we are done. If not, we obtain the following relations:
σ²/∆₁ ≤ σ² ∑_{t=1}^{∆₁} η_{T₀+t}² ≤ (exp(3)/κ) · E[(w_{T₀+∆₁}^(1) − (w*)^(1))²] ≤ exp(3) · E[f(w_{T₀+∆₁})] ≤ exp(3)τσ²/(T₀ + ∆₁)
⇒ T₀ ≤ (exp(3)τ − 1)∆₁.
Here the second inequality is based on (6). We will use C₁ to denote exp(3). Similarly, we have
σ²/∆₂ ≤ σ² ∑_{t=1}^{∆₂} η_{T₁+t}² ≤ (C₁/κ) · E[(w_{T₁+∆₂}^(1) − (w*)^(1))²] ≤ C₁ · E[f(w_{T₁+∆₂})] ≤ C₁τσ²/(T₁ + ∆₂)
⇒ T₁ ≤ (C₁τ − 1)∆₂   ⇒   T₀ ≤ ((C₁τ − 1)²/(C₁τ)) ∆₂.
Repeating this argument, we can show that
T = T₀ ≤ ((C₁τ − 1)^i/(C₁τ)^{i−1}) ∆_i   and   T_i ≤ ((C₁τ − 1)^{j−i}/(C₁τ)^{j−i−1}) ∆_j   for all i < j.
We will use i = 1 in particular, which specializes to
T₁ ≤ ((C₁τ − 1)^{j−1}/(C₁τ)^{j−2}) ∆_j   for all j ≥ 2.
Using the above inequality, we can lower bound the sum of the ∆_j as
∑_{j=2}^κ ∆_j ≥ T₁ · ∑_{j=2}^κ (C₁τ)^{j−2}/(C₁τ − 1)^{j−1} ≥ T₁ · (1/(C₁τ)) · ∑_{j=2}^κ (1 + 1/(C₁τ))^{j−2} ≥ T₁ · (1/(C₁τ)) · exp(κ/(C₁τ)).   (8)
This means that
E[f(w_{T_κ})] ≥ (d/2) · E[(w_{T_κ}^(2) − (w*)^(2))²] ≥ exp(−6)σ²d · ∑_{i=1}^{∆₁} η_{T+i}²
≥ exp(−6)σ²d/∆₁ ≥ exp(−6)σ²d/T₁ ≥ (exp(κ/(C₁τ) − 3)/(C₁τ)) · σ²d/(∑_{j=2}^κ ∆_j),
where we used (8) in the last step. Rearranging, we obtain
E[f(w_{T_κ})]/(σ²d/T_κ) ≥ exp(κ/(C₁τ) − 3)/(C₁τ).
If we choose a large enough C (e.g., 3C₁), the right hand side is at least exp((C/C₁) log(κ+1) − 3)/κ ≥ κ.
To prove Theorem 6, we rely on the following key lemma, which says that if a query point w_T is bad (in the sense that it has expected value more than 10τσ²d/T), then it takes at least Ω(T/τ) steps to bring the error back down.
Lemma 8. There exist universal constants C₁, C₂ > 0 such that for any τ ≤ κ/(C·C₁·log(κ+1)), where C is the constant in Theorem 5, the following holds: suppose at step T the query point w_T satisfies f(w_T) ≥ C₁τσ²d/T; then for all T̃ ∈ [T, (1 + 1/(C₂τ))T] we have E[f(w_T̃)] ≥ τσ²d/T ≥ τσ²d/T̃.
Proof of Lemma 8. Since f(w_T) ≥ C₁τσ²d/T and f(w_T) = (d/2)(κ(w_T^(1) − (w*)^(1))² + (w_T^(2) − (w*)^(2))²), we know that either (w_T^(1) − (w*)^(1))² ≥ C₁τσ²/(2κT) or (w_T^(2) − (w*)^(2))² ≥ C₁τσ²/(2T). Either way, we have a coordinate i with eigenvalue λ_i (κ or 1) such that (w_T^(i) − (w*)^(i))² ≥ C₁τσ²/(2Tλ_i).
Similarly as before, choose ∆ to be the first point such that
η_{T+1} + η_{T+2} + ··· + η_{T+∆} ∈ [1/λ_i, 3/λ_i].
First, by (6) or (7), we know that for any T ≤ T̃ ≤ T + ∆,
E[(w_{T̃}^(i) − (w*)^(i))²] ≥ exp(−6) · C₁τσ²/(2λ_iT),
just by the first term. When we choose C₁ to be large enough, the contribution to the function value from this direction alone is larger than τσ²/T. Therefore every query in [T, T + ∆] is still bad.
We will consider two cases based on the value of S₂ := ∑_{T̃=T+1}^{T+∆} η_T̃².
If S₂ ≤ C₂τ/(λ_i²T) (where C₂ is a large enough universal constant chosen later), then by Cauchy–Schwarz we know
S₂ · ∆ ≥ (∑_{T̃=T+1}^{T+∆} η_T̃)² ≥ 1/λ_i².
Therefore ∆ ≥ T/(C₂τ), and we are done. If S₂ > C₂τ/(λ_i²T), by Equations (6) and (7) we know
E[(w_{T+∆}^(i) − (w*)^(i))²] ≥ σ² ∑_{T̃=T+1}^{T+∆} η_T̃² exp(−2λ_i ∑_{j=T̃+1}^{T+∆} η_j) ≥ exp(−6)σ² ∑_{T̃=T+1}^{T+∆} η_T̃² ≥ exp(−6) · C₂τσ²/(λ_i²T).
Here the first inequality just uses the second term in Equation (6) or (7), the second inequality is because ∑_{j=T̃+1}^{T+∆} η_j ≤ ∑_{j=T+1}^{T+∆} η_j ≤ 3/λ_i, and the last inequality is just based on the value of S₂. In this case, as long as C₂ is large enough, T + ∆ is also a point with E[f(w_{T+∆})] ≥ λ_i E[(w_{T+∆}^(i) − (w*)^(i))²] ≥ C₁τσ²/(T + ∆), so we can repeat the argument there. Eventually we either stop because we hit case 1 (S₂ ≤ C₂τ/(λ_i²T)), or case 2 (S₂ > C₂τ/(λ_i²T)) happens more than T/(C₂τ) times. In either case we know that for any T̃ ∈ [T, (1 + 1/(C₂τ))T], E[f(w_T̃)] ≥ τσ²/T ≥ τσ²/T̃, as the lemma claimed.
Theorem 6 is an immediate corollary of Theorem 5 and Lemma 8.
Proof of Theorem 7. This result follows by running the constant and cut scheme for a fixed time horizon T and then increasing the time horizon to κ·T. The learning rate of the initial phase for the new horizon T′ = κ·T is 1/(µT′) = 1/(µ·κT) = 1/(LT), which is the final learning rate for time horizon T. Theorem 2 will then directly imply the current theorem.
D DETAILS OF EXPERIMENTAL SETUP
D.1 SYNTHETIC 2-D QUADRATIC EXPERIMENTS
As mentioned in the main paper, we consider three condition numbers namely κ ∈ {50, 100, 200}. We run all experiments for a total of 200κ iterations. The two eigenvalues of the Hessian are κ and 1 respectively, and noise level is σ2 = 1 and we average our results with two random seeds. All our grid search results are conducted on a 10× 10 grid of learning rates × decay factor and whenever a best run lands at the edge of the grid, the grid is extended so that we have the best run in the interior of the gridsearch.
For the O(1/t) learning rate, we search for the decay parameter over 10 points logarithmically spaced between {1/(100κ), 3000/κ}. The starting learning rate is searched over 10 points logarithmically spaced between {1/(20κ), 1000/κ}.
For the O(1/√t) learning rate, the decay parameter is searched over 10 logarithmically spaced points between {100/κ, 200000.0/κ}. The starting learning rate is searched between {0.01, 2}. For the exponential learning rate schemes, the decay parameter is searched between {exp(−2/N), exp(−10⁶/N)}. The learning rate is searched between {1/5000, 1/10}.
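For completeness, the logarithmically spaced grids above can be generated as follows; this is an illustrative reconstruction (the exact endpoints used in the experiments are those listed in the text), shown here for a single condition number.

```python
import numpy as np

kappa = 100.0
# 10 log-spaced points per axis, following the ranges described above.
decay_grid_1_over_t  = np.geomspace(1 / (100 * kappa), 3000 / kappa, 10)
lr_grid_1_over_t     = np.geomspace(1 / (20 * kappa), 1000 / kappa, 10)
decay_grid_1_over_rt = np.geomspace(100 / kappa, 200000.0 / kappa, 10)
lr_grid_1_over_rt    = np.geomspace(0.01, 2.0, 10)
```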
D.2 NON-CONVEX EXPERIMENTS ON CIFAR-10 DATASET WITH A 44-LAYER RESIDUAL NET
As mentioned in the main paper, for all the experiments, we use Nesterov’s accelerated gradient method (Nesterov, 1983) implemented in pytorch¹² with momentum set to 0.9, batch size set to 128, total number of training epochs set to 100, and ℓ2 regularization set to 0.0005.
With regards to learning rates, we consider 10−values geometrically spaced as {1, 0.6, · · · , 0.01}. To set the decay factor for any of the schemes such as 1,2, and 3, we use the following rule. Suppose we have a desired learning rate that we wish to use towards the end of the optimization (say, something that is 100 times lower than the starting learning rate, which is a reasonable estimate of what is typically employed in practice), this can be used to obtain a decay factor for the corresponding decay scheme. In our case, we found it advantageous to use an additively spaced grid for the learning rate γt, i.e., one which is searched over a range {0.0001, 0.0002, · · · , 0.0009, 0.001, · · · , 0.009} at the 80th epoch, and cap off the minimum possible learning rate to be used to be 0.0001 to ensure that there is progress made by the optimization routine. For any of the experiments that yield the best performing gridsearch parameter that falls at the edge of the grid, we extend the grid to ensure that the finally chosen hyperparameter lies in the interior of the grid. All our gridsearches are run such that we separate a tenth of the training dataset as a validation set and train on the remaining 9/10th dataset. Once the best grid search parameter is chosen, we train on the entire training dataset and
12https://github.com/pytorch
evaluate on the test dataset and present the result of the final model (instead of choosing the best possible model found during the course of optimization). | 1. What is the main contribution of the paper regarding learning-rate schedules for stochastic optimization?
2. How does the proposed learning rate schedule compare to other approaches in terms of convergence rates?
3. What are some concerns about the presentation of the results, particularly regarding the connection to deep learning applications?
4. How does the paper misrepresent the stochastic optimization literature, specifically regarding the use of polynomially decaying stepsizes?
5. Are there any technical limitations to the results, such as assuming a simple additive-noise model for gradients?
6. How could the authors improve their presentation to focus solely on discussing the actual results without attempting to draw disproportionate conclusions from them? | Review | Review
The paper studies the effect of learning-rate choices for stochastic optimization, focusing on least-mean-squares with decaying stepsizes. The main result is showing that exponentially decaying stepsizes can yield improved rates of convergence of the final iterate in terms of dependence on the condition number. The proposed learning rate schedule depends on the condition number and the number of iterations. This positive result is complemented by showing that without prior knowledge of the time horizon, any stepsize sequence will frequently yield suboptimal solutions.
I have mixed feelings about the paper. On the positive side, the particular observation that exponential learning-rate schedules lead to faster convergence for SGD in linear least-squares problems indeed seems to be a novel result, and the lower bound also appears to be new and interesting. The analysis seems to be technically correct as well.
On the other hand, I have several concerns about the presentation of the results:
- The abstract and the introduction sets up a misleading narrative around the results: the authors seem to suggest that their work somehow explains why certain learning-rate schedules work better than others for deep learning applications / non-convex optimization, although the actual results exclusively concern the classical problem of linear least-squares regression. This presentation is completely uncalled for as the authors themselves admit that it is unclear how the results would generalize to other convex optimization settings, let alone non-convex optimization. Also, I think that this presentation style is rather harmful as it suggests that learning-theory results concerning classical setups are somehow embarrassing, so they need to be sold through some made-up connections to trendy topics in deep learning. I would suggest that the authors completely "rethink" the presentation of the paper and write it in a style that is consistent with the actual results: as a learning theory paper, without the irrelevant deep learning experiments (that only show well-known phenomena anyway).
- The paper misrepresents a large body of work on stochastic/online optimization. Specifically, the authors suggest that the stochastic optimization literature exclusively suggests the use of polynomially decaying stepsizes. This picture is grossly inaccurate for multiple reasons:
*** It has been known for a while that the de facto optimal tuning of SGD for least squares involves a large constant stepsize and iterate averaging (see, e.g., Bach and Moulines, NIPS 2013). This approach is only mentioned in passing without any discussion, even though it yields convergence rates that do not involve *any* dependence on the condition number in the leading term---thus achieving a much more significant improvement than the learning-rate schedule studied in this paper. In light of these results, learning-rate schedules are already being "re-thought" as we speak, and studying the behavior of the last iterate has received less attention in the past couple of years. If anything, the present paper only provides further evidence (through the negative result) that the individual iterates are ill-behaved in general and it is better to average the iterates instead. I would consider this negative result as an interesting addition to the stochastic-optimization literature, had it been presented in a completely different narrative (e.g., augmenting the discussion in "Stochastic gradient descent for non-smooth optimization: Convergence results and optimal averaging schemes" by Shamir and Zhang, 2013).
*** Exponentially decaying (or "constant-and-cut", as they are called here) schedules have actually been studied before in the paper "Beyond the Regret Minimization Barrier: Optimal Algorithms for Stochastic Strongly-Convex Optimization" by Hazan and Kale (JMLR 2014). This significantly weakens the main intended selling point of the paper which was being the first-ever study of such learning-rate schedules. The results in said paper are of a somewhat different nature, but they have arguably as little to do with deep learning as the results of the present paper has. Notably, both the present paper and the cited work rely on *strong convexity* of the objective (through assuming prior knowledge of the condition number), so I would expect that none of these results would explain anything in the context of deep learning.
On the technical side, the proofs appear to be correct but presented somewhat sloppily, with most of the notation appearing without proper definitions. For instance, the proof of Theorem 2 seems to import notation from the proof of Theorem 1, although without explicitly mentioning that the covariance matrix is assumed to be diagonal(ized). The proof of Theorem 3 then seems to again replace this previously (non-)established notation by another one (e.g., v becomes err and \eta becomes \gamma). The proofs also involve long sequences of inequalities without explanation, and only bound the variances (w_k-w^*_k)^2 without mentioning how this quantity is related to the excess risk. (The relation is well-known but not obvious at all for first-time readers of such proofs.)
One technical limitation of the results is that they assume a simple additive-noise model for the gradients, which the authors conveniently call "fairly natural" and incorrectly claim to hold for linear regression with well-specified models (footnote 8). In reality, the gradient noise in this setting also depends on the current iterate w_t, which makes analysis significantly harder. (To see the difference, just compare the complexity of the proofs of Lemma 1 and Theorem 2 that correspond to these different settings in "Harder, Better, Faster, Stronger Convergence Rates for Least-Squares Regression" by Dieleveut, Flammarion and Bach, 2017.)
Overall, I don't think that this paper is fit for publication in its present form. Once again, I would suggest that in a future version, the authors focus solely on discussing the actual results without attempting to draw disproportionate conclusions from them.
Detailed comments
=================
- pp.1, abstract: the first half of the abstract is completely irrelevant to the rest of the paper, so I'd suggest removing it.
- pp.1, "learning-rate schedules for SGD is a rather enigmatic topic"---"enigmatic" feels like a bit of a strong adjective here, given that there are many aspects of learning-rate tuning that are actually pretty well-understood.
- pp.2: The second paragraph on page 2 is again irrelevant to the actual technical content of the paper.
- pp.2, "all the works in stochastic approximation try to bound the error of each iterate of SGD"---This is simply not true, given the growing literature concerning the behavior of the *averaged iterates*.
- pp.4, first display: poor typesetting.
- pp.6, Eqs. 1--3: ditto.
- pp.8, last paragraph: Singling out the particular setting of gradient-norm minimization feels arbitrary and poorly justified.
- pp.11: the first and second displays should be switched for better readability (otherwise the first one comes without explanation). Also note that this form is not just due to the algorithm design, but also to the simplified noise model.
- pp.12, App B:
*** It appears that you forgot to mention here that you're working in the coordinate system induced by the eigenvectors, and also forgot to define the eigenvalues, etc.
*** The indices (1) and (k) are incoherent in the first display.
*** Although you promise you'll prove the inequality in the second display, you eventually prove something else.
*** It is not very clear on first sight that \ell^* actually exists and falls within the scope of \ell---you should explain that it exists due to the choice of the number of phases. (Which, by the way, should be rounded up to allow this property?)
*** The sequence of inequalities in the last display seems correct but unnecessarily hard to verify due to the lack of explanations. |
ICLR | Title
Rethinking learning rate schedules for stochastic optimization
Abstract
There is a stark disparity between the learning rate schedules used in the practice of large scale machine learning and what are considered admissible learning rate schedules prescribed in the theory of stochastic approximation. Recent results, such as in the ’super-convergence’ methods which use oscillating learning rates, serve to emphasize this point even more. One plausible explanation is that non-convex neural network training procedures are better suited to the use of fundamentally different learning rate schedules, such as the “cut the learning rate every constant number of epochs” method (which more closely resembles an exponentially decaying learning rate schedule); note that this widely used schedule is in stark contrast to the polynomial decay schemes prescribed in the stochastic approximation literature, which are indeed shown to be (worst case) optimal for classes of convex optimization problems. The main contribution of this work shows that the picture is far more nuanced, where we do not even need to move to non-convex optimization to show other learning rate schemes can be far more effective. In fact, even for the simple case of stochastic linear regression with a fixed time horizon, the rate achieved by any polynomial decay scheme is suboptimal compared to the statistical minimax rate (by a factor of condition number); in contrast the “cut the learning rate every constant number of epochs” provides an exponential improvement (depending only logarithmically on the condition number) compared to any polynomial decay scheme. Finally, it is important to ask if our theoretical insights are somehow fundamentally tied to quadratic loss minimization (where we have circumvented minimax lower bounds for more general convex optimization problems)? Here, we conjecture that recent results which make the gradient norm small at a near optimal rate, for both convex and non-convex optimization, may also provide more insights into learning rate schedules used in practice.
1 INTRODUCTION
The recent advances in machine learning and deep learning rely almost exclusively on stochastic optimization methods, primarily SGD and its variants. Here, these large scale stochastic optimization methods are manually (and often painstakingly) tuned to the problem at hand (often with parallelized hyper-parameter searches), where there is, as of yet, no class of “universal methods” which uniformly work well on a wide range of problems with little to no hyper-parameter tuning. This is in stark contrast to non-stochastic numerical optimization methods, where it is not an overstatement to argue that the l-BFGS and non-linear conjugate gradient methods (with no hyper-parameter tuning whatsoever) have provided nearly unbeatable procedures (for a number of decades) on nearly every unconstrained convex and non-convex problem. In the land of stochastic optimization, there are two dominant (and somewhat compatible approaches): those methods which often manually tune learning rate schedules to achieve the best performance (Krizhevsky et al., 2012; Sutskever et al., 2013; Kingma & Ba, 2014; Kidambi et al., 2018) and those methods which rely on various forms of approximate preconditioning (Duchi et al., 2011; Tieleman & Hinton, 2012; Kingma & Ba, 2014). This works examines the former class of methods, where we seek a more refined understanding of the issues of learning rate scheduling, through both theoretical analysis and empirical studies.
Learning rate schedules for SGD is a rather enigmatic topic since there is a stark disparity between what is considered admissible in theory and what is employed in practice to achieve the best re-
sults. Let us elaborate on this distinction more clearly. In theory, a vast majority of works starting with Robbins & Monro (1951); Polyak & Juditsky (1992) consider learning rates that have the form η_t = a/(b + t^α) for some a, b ≥ 0 and 1/2 < α ≤ 1 – we call these polynomial decay schemes. The key property enjoyed by these polynomial decay schemes is that they are not summable but are square summable. A number of works obtain bounds on the asymptotic convergence rates of such schemes. Note that the focus of these works is to design learning rate schemes that work well for all large values of t. In contrast, practitioners are interested in achieving the best performance given a computational budget or, equivalently, a fixed time horizon T, e.g., 100 passes on the training dataset with a batch size of 128.
The corresponding practically best performing learning rate scheme is often one where the step size is cut by a constant factor once every few epochs, or, equivalently, when no progress is made on a validation set (Krizhevsky et al., 2012; He et al., 2016b) (often called a dev set based decay scheme). Such schemes are widely popular to the extent that they are available as schemes in deep learning libraries such as PyTorch 1 and several such useful tools of the trade are taught on popular deep learning courses 2. Furthermore, what is (often) puzzling (from a theory perspective) is the emphasis that is laid on “babysitting” the learning rates 3 to achieve the best performance. Why do practitioners use constant and cut learning rate schemes while most of the theory work routinely works with polynomial decaying schemes? Of course, implicit to this question is the view that both of these schemes are not equivalent. Indeed if both of these were equivalent, one could parameterize the learning rate as ab+tα and do hyperparameter search over a, b and α. In practice, this simply does not give results comparable to the constant and cut schemes.4 One potential explanation for this could be that, in the context of neural network training, local minima found by constant and cut schemes are of much better quality than those found by polynomial decay schemes, while for convex problems, polynomial decay schemes are indeed optimal.
The primary contribution of this work is to show that this is simply not the case. We concretely show how minimax optimal theoretical learning rates (i.e. polynomial decay schemes for wide classes of convex optimization problems) may be misleading (and sub-optimal for locally quadratic problems), and that the story in practice is more nuanced. There are important issues at play with regard to this suboptimality. First, even for the simple case of stochastic linear regression, with a fixed time horizon, the rate achieved by any polynomial decay scheme (i.e., any choice of a, b and α) is suboptimal compared to the statistical minimax rate (i.e., the information theoretically best possible rate achievable by any algorithm) by a factor of the condition number κ (see Section 3 for definitions), while there exist constant and cut schemes that are suboptimal only by a factor of log κ.
Second, this work shows that a factor of κ suboptimality is unavoidable if we wish to bound the error of each iterate of SGD. In other words, we show that the convergence rate of the lim sup of the error, as t→∞, has to be suboptimal by a factor of Ω̃(κ) compared to the statistical minimax rate, for any learning rate sequence (polynomial or not). In fact, at least an Ω̃(1/κ) fraction of the iterates have this suboptimality. With this result, things become quite clear – all the works in stochastic approximation try to bound the error of each iterate of SGD asymptotically (or the lim sup of the error, in other words). Since this necessarily has to be suboptimal by a factor of Ω̃(κ) compared to the statistical minimax rates, the suboptimality of polynomial decay rates is not an issue. However, with a fixed time horizon, there exist learning rate schemes with much better convergence rates, while polynomial decay schemes fail to get better rates in this simpler setting (of known time horizon).
Thirdly, the work shows that, for stochastic linear regression, if we consider lim inf (rather than lim sup) of the error, it is possible to design schemes that are suboptimal by only a factor of log κ compared to the minimax rates. Variants of the constant and cut schemes achieve this guarantee.
In summary, the contributions of this paper show how widely used practical learning rate schedules are, in fact, highly effective even in the convex case. In particular, our theory and empirical results demonstrate this by showing that:
1https://pytorch.org/docs/stable/optim.html#torch.optim.lr_scheduler. ReduceLROnPlateau
2http://cs231n.github.io/ 3http://cs231n.github.io/neural-networks-3/ 4In fact, this work shows an instance where there is a significant (provable) difference between the perfor-
mance of these two schemes.
• For a fixed time horizon, constant and cut schemes are provably, significantly better than polynomial decay schemes.
• There is a fundamental difference between the fixed time horizon and infinite time horizon settings.
• The above difference can be mitigated by considering the lim inf of the error instead of the lim sup.
• In addition to our theoretical contributions, we empirically verify the above claims for neural network training on cifar-10.
Extending results on the performance of constant and cut schemes to more general convex optimization problems, beyond stochastic linear regression, is an important future direction. It is striking, however, that the suboptimality of polynomial decay schemes, even for the simple case of stochastic linear regression, went unnoticed after decades of research on stochastic approximation.
In summary, the results of this paper show that, even for stochastic linear regression, the constant and cut learning rate schedules popular in practice are provably better than the polynomial decay schemes popular in theory, and that there is a need to rethink learning rate schemes and convergence guarantees for stochastic approximation. Our results also suggest that current approaches to hyperparameter tuning of learning rate schedules might not be well directed, and further suggest potential ways of improving them.
Paper organization: The paper is organized as follows. We review related work in Section 2. Section 3 describes the notation and problem setup. Section 4 presents our results on the suboptimality of both polynomial decay schemes and constant and cut schemes. Section 5 presents results on infinite horizon setting. Section 6 presents experimental results and Section 7 concludes the paper.
2 RELATED WORK
We will split related work into two parts, one based on theory and the other based on practice.
Related efforts in theory: SGD and the problem of stochastic approximation was introduced in the seminal work of Robbins & Monro (1951); this work also elaborates on stepsize schemes that are satisfied by asymptotically convergent stochastic gradient methods: we refer to these schemes as “convergent” stepsize sequences. The (asymptotic) statistical optimality of iterate averaged SGD with larger stepsize schemes ofO(1/nα) with α ∈ (0.5, 1) was proven in the seminal works of Ruppert (1988); Polyak & Juditsky (1992). The notions of convergent learning rate schemes in stochastic approximation literature has been studied in great detail (Ljung et al., 1992; Kushner & Yin, 2003; Bharath & Borkar, 1999; Lai, 2003). Nearly all of the aforementioned works rely on function value sub-optimality to measure convergence and rely on the notion of asymptotic convergence (i.e. in the limit of the number of updates of SGD tending to infinity) to derive related “convergent stepsize schedules”. Along this line of thought, there are several efforts that prove (minimax) optimality of the aforementioned rates (in a worst case sense and not per problem sense) e.g., Nemirovsky & Yudin (1983); Raginsky & Rakhlin (2011); Agarwal et al. (2012).
An alternative viewpoint is to consider gradient norm as a means to measure the progress of an algorithm. Along this line of thought are several works including the stochastic process viewpoint considered by Polyak & Juditsky (1992) and more recently, the work of Nesterov (2012) (working with deterministic (exact) gradients). The work of Allen-Zhu (2018) considers questions relating to making the gradient norm small when working with stochastic gradients, and provides an improved rate. We return to this criterion in Section 7.
In terms of oracle models, note that both this paper, as well as other results (Lacoste-Julien et al., 2012; Rakhlin et al., 2012; Bubeck, 2014), work in an oracle model that assumes bounded variance of stochastic gradients or similar assumptions. There is an alternative oracle model for analyzing SGD as followed in papers includingBach & Moulines (2013); Bach (2014); Jain et al. (2017) which is arguably more reflective of SGD’s behavior in practice. For more details, refer to Jain et al. (2017). It is an important direction to prove the results of this paper working in the alternative practically more applicable oracle model.
Efforts in practice: As highlighted in the introduction, practical efforts in stochastic optimization have diverged from the classical theory of stochastic approximation, with several deep learning
libraries like pytorch⁵ providing unconventional alternatives such as cosine/sawtooth/dev set decay schemes, or even exponentially decaying learning rate schemes. In fact, a natural scheme used in training convolutional neural networks for vision is one where the learning rate is cut by a constant factor after a certain number of epochs. Such schemes are essentially discretized variants of exponentially decaying learning rate schedules. We note that there are other recently proposed learning rate schedules, such as sgd with warm restarts (Loshchilov & Hutter, 2016) and oscillating learning rates (Smith & Topin, 2017), that are unconventional and have attracted a fair bit of attention. Furthermore, exponential learning rates appear to be considered in more recent NLP papers (see, e.g., Krishnamurthy et al. (2017))⁶.
3 PROBLEM SETUP
Notation: We represent scalars with normal font a, b, L etc., vectors with boldface lowercase characters a, b etc., and matrices with boldface uppercase characters A, B etc. We represent the positive semidefinite (PSD) ordering between two matrices using ⪰. The symbol ≳ indicates that the inequality holds up to a universal constant.
Our theoretical results focus on the following additive noise stochastic linear regression problem. We present the setup and associated notation in this section. We wish to solve:
min_{w ∈ R^d} f(w), where f(w) := (1/2) w^T H w − w^T b,
for some positive definite matrix H and vector b.⁷ We denote the smallest and largest eigenvalues of H by µ > 0 and L > 0, and κ := L/µ denotes the condition number of H. We have access to a stochastic gradient oracle which gives us ∇̂f(w) = ∇f(w) + e, where e is a random vector satisfying⁸ E[e] = 0 and E[ee^T] = σ²H.
Given an initial point w0 and step size sequence ηt, the SGD algorithm proceeds with the update
w_t = w_{t−1} − η_t ∇̂f(w_{t−1}) = w_{t−1} − η_t (∇f(w_{t−1}) + e_t),
where the e_t are independent across t and satisfy the above mean and covariance conditions.
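For concreteness, the update above can be simulated directly. The sketch below assumes a diagonal H (so H^{1/2} acts elementwise) and generates oracle noise with covariance σ²H; the helper names run_sgd and excess_risk are ours, not from the paper.

```python
import numpy as np

def run_sgd(H_diag, b, w0, stepsizes, sigma, rng):
    """Iterate w_t = w_{t-1} - eta_t*(grad f(w_{t-1}) + e_t) for f(w) = 0.5*w'Hw - w'b,
    with noise e_t = sigma * H^{1/2} z_t so that E[e e'] = sigma^2 * H."""
    w = w0.astype(float).copy()
    sqrt_H = np.sqrt(H_diag)
    for eta in stepsizes:
        grad = H_diag * w - b
        noise = sigma * sqrt_H * rng.standard_normal(w.shape)
        w = w - eta * (grad + noise)
    return w

def excess_risk(w, H_diag, b):
    """f(w) - f(w*), where w* = H^{-1} b for the diagonal H used here."""
    w_star = b / H_diag
    return 0.5 * np.sum(H_diag * (w - w_star) ** 2)
```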
Let w* := argmin_{w ∈ R^d} f(w). The suboptimality of a point w is given by f(w) − f(w*). It is well known that, given t accesses to the stochastic gradient oracle above, any algorithm that uses these stochastic gradients and outputs ŵ_t has suboptimality lower bounded by σ²d/t. More concretely (Van der Vaart, 2000), we have
lim_{t→∞} (E[f(ŵ_t)] − f(w*)) / (σ²d/t) ≥ 1.
Moreover, there exist schemes that achieve this rate of (1 + o(1)) · σ²d/t, e.g., constant step size SGD with averaging (Polyak & Juditsky, 1992). This rate of σ²d/t is called the statistical minimax rate.
4 COMPARISON BETWEEN POLYNOMIAL DECAY SCHEMES VS CONSTANT AND CUT SCHEMES
In this section, we will show that polynomial decay schemes are suboptimal compared to the statistical minimax rate by at least a factor of κ while constant and cut schemes are suboptimal by at most a factor of log κ.
⁵ See https://pytorch.org/docs/stable/optim.html for a complete list of alternatives.
⁶ Refer to their JSON file https://github.com/allenai/allennlp/blob/master/training_config/wikitables_parser.jsonnet
⁷ Any linear least squares objective (1/2n) Σ_{i=1}^n (x_i^T w − y_i)² can be written as above with H := (1/n) Σ_i x_i x_i^T and b := (1/n) Σ_i y_i x_i.
⁸ While this might seem very special, it is indeed a fairly natural scenario. For instance, in stochastic linear regression with independent additive noise, i.e., y_t = x_t^T w* + ε_t, where ε_t is a random variable independent of x_t with E[ε_t] = 0 and E[ε_t²] = σ², the noise in the gradient has this property. On the other hand, the results in this paper can also be generalized to the setting where E[ee^T] = V for some arbitrary matrix V. However, an error covariance of σ²H significantly simplifies the exposition.
4.1 SUBOPTIMALITY OF POLYNOMIAL DECAY SCHEMES
Our first result shows that there exist problem instances where all polynomial decay schemes, i.e., those of the form a/(b + t^α), for any choice of a, b and α, are suboptimal by at least a factor of Ω(κ) compared to the statistical minimax rate.
Theorem 1. There exists a problem instance such that the initial function value satisfies f(w_0) ≤ σ²d, and for any fixed time T satisfying T ≥ κ², for all a, b ≥ 0 and 0.5 ≤ α ≤ 1, and for the learning rate scheme η_t = a/(b + t^α), we have E[f(w_T)] − f(w*) ≥ (κ/32) · σ²d/T.
4.2 SUBOPTIMALITY OF CONSTANT AND CUT SCHEME
Our next result shows that there exist constant and cut schemes that achieve the statistical minimax rate up to a multiplicative factor of only log κ · log² T.
Theorem 2. For any problem and fixed time horizon T > κ log(κ), there exists a constant and cut learning rate scheme that achieves E[f(w_T)] − f(w*) ≤ (f(w_0) − f(w*))/T³ + 2 log κ · log² T · σ²d/T.
We will now consider an exponential decay scheme (in contrast to the polynomial ones from Section 4.1), which is a smoother version of the constant and cut scheme. We show that the above result for the constant and cut scheme can also be extended to the exponential decay scheme.
Theorem 3. For any problem and fixed horizon T, there exist constants a and b such that the learning rate scheme η_t = b · exp(−at) achieves E[f(w_T)] − f(w*) ≤ (f(w_0) − f(w*))/T^{2−(1/100)} + log κ · log T · σ²d/T.
The above results show that constant and cut as well as exponential decay schemes, which depend on the time horizon, are much better than polynomial decay schemes. Between these, exponential decay schemes are smoother versions of constant and cut schemes, so one would hope that they might perform better than constant and cut schemes – we do see a log T difference in our bounds. One unsatisfying aspect of the above results is that the rate behaves as (log T)/T, which is asymptotically worse than the statistical rate of 1/T. It turns out that it is indeed possible to improve the rate to 1/T using a more sophisticated scheme. The main idea is to use constant and polynomial schemes in the beginning and then switch to a constant and cut (or exponential decay) scheme later; a sketch of the horizon-dependent schedules appears after Theorem 4 below. To the best of our knowledge, such schemes have never been considered in the stochastic optimization literature before. Using this learning rate sequence successively for increasing time horizons would lead to oscillating learning rates. We leave a complete analysis of oscillating learning rates (for a moving time horizon) to future work.
Theorem 4. Fix κ ≥ 2. For any problem and fixed time horizon with T/log T > 5κ, there exists a learning rate scheme that achieves E[f(w_T)] − f(w*) ≤ (f(w_0) − f(w*))/T³ + 50 log² κ · σ²d/T.
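The following sketch makes the horizon-dependent schedules concrete. It is illustrative only: the function names are ours, phase lengths are rounded, and the constants follow the schemes used in the proofs of Theorems 2 and 3 (the scheme of Theorem 4 prepends a constant 1/L phase and a 1/(µ(κ + t/2)) phase before the constant and cut phase).

```python
import numpy as np

def constant_and_cut(T, kappa, mu):
    """Schedule from the proof of Theorem 2: log(kappa) phases of equal length;
    in phase l the rate is log(T) * log(kappa) / (2**l * mu * T)."""
    n_phases = max(int(np.ceil(np.log(kappa))), 1)
    phase_len = int(np.ceil(T / n_phases))
    etas = []
    for l in range(1, n_phases + 1):
        etas += [np.log(T) * np.log(kappa) / (2 ** l * mu * T)] * phase_len
    return np.array(etas[:T])

def exponential_decay(T, kappa, mu):
    """Schedule from the proof of Theorem 3: gamma_t = gamma_0 * c**(t-1) with
    T_e = T / log(kappa), gamma_0 = log(T) / (mu * T_e) and c = 1 - 1/T_e."""
    T_e = T / np.log(kappa)
    gamma0 = np.log(T) / (mu * T_e)
    c = 1.0 - 1.0 / T_e
    return gamma0 * c ** np.arange(T)
```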
5 INFINITE HORIZON SETTING
In this section we show a fundamental limitation of the SGD algorithm. First we will prove that the SGD algorithm, for any learning rate sequence, needs to query a point with suboptimality more than Ω(κ/log κ) · σ²d/T for infinitely many time steps T.
Theorem 5. There exists a universal constant C > 0 such that for any SGD algorithm with η_t ≤ 1/(2κ) for all t⁹, we have lim sup_{T→∞} (E[f(w_T)] − f(w*)) / (σ²d/T) ≥ κ / (C log(κ+1)).
Next we will show that, in some sense, the “fraction” of query points that have value more than τσ²/T is at least Ω(1/τ) when τ is smaller than the threshold in Theorem 5.
Theorem 6. There exist universal constants C₁, C₂ > 0 such that for any τ ≤ κ/(C·C₁ log(κ+1)), where C is the constant in Theorem 5, for any SGD algorithm and any number of iterations T > 0, there exists T′ ≥ T such that for any T̃ ∈ [T′, (1 + 1/(C₂τ))T′] we have (E[f(w_T̃)] − f(w*)) / (σ²d/T̃) ≥ τ.
Finally, we now show that there are constant and cut or exponentially decaying schemes that achieve the statistical minimax rate up to a factor of log κ log2 T in the lim inf sense.
⁹ A learning rate larger than 2/κ will make the algorithm diverge.
Theorem 7. There exists an absolute constant C and a constant and cut learning rate scheme that obtains lim inf_{T→∞} (E[f(w_T)] − f(w*)) / (σ²d log² T / T) ≤ C log κ.
Similar results can be obtained for the exponential decay scheme of Theorems 3 and 4 with moving time horizon. However the resultant learning rates might have oscillatory behavior. This might partly explain the benefits of oscillating learning rates observed in practice (Smith & Topin, 2017).
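As a rough illustration of how a moving horizon induces oscillations, the sketch below restarts the fixed-horizon exponential schedule each time the horizon is reached and grown. The growth factor is a free choice here (the proof of Theorem 7 grows the horizon by a factor of κ), and the helper reuses the exponential_decay function from the earlier sketch.

```python
import numpy as np

def moving_horizon_schedule(total_steps, kappa, mu, first_horizon, growth=2):
    """Restart the fixed-horizon exponential schedule whenever the horizon is reached,
    growing the horizon by `growth` each time; the restarts make the rate jump back up,
    producing the oscillating (sawtooth-like) behaviour discussed above."""
    etas = []
    T = first_horizon
    while len(etas) < total_steps:
        etas.extend(exponential_decay(T, kappa, mu))  # sketch given after Theorem 4
        T = int(growth * T)
    return np.array(etas[:total_steps])
```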
6 EXPERIMENTAL RESULTS
We present experimental validation of our claims through controlled synthetic experiments on a two-dimensional quadratic objective and on a real-world non-convex optimization problem of training a residual network on the cifar-10 dataset, to illustrate the shortcomings of the traditional stochastic approximation perspective (and the advantages of non-convergent exponentially decaying and oscillating learning rate schemes) for a realistic problem encountered in practice. Complete details of the experimental setup are given in Appendix D.
6.1 SYNTHETIC EXPERIMENTS: TWO-DIMENSIONAL QUADRATIC OBJECTIVE
We consider the problem of optimizing a two-dimensional quadratic objective, similar in spirit as what is considered in the theoretical results of this paper. In particular, for a two-dimensional quadratic, we have two eigenvalues, one of magnitude κ and the other being 1. We vary our condition number κ ∈ {50, 100, 200} and use a total of 200κ iterations for optimization. The results expressed in this section are obtained by averaging over two random seeds. The learning rate schemes we search over are:
η_t = η_0 / (1 + b·t)    (1)
η_t = η_0 / (1 + b·√t)    (2)
η_t = η_0 · exp(−b·t).    (3)
For the schemes detailed above, there are two parameters that need to be searched over: (i) the starting learning rate η0 and, (ii) the decay factor b. We perform a grid search over both these parameters and choose ones that yield the best possible final error at a given end time (i.e. 200κ). We also make sure to extend the grid should a best performing grid search parameter fall at the edge of the grid so that all presented results lie in the interior of our final grid searched parameters.
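A minimal version of this grid search, reusing the run_sgd and excess_risk helpers from the earlier sketch, might look as follows. The grid ranges, the zero optimum (b = 0) and the initialization are illustrative stand-ins rather than the exact settings of Appendix D.

```python
import numpy as np

def schedule(name, eta0, decay, T):
    t = np.arange(1, T + 1)
    if name == "1/t":       return eta0 / (1.0 + decay * t)            # scheme (1)
    if name == "1/sqrt(t)": return eta0 / (1.0 + decay * np.sqrt(t))   # scheme (2)
    if name == "exp":       return eta0 * np.exp(-decay * t)           # scheme (3)
    raise ValueError(name)

def grid_search(name, kappa, sigma=1.0, seeds=(0, 1)):
    """Pick (eta0, decay) minimizing the mean final excess risk over the seeds."""
    T = 200 * kappa
    H_diag, b_vec = np.array([float(kappa), 1.0]), np.zeros(2)
    best = (np.inf, None)
    for eta0 in np.logspace(np.log10(1 / (20 * kappa)), np.log10(1000 / kappa), 10):
        for decay in np.logspace(-4, 1, 10):
            etas = schedule(name, eta0, decay, T)
            risks = []
            for s in seeds:
                rng = np.random.default_rng(s)
                w = run_sgd(H_diag, b_vec, np.ones(2), etas, sigma, rng)
                risks.append(excess_risk(w, H_diag, b_vec))
            best = min(best, (float(np.mean(risks)), (eta0, decay)))
    return best
```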
We will present results for the following experiments: (i) behavior of the error of the final iterate of the SGD method with the three learning rate schemes (1),(2), and (3) as we vary the condition number, and (ii) how the exponentially decaying learning rate scheme (3) optimized for a shorter time horizon behaves for a longer horizon.
For the variation of the final iterate’s excess risk with respect to the condition number (Figure 1), we note that polynomially decaying schemes have excess risk that scales linearly with the condition number, corroborating Theorem 1. In contrast, the exponentially decaying learning rate scheme admits excess risk that appears nearly constant, corroborating Theorem 3. Finally, we note that the learning rate schedule that offers the best possible error in 50κ or 100κ steps does not offer the best error at 200κ steps (Table 1).
6.2 NON-CONVEX OPTIMIZATION: TRAINING A RESIDUAL NET ON CIFAR-10
We consider here the task of training a 44-layer deep residual network (He et al., 2016b) with pre-activation blocks (He et al., 2016a) (dubbed preresnet-44) for classifying images in the cifar-10 dataset. The code for implementing the network employed in this paper can be found here¹⁰. For all the experiments, we use Nesterov’s accelerated gradient method (Nesterov, 1983) implemented in pytorch¹¹ with momentum set to 0.9, batch size set to 128, total number of training epochs set to 100, and ℓ₂ regularization set to 0.0005.
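A sketch of this training configuration in pytorch is given below; build_preresnet44, train_loader, the starting learning rate and the decay factor gamma are placeholders for the grid-searched values, not the exact settings used in the paper.

```python
import torch

model = build_preresnet44()          # assumed helper returning the 44-layer preresnet
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9,
                            nesterov=True, weight_decay=5e-4)
# Per-epoch multiplicative decay; gamma is a placeholder for the grid-searched value
# chosen so that the rate reaches the desired level around epoch 80 (see Appendix D).
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.94)

for epoch in range(100):
    for images, labels in train_loader:            # assumed DataLoader, batch size 128
        optimizer.zero_grad()
        loss = torch.nn.functional.cross_entropy(model(images), labels)
        loss.backward()
        optimizer.step()
    scheduler.step()
```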
Our experiments are based on grid searching for the best learning rate decay scheme within the parametric families of learning rate schemes described above ((1)-(3)); all grid searches are performed on a separate validation set (obtained by setting aside one-tenth of the training dataset, i.e., 5000 images), with models trained on the remaining 45000 images. For presenting the final numbers in the plots/tables, we employ the best hyperparameters from the validation stage, train on the entire 50,000 images, and average results over runs with 10 different random seeds. The parameters for the grid searches and related details are presented in Appendix D. Furthermore, just as with the synthetic experiments, we always extend the grid so that the best performing grid search parameter lies in the interior of our grid search.
Comparison between different schemes: Figure 2 and Table 2 present a comparison of the performance of the three schemes (1)-(3). They clearly demonstrate that the best exponential scheme outperforms the best polynomial schemes.
Hyperparameter selection using truncated runs: Figure 3 and Tables 3 and 4 present a comparison of the performance of three exponential decay schemes each of which has the best performance at 33, 66 and 100 epochs respectively. The key point to note is that best performing hyperparameters at 33 and 66 epochs are not the best performing at 100 epochs (which is made stark from the perspective of the validation error). This demonstrates that selecting hyper parameters using truncated runs, which has been proposed in some recent efforts such as hyperband (Li et al., 2017), might necessitate rethinking.
10https://github.com/D-X-Y/ResNeXt-DenseNet 11https://github.com/pytorch
7 CONCLUSIONS AND DISCUSSION
The main contribution of this work shows that the picture of learning rate scheduling is far more nuanced than suggested by prior theoretical results, where we do not even need to move to nonconvex optimization to show other learning rate schemes can be far more effective than the standard polynomially decaying rates considered in theory.
Is quadratic loss minimization special? One may ask if there is something particularly special about why the minimax rates are different for quadratic loss minimization as opposed to more general convex (and non-convex) optimization problems? Ideally, we would hope that our theoretical insights (and improvements) can be formally established in more general cases. Here, an alternative viewpoint is to consider gradient norm as a means to measure the progress of an algorithm. The recent work of Allen-Zhu (2018) shows marked improvements for making the gradient norm small (when working with stochastic gradients) for both convex and non-convex, in comparison to prior results. In particular, for the strongly convex case, Allen-Zhu (2018) provides results which have only a logarithmic dependency on κ, an exponential improvement over what is implied by standard analyses for the gradient norm (Lacoste-Julien et al., 2012; Rakhlin et al., 2012; Bubeck, 2014); Allen-Zhu (2018) also provides improvements for the smooth and non-convex cases. Thus, for the case of making the gradient norm small, there does not appear to be a notable discrepancy between the minimax rate of quadratic loss minimization in comparison to more general strongly convex (or smooth) convex optimization problems. Interestingly, the algorithm of Allen-Zhu (2018) provides a recursive regularization procedure that obtains an SGD procedure, where the doubling regularization can be viewed as being analogous to an exponentially decaying learning rate schedule. Further work in this direction may be promising in providing improved algorithms.
A PROOFS OF RESULTS IN SECTION 4.1
Proof of Theorem 1. The problem instance is simple. Let H = diag(κ, ..., κ, 1, ..., 1), where the first d/2 diagonal entries are equal to κ, the remaining d/2 diagonal entries are equal to 1, and all off-diagonal entries are equal to zero. Let us denote by v_t^(i) := E[(w_t^(i) − (w*)^(i))²] the variance in the i-th direction at time step t. Let the initialization be such that v_0^(i) = σ²/κ for i = 1, 2, ..., d/2 and v_0^(i) = σ² for i = d/2+1, ..., d. This means that the variances for all directions with eigenvalue κ remain equal as t progresses, and similarly for all directions with eigenvalue 1. We have
v_T^(1) := E[(w_T^(1) − (w*)^(1))²] = Π_{j=1}^T (1 − η_j κ)² v_0^(1) + κσ² Σ_{j=1}^T η_j² Π_{i=j+1}^T (1 − η_i κ)², and
v_T^(d) := E[(w_T^(d) − (w*)^(d))²] = Π_{j=1}^T (1 − η_j)² v_0^(d) + σ² Σ_{j=1}^T η_j² Π_{i=j+1}^T (1 − η_i)².
We consider a recursion for v_t^(i) with eigenvalue λ_i (1 or κ). By the design of the algorithm, we know
v_{t+1}^(i) = (1 − η_t λ_i)² v_t^(i) + λ_i σ² η_t².
Let s(η, λ) = λσ²η² / (1 − (1 − ηλ)²) be the solution of the stationary point equation x = (1 − ηλ)² x + λσ²η². Intuitively, if we keep using the same learning rate η, then v_t^(i) is going to converge to s(η, λ_i). Also note that s(η, λ) ≈ σ²η/2 when ηλ ≪ 1. We first prove the following claim showing that eventually the variance in direction i is going to be at least s(η_T, λ_i).
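The fixed point s(η, λ) is easy to check numerically; the following sketch (with arbitrary illustrative values) iterates the recursion at a constant rate and compares the result against s(η, λ).

```python
import numpy as np

def stationary_variance(eta, lam, sigma):
    """Fixed point s(eta, lam) of x = (1 - eta*lam)**2 * x + lam * sigma**2 * eta**2."""
    return lam * sigma ** 2 * eta ** 2 / (1.0 - (1.0 - eta * lam) ** 2)

def iterate_variance(v0, etas, lam, sigma):
    """Run the per-direction variance recursion for the given step-size sequence."""
    v = v0
    for eta in etas:
        v = (1.0 - eta * lam) ** 2 * v + lam * sigma ** 2 * eta ** 2
    return v

kappa, sigma = 100.0, 1.0
etas = np.full(10_000, 1.0 / (2.0 * kappa))        # a constant rate, well below 2/kappa
print(iterate_variance(sigma ** 2 / kappa, etas, kappa, sigma),
      stationary_variance(etas[0], kappa, sigma))  # the two values should nearly agree
```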
Claim 1. Suppose s(η_t, λ_i) ≤ v_0^(i); then v_t^(i) ≥ s(η_t, λ_i).
Proof. We can rewrite the recursion as
v_{t+1}^(i) − s(η_t, λ_i) = (1 − η_t λ_i)² (v_t^(i) − s(η_t, λ_i)).
In this form, it is easy to see that the iteration is a contraction towards s(η_t, λ_i). Further, v_{t+1}^(i) − s(η_t, λ_i) and v_t^(i) − s(η_t, λ_i) have the same sign. In particular, let t_0 be the first time such that s(η_t, λ_i) ≤ v_0^(i) (note that η_t is monotone and so is s(η_t, λ_i)); it is easy to see that v_t^(i) ≥ v_0^(i) when t ≤ t_0. Therefore we know v_{t_0}^(i) ≥ s(η_{t_0}, λ_i); by the recursion this implies v_{t_0+1}^(i) ≥ s(η_{t_0}, λ_i) ≥ s(η_{t_0+1}, λ_i). The claim then follows from a simple induction.
If s(η_T, λ_i) ≥ v_0^(i) for i = 1 or i = d, then the error is at least σ²d/2 ≥ κσ²d/T and we are done. Therefore we must have s(η_T, κ) ≤ v_0^(1) = σ²/κ, and by Claim 1 we know v_T^(1) ≥ s(η_T, κ) ≥ σ²η_T/2. The function value is at least
E[f(w_T)] ≥ (d/2) · κ v_T^(1) ≥ dκσ²η_T / 4.
To make sure E[f(w_T)] ≤ dκσ²/(32T) we must have η_T ≤ 1/(8T). Next we will show that when this happens, v_T^(d) must be large, so the function value is still large.
We will consider two cases. In the first case, b ≥ T^α. Since 1/(8T) ≥ η_T = a/(b + T^α) ≥ a/(2b), we have a/b ≤ 1/(4T). Therefore v_T^(d) ≥ (1 − a/b)^{2T} v_0^(d) ≥ σ²/2, so the function value is at least E[f(w_T)] ≥ (d/2) v_T^(d) ≥ dσ²/4 ≥ κdσ²/T, and we are done.
In the second case, b < T^α. Since 1/(8T) ≥ η_T = a/(b + T^α) ≥ a/(2T^α), we have a ≤ 0.25 T^{α−1}. The sum of learning rates satisfies
Σ_{i=1}^T η_i ≤ Σ_{i=1}^T a/i^α ≤ Σ_{i=1}^T 0.25 i^{−1} ≈ 0.25 log T.
Here the second inequality uses the fact that T^{α−1} i^{−α} ≤ i^{−1} when i ≤ T. Similarly, we also know Σ_{i=1}^T η_i² ≤ Σ_{i=1}^T 0.25 i^{−2} ≤ π²/24. Using the approximation (1 − η)² ≥ exp(−2η − 4η²) for η < 1/4, we get v_T^(d) ≥ exp(−2 Σ_{i=1}^T η_i − 4 Σ_{i=1}^T η_i²) v_0^(d) ≥ σ²/(5√T), so the function value is at least E[f(w_T)] ≥ (d/2) v_T^(d) ≥ dσ²/(20√T) ≥ κdσ²/(32T). This concludes the second case and proves the theorem.
B PROOFS OF RESULTS IN SECTION 4.2
Proof of Theorem 2. The learning rate scheme is as follows. Divide the total time horizon into log(κ) equal sized phases. In the ℓ-th phase, the learning rate to be used is (log T · log κ)/(2^ℓ · µ · T). Note that the learning rate in the first phase depends on strong convexity and that in the last phase depends on smoothness (since the last phase has ℓ = log κ). Recall the variance in the k-th coordinate can be upper bounded by
v_T^(k) := E[(w_T^(k) − (w*)^(k))²] ≤ Π_{j=1}^T (1 − η_j λ^(k))² v_0^(k) + λ^(k) σ² Σ_{j=1}^T η_j² Π_{i=j+1}^T (1 − η_i λ^(k))²
≤ exp(−2 Σ_{j=1}^T η_j λ^(k)) v_0^(k) + λ^(k) σ² Σ_{j=1}^T η_j² exp(−2 Σ_{i=j+1}^T η_i λ^(k)).
We will show that for every k, we have
v_T^(k) ≤ v_0^(k)/T³ + 2 log κ · log² T / (λ^(k) T) · σ²,
which directly implies the theorem. Now choose any k. Let ℓ* denote the number satisfying 2^{ℓ*}·µ ≤ λ^(k) < 2^{ℓ*+1}·µ. Note that ℓ* depends on k, but we suppress the dependence for notational simplicity.
v_T^(k) ≤ exp(−2 Σ_{j=1}^T η_j λ^(k)) v_0^(k) + λ^(k) σ² Σ_{j=1}^T η_j² exp(−2 Σ_{i=j+1}^T η_i λ^(k))
≤ exp(−2 · (log T log κ / T) · λ^(k) · (T/log κ) · 1/(2^{ℓ*}µ)) · (v_0^(k) + λ^(k)σ² · (T/log κ) · Σ_{ℓ=1}^{ℓ*−1} η_ℓ²)
  + λ^(k)σ² (η_{ℓ*}² · 1/(1 − exp(−η_{ℓ*}λ^(k))) + Σ_{ℓ=ℓ*+1}^{log κ} (T/log κ) · (log κ log T / (2^ℓ µ T))²)
≤ v_0^(k)/T³ + κσ² · log² κ log² T / (µT³) + η_{ℓ*}σ² + σ² · (log κ log² T / T) · Σ_{ℓ=ℓ*+1}^{log κ} 1/(2^ℓ µ)
≤ v_0^(k)/T³ + σ² · (log κ log² T / T) · (1/λ^(k) + 1/(2^{ℓ*}µ))
≤ v_0^(k)/T³ + 2 log κ · log² T / (λ^(k) T) · σ².
This finishes the proof.
Proof of Theorem 3. The learning rate scheme we consider is γ_t = γ_0 · c^{t−1} with γ_0 = log T/(µT_e), T_e = T/log κ, and c = 1 − 1/T_e. Further, just as in the previous lemmas, we consider a specific eigendirection with eigenvalue λ^(k) and write out the progress made along this direction up to some iteration T̂ ≤ T:
err_T̂^(k) = Π_{t=1}^{T̂} (1 − γ_t λ^(k))² err_0^(k) + (λ^(k))² σ² Σ_{τ=1}^{T̂} γ_τ² · Π_{t=τ+1}^{T̂} (1 − γ_t λ^(k))²
≤ exp(−2λ^(k)γ_0 Σ_{t=1}^{T̂} c^{t−1}) err_0^(k) + ((λ^(k))²σ²γ_0²/c²) Σ_{τ=1}^{T̂} c^{2τ} exp(−2λ^(k)γ_0 Σ_{t=τ+1}^{T̂} c^{t−1})
= exp(−(2λ^(k)γ_0/(1−c)) · (1 − c^{T̂})) err_0^(k) + ((λ^(k))²σ²γ_0²/c²) Σ_{τ=1}^{T̂} c^{2τ} exp(−(2λ^(k)γ_0/(1−c)) · (c^τ − c^{T̂}))
= exp((2λ^(k)γ_0/(1−c)) · c^{T̂}) · (exp(−2λ^(k)γ_0/(1−c)) err_0^(k) + ((λ^(k))²σ²γ_0²/c²) Σ_{τ=1}^{T̂} c^{2τ} exp(−(2λ^(k)γ_0/(1−c)) · c^τ)).
Denote c^τ = x and 2λ^(k)γ_0/(1−c) = α. Substituting the values of these quantities, we have α = 2 log T · λ^(k)/µ ≥ 2. Now, the second term is upper bounded using the corresponding integral:
Σ_{x ∈ {c^{T̂}, ..., c², c}} x² exp(−αx) ≤ ∫_{c^{T̂}}^{1} x² exp(−αx) dx ≤ (1/α) · (1 + 2/α + 2/α²) exp(−α).
Substituting this in the previous bound, we have:
err_T̂^(k) ≤ exp(−α(1 − c^{T̂})) · err_0^(k) + exp(−α(1 − c^{T̂})) · ((λ^(k))²σ²γ_0²/(αc²)) · (1 + √(2/α))²
≤ exp(−α(1 − c^{T̂})) · (err_0^(k) + 16 (λ^(k))²σ²γ_0²/α)
≤ exp(−α(1 − c^{T̂})) · (err_0^(k) + 16 (λ^(k))²σ² log T / (µ²T_e² · 2(λ^(k)/µ)))
= exp(−α(1 − c^{T̂})) · (err_0^(k) + 8 λ^(k)σ² log T / (µT_e²)).
Now, setting T̂ = T_e log(Cλ^(k)/µ) and using 1 − a ≤ exp(−a), with C > 1 being some (large) universal constant, we have:
err_T̂^(k) ≤ exp(−2(λ^(k)/µ) log T · (1 − µ/(Cλ^(k)))) · (err_0^(k) + 8 λ^(k)σ² log T / (µT_e²))
≤ (1/T^{2−(1/C)}) · (err_0^(k) + 8 λ^(k)σ² log T / (µT_e²)).    (4)
Now, in order to argue about the progress of the algorithm from step T̂+1 to T, we can view the algorithm as starting from the iterate obtained after the first T̂ steps (thus satisfying the excess risk guarantee in equation 4) and then adding in the variance from running from step T̂+1 to T. For this part, we upper bound the behavior by first assuming that there is no contraction of the bias and then considering the variance introduced by running the algorithm from T̂+1 to T. This can be written as:
err_T^(k) ≤ Π_{t=T̂+1}^{T} (1 − γ_t λ^(k))² err_T̂^(k) + (λ^(k))²σ² Σ_{τ=1}^{T−T̂} γ_{τ+T̂}² Π_{t=τ+1}^{T−T̂} (1 − γ_{t+T̂}λ^(k))²
≤ err_T̂^(k) + (λ^(k))²σ² Σ_{τ=1}^{T−T̂} γ_{τ+T̂}² Π_{t=τ+1}^{T−T̂} (1 − γ_{t+T̂}λ^(k))².
Now, to bound the variance of the process run with a decreasing sequence of learning rates, we first work with a constant learning rate γ and understand its variance:
(λ^(k))²σ² Σ_{τ=1}^{T−T̂} γ² (1 − γλ^(k))^{2(T−T̂−τ)} ≤ γ²σ²(λ^(k))² / ((2 − λ^(k)γ)γλ^(k)) ≤ γσ²λ^(k).
What this implies in particular is that the variance is a monotonic function of the learning rate, and thus the overall variance can be bounded by the variance of the process run with the learning rate γ_T̂:
(λ^(k))²σ² Σ_{τ=1}^{T−T̂} γ_{τ+T̂}² Π_{t=τ+1}^{T−T̂} (1 − γ_{t+T̂}λ^(k))² ≤ (λ^(k))²σ² Σ_{τ=1}^{T−T̂} γ_T̂² (1 − γ_T̂λ^(k))^{2(T−T̂−τ)} ≤ γ_T̂ σ²λ^(k)
≤ (log T/(µT_e)) · (µ/(Cλ^(k))) · σ²λ^(k) ≤ σ² log T log κ / T.
Plugging this into equation 4 and summing over all directions, we have the desired result.
Proof of Theorem 4. The learning rate scheme is as follows.
We first break T into three equal sized parts. Let A = T/3 and B = 2T/3. In the first T/3 steps, we use a constant learning rate of 1/L. In the second T/3 steps, we use a polynomially decaying learning rate η_{A+t} = 1/(µ(κ + t/2)). In the third T/3 steps, we break the steps into log₂(κ) equal sized phases. In the ℓ-th phase, the learning rate to be used is 5 log₂ κ / (2^ℓ · µ · T). Note that the learning rate in the first phase depends on strong convexity and that in the last phase depends on smoothness (since the last phase has ℓ = log κ).
Recall the variance in the k-th coordinate can be upper bounded by
v_T^(k) := E[(w_T^(k) − (w*)^(k))²] ≤ Π_{j=1}^T (1 − η_j λ^(k))² v_0^(k) + λ^(k) σ² Σ_{j=1}^T η_j² Π_{i=j+1}^T (1 − η_i λ^(k))²
≤ exp(−2 Σ_{j=1}^T η_j λ^(k)) v_0^(k) + λ^(k) σ² Σ_{j=1}^T η_j² exp(−2 Σ_{i=j+1}^T η_i λ^(k)).
We will show that for every k, we have
v_T^(k) ≤ v_0^(k)/T³ + 50 log² κ / (λ^(k) T) · σ²,    (5)
which directly implies the theorem.
We first consider the first T/3 steps. The guarantee that we will prove for these iterations is: for any t ≤ A, v_t^(k) ≤ (1 − λ^(k)/L)^{2t} v_0^(k) + σ²/L.
This can be proved easily by induction. Clearly this is true when t = 0. Suppose it is true for t−1, and consider step t. By the recursion for v_t^(k) we know
v_t^(k) = (1 − λ^(k)/L)² v_{t−1}^(k) + λ^(k)σ²/L²
≤ (1 − λ^(k)/L)^{2t} v_0^(k) + (σ²/L) ((1 − λ^(k)/L)² + λ^(k)/L)
≤ (1 − λ^(k)/L)^{2t} v_0^(k) + σ²/L.
Here the second step uses the induction hypothesis and the third step uses the fact that (1−x)² + x ≤ 1 when x ∈ [0, 1]. In particular, since (1 − λ^(k)/L)^{2T/3} ≤ (1 − 1/κ)^{2T/3} ≤ (1 − 1/κ)^{3κ log T} ≤ 1/T³, we know that at the end of the first part, v_A^(k) ≤ v_0^(k)/T³ + σ²/L.
In the second T/3 steps, the guarantee is: for any t ≤ T/3, v_{A+t}^(k) ≤ v_0^(k)/T³ + 2η_{A+t}σ².
We will again prove this by induction. The base case (t = 0) follows immediately from the guarantee for the first part. Suppose this is true for A+t−1, and consider A+t. Again by the recursion we know
v_{A+t}^(k) = (1 − λ^(k)η_{A+t−1})² v_{A+t−1}^(k) + λ^(k)σ²η_{A+t−1}²
≤ v_0^(k)/T³ + 2η_{A+t−1}σ² ((1 − λ^(k)η_{A+t−1})² + (1/2)λ^(k)η_{A+t−1})
≤ v_0^(k)/T³ + 2η_{A+t−1}σ² (1 − (1/2)µη_{A+t−1})
≤ v_0^(k)/T³ + 2η_{A+t}σ².
Here the last line uses the fact that 2η_{A+t−1}(1 − (1/2)µη_{A+t−1}) ≤ 2η_{A+t}, which is easy to verify from our choice of η. Therefore, at the end of the second part, we have v_B^(k) ≤ v_0^(k)/T³ + 2σ²/(µ(κ + T/6)).
Finally, we analyze the third part. Let T̂ = T/(3 log₂ κ); we consider the variance v_{B+ℓT̂}^(k) at the end of each phase. We make the following claim, which we prove by induction:
Claim 2. Suppose 2^ℓ · µ ≤ λ^(k); then
v_{B+ℓT̂}^(k) ≤ v_B^(k) exp(−3ℓ) + 2T̂ η_ℓ² λ^(k) σ².
Proof. We prove this by induction. When ℓ = 0, clearly v_B^(k) ≤ v_B^(k), so the claim is true. Suppose the claim is true for ℓ−1, and consider what happens after the algorithm uses η_ℓ for T̂ steps. By the recursion for the variance we have
v_{B+ℓT̂}^(k) ≤ v_{B+(ℓ−1)T̂}^(k) · exp(−2η_ℓ · λ^(k) T̂) + T̂ η_ℓ² λ^(k) σ².
Since 2^ℓ · µ ≤ λ^(k), we know exp(−2η_ℓ · λ^(k) T̂) ≤ exp(−3). Therefore by the induction hypothesis we have
v_{B+ℓT̂}^(k) ≤ v_B^(k) exp(−3ℓ) + exp(−3) · 2T̂ η_{ℓ−1}² λ^(k) σ² + T̂ η_ℓ² λ^(k) σ² ≤ v_B^(k) exp(−3ℓ) + 2T̂ η_ℓ² λ^(k) σ².
This finishes the induction.
By Claim 2, let ℓ* denote the number satisfying 2^{ℓ*} · µ ≤ λ^(k) < 2^{ℓ*+1} · µ; by this choice we know µ/λ^(k) ≥ (1/2) exp(−3ℓ*), and we have
v_T^(k) ≤ v_{B+ℓ*T̂}^(k) ≤ v_B^(k) exp(−3ℓ*) + 2T̂ η_{ℓ*}² λ^(k) σ²
≤ v_0^(k)/T³ + 24σ²/(λ^(k)T) + 50 log² κ / (3λ^(k)T) · σ²
≤ v_0^(k)/T³ + 50 log² κ / (λ^(k)T) · σ².
Therefore, the function value is bounded by E[f(w_T)] = Σ_{k=1}^d λ^(k) v_T^(k) ≤ f(w_0)/T³ + 50 log² κ / T · σ²d.
C PROOFS OF RESULTS IN SECTION 5
All of our counter-examples in this section use the same simple function. Let H be a diagonal matrix with d/2 eigenvalues equal to κ and the other d/2 eigenvalues equal to 1. Intuitively, we will show that in order to have a small error in the first eigendirection (with eigenvalue κ), one needs to set a small learning rate η_t, which would be too small to achieve a small error in the second eigendirection (with eigenvalue 1). As a useful tool, we decompose the variance in the two directions corresponding to the κ eigenvalue and the 1 eigenvalue respectively as follows:
v_T^(1) := E[(w_T^(1) − (w*)^(1))²] = Π_{j=1}^T (1 − η_j κ)² v_0^(1) + κσ² Σ_{j=1}^T η_j² Π_{i=j+1}^T (1 − η_i κ)²
≥ exp(−2 Σ_{j=1}^T η_j κ) v_0^(1) + κσ² Σ_{j=1}^T η_j² exp(−2 Σ_{i=j+1}^T η_i κ), and    (6)
v_T^(2) := E[(w_T^(2) − (w*)^(2))²] = Π_{j=1}^T (1 − η_j)² v_0^(2) + σ² Σ_{j=1}^T η_j² Π_{i=j+1}^T (1 − η_i)²
≥ exp(−2 Σ_{j=1}^T η_j) v_0^(2) + σ² Σ_{j=1}^T η_j² exp(−2 Σ_{i=j+1}^T η_i).    (7)
Proof of Theorem 5. Fix τ = κ/(C log(κ + 1)), where C is a universal constant that we choose later. We need to exhibit that the lim sup is larger than τ. For simplicity we will also round κ up to the nearest integer.
Let T be a given number. Our goal is to exhibit a T̃ > T such that f_lin(w_T̃)/(σ²/T̃) ≥ τ. Given the step size sequence η_t, consider the sequence of numbers T_0 = T, T_1, ..., T_κ such that T_i is the first number satisfying
1/κ ≤ Σ_{t=T_{i−1}+1}^{T_i} η_t ≤ 3/κ.
Note that such a number always exists because all the step sizes are at most 2/κ. We will also let Δ_i be T_i − T_{i−1}. Firstly, from (6) and (7), we see that Σ_t η_t = ∞; otherwise, the bias will never decay to zero. If f(w_{T_{i−1}+Δ_i}) > τσ²d/(T_{i−1}+Δ_i) for some i = 1, ..., κ, we are done. If not, we obtain the following relations:
σ²/Δ_1 ≤ σ² Σ_{t=1}^{Δ_1} η_{T_0+t}² ≤ (exp(3)/κ) · E[(w_{T_0+Δ_1}^(1) − (w*)^(1))²] ≤ exp(3) f_lin(w_{T_0+Δ_1}) ≤ exp(3) τσ² / (T_0 + Δ_1)
⇒ T_0 ≤ (exp(3)τ − 1) Δ_1.
Here the second inequality is based on (6). We will use C_1 to denote exp(3). Similarly, we have
σ²/Δ_2 ≤ σ² Σ_{t=1}^{Δ_2} η_{T_1+t}² ≤ (C_1/κ) E[(w_{T_1+Δ_2}^(1) − (w*)^(1))²] ≤ C_1 f_lin(w_{T_1+Δ_2}) ≤ C_1 τσ² / (T_1 + Δ_2)
⇒ T_1 ≤ (C_1τ − 1) Δ_2 ⇒ T_0 ≤ ((C_1τ − 1)²/(C_1τ)) Δ_2.
Repeating this argument, we can show that
T = T_0 ≤ ((C_1τ − 1)^i / (C_1τ)^{i−1}) Δ_i and T_i ≤ ((C_1τ − 1)^{j−i} / (C_1τ)^{j−i−1}) Δ_j for all i < j.
We will use i = 1 in particular, which specializes to
T_1 ≤ ((C_1τ − 1)^{j−1} / (C_1τ)^{j−2}) Δ_j for all j ≥ 2.
Using the above inequality, we can lower bound the sum of the Δ_j as
Σ_{j=2}^κ Δ_j ≥ T_1 · Σ_{j=2}^κ (C_1τ)^{j−2}/(C_1τ − 1)^{j−1} ≥ T_1 · (1/(C_1τ)) · Σ_{j=2}^κ (1 + 1/(C_1τ))^{j−2} ≥ T_1 · (1/(C_1τ)) · exp(κ/(C_1τ)).    (8)
This means that
E[f(w_{T_κ})] ≥ (d/2) · E[(w_{T_κ}^(2) − (w*)^(2))²] ≥ exp(−6) σ²d · Σ_{i=1}^{Δ_1} η_{T+i}² ≥ exp(−6) σ²d / Δ_1 ≥ exp(−6) σ²d / T_1 ≥ (exp(κ/(C_1τ) − 3)/(C_1τ)) · σ²d / Σ_{j=2}^κ Δ_j,
where we used (8) in the last step. Rearranging, we obtain
E[f(w_{T_κ})] / (σ²d/T_κ) ≥ exp(κ/(C_1τ) − 3) / (C_1τ).
If we choose a large enough C (e.g., 3C_1), the right hand side is at least exp((C/C_1) log(κ+1) − 3)/κ ≥ κ.
To prove Theorem 6, we rely on the following key lemma, which says that if a query point w_T is bad (in the sense that it has expected value more than 10τσ²d/T), then it takes at least Ω(T/τ) steps to bring the error back down.
Lemma 8. There exist universal constants C_1, C_2 > 0 such that for any τ ≤ κ/(C·C_1 log(κ+1)), where C is the constant in Theorem 5, the following holds: suppose that at step T the query point w_T satisfies f(w_T) ≥ C_1τσ²d/T; then for all T̃ ∈ [T, (1 + 1/(C_2τ))T] we have E[f(w_T̃)] ≥ τσ²d/T ≥ τσ²d/T̃.
Proof of Lemma 8. Since f(w_T) ≥ C_1τσ²d/T and f(w_T) = (d/2)(κ(w_T^(1) − (w*)^(1))² + (w_T^(2) − (w*)^(2))²), we know that either (w_T^(1) − (w*)^(1))² ≥ C_1τσ²/(2κT) or (w_T^(2) − (w*)^(2))² ≥ C_1τσ²/(2T). Either way, we have a coordinate i with eigenvalue λ_i (κ or 1) such that (w_T^(i) − (w*)^(i))² ≥ C_1τσ²/(2Tλ_i).
Similarly as before, choose Δ to be the first point such that
η_{T+1} + η_{T+2} + ... + η_{T+Δ} ∈ [1/λ_i, 3/λ_i].
First, by (6) or (7), we know that for any T ≤ T̃ ≤ T + Δ, E[(w_T̃^(i) − (w*)^(i))²] ≥ exp(−6) C_1τσ²/(2λ_iT) just from the first term. When we choose C_1 to be large enough, the contribution to the function value from this direction alone is larger than τσ²/T. Therefore every query in [T, T + Δ] is still bad.
We consider two cases based on the value of S_2 := Σ_{T̃=T+1}^{T+Δ} η_T̃².
If S_2 ≤ C_2τ/(λ_i²T) (where C_2 is a large enough universal constant chosen later), then by Cauchy-Schwarz we know
S_2 · Δ ≥ (Σ_{T̃=T+1}^{T+Δ} η_T̃)² ≥ 1/λ_i².
Therefore Δ ≥ T/(C_2τ), and we are done. If S_2 > C_2τ/(λ_i²T), by equations (6) and (7) we know
E[(w_{T+Δ}^(i) − (w*)^(i))²] ≥ σ² Σ_{T̃=T+1}^{T+Δ} η_T̃² exp(−2λ_i Σ_{j=T̃+1}^{T+Δ} η_j) ≥ exp(−6) σ² Σ_{T̃=T+1}^{T+Δ} η_T̃² ≥ exp(−6) · C_2τσ²/(λ_i²T).
Here the first inequality just uses the second term in equation (6) or (7), the second inequality is because Σ_{j=T̃+1}^{T+Δ} η_j ≤ Σ_{j=T+1}^{T+Δ} η_j ≤ 3/λ_i, and the last inequality is just based on the value of S_2. In this case, as long as C_2 is large enough, T + Δ is also a point with E[f(w_{T+Δ})] ≥ λ_i E[(w_{T+Δ}^(i) − (w*)^(i))²] ≥ C_1τσ²/(T + Δ), so we can repeat the argument there. Eventually we either stop because we hit case 1 (S_2 ≤ C_2τ/(λ_i²T)), or case 2 (S_2 > C_2τ/(λ_i²T)) happens more than T/(C_2τ) times. In either case, we know that for any T̃ ∈ [T, (1 + 1/(C_2τ))T], E[f(w_T̃)] ≥ τσ²/T ≥ τσ²/T̃, as the lemma claimed.
Theorem 6 is an immediate corollary of Theorem 5 and Lemma 8.
Proof of Theorem 7. This result follows by running the constant and cut scheme for a fixed time horizon T and then increasing the time horizon to κ·T. The learning rate of the initial phase for the new horizon T′ = κ·T is 1/(µT′) = 1/(µ·κT) = 1/(LT), which is the final learning rate for time horizon T. Theorem 2 then directly implies the current theorem.
D DETAILS OF EXPERIMENTAL SETUP
D.1 SYNTHETIC 2-D QUADRATIC EXPERIMENTS
As mentioned in the main paper, we consider three condition numbers namely κ ∈ {50, 100, 200}. We run all experiments for a total of 200κ iterations. The two eigenvalues of the Hessian are κ and 1 respectively, and noise level is σ2 = 1 and we average our results with two random seeds. All our grid search results are conducted on a 10× 10 grid of learning rates × decay factor and whenever a best run lands at the edge of the grid, the grid is extended so that we have the best run in the interior of the gridsearch.
For the O(1/t) learning rate, we search for the decay parameter over 10 points logarithmically spaced between {1/(100κ), 3000/κ}. The starting learning rate is searched over 10 points logarithmically spaced between {1/(20κ), 1000/κ}.
For the O(1/√t) learning rate, the decay parameter is searched over 10 logarithmically spaced points between {100/κ, 200000.0/κ}. The starting learning rate is searched between {0.01, 2}. For the exponential learning rate schemes, the decay parameter is searched between {exp(−2/N), exp(−10⁶/N)}. The learning rate is searched between {1/5000, 1/10}.
D.2 NON-CONVEX EXPERIMENTS ON CIFAR-10 DATASET WITH A 44-LAYER RESIDUAL NET
As mentioned in the main paper, for all the experiments, we use the Nesterov’s Accelerated gradient method (Nesterov, 1983) implemented in pytorch 12 with a momentum set to 0.9 and batchsize set to 128, total number of training epochs set to 100, `2 regularization set to 0.0005.
With regards to learning rates, we consider 10 values geometrically spaced as {1, 0.6, ..., 0.01}. To set the decay factor for any of the schemes (1), (2), and (3), we use the following rule. Suppose we have a desired learning rate that we wish to use towards the end of the optimization (say, something that is 100 times lower than the starting learning rate, which is a reasonable estimate of what is typically employed in practice); this can be used to obtain a decay factor for the corresponding decay scheme. In our case, we found it advantageous to use an additively spaced grid for the learning rate γ_t at the 80th epoch, i.e., one which is searched over the range {0.0001, 0.0002, ..., 0.0009, 0.001, ..., 0.009}, and we cap the minimum possible learning rate at 0.0001 to ensure that the optimization routine makes progress. For any experiment that yields a best performing grid search parameter at the edge of the grid, we extend the grid to ensure that the finally chosen hyperparameter lies in the interior of the grid. All our grid searches are run such that we separate a tenth of the training dataset as a validation set and train on the remaining 9/10th of the dataset. Once the best grid search parameter is chosen, we train on the entire training dataset and
12https://github.com/pytorch
evaluate on the test dataset and present the result of the final model (instead of choosing the best possible model found during the course of optimization). | 1. What are the main contributions and findings of the paper regarding learning rate proposals?
2. What are the strengths and weaknesses of the theoretical analysis, particularly in the proof of Theorem 3?
3. Do you have any concerns or suggestions regarding the experimental design and comparison with other learning rate schemes?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Review | Review
This work provides theoretical insights on recent learning rate proposals such as Cyclical Learning Rates (Smith et al.). The authors focus on stochastic approximation i.e. how large is the SGD loss as a function of condition number and horizon. The critical contribution is the theoretical benefit of oscillating learning rates over more traditional learning rate schemes. Authors provide novel upper/lower bounds to establish benefit of oscillating LR, support their theory with experiments and provide insights on finite horizon learning rate selection. An important drawback is that results only apply to linear regression which is a fairly simple setup.
I have two important comments regarding this work:
1) I believe proof of Theorem 3 has a bug. In the proof, authors use the inequality
(1-gamma_t lambda^k)^2 < exp(-2lambda^k gamma_t).
Obviously this can only be correct for gamma_t lambda^k<1. However, checking the setup of the problem, it can be seen that for largest eigenvalue and gamma_0, ignoring log factors:
gamma_0L = L/(mu T_e)=kappa / T_e=kappa/T.
Since, no restriction is imposed on T, gamma_0L can be as large as O(kappa) and invalidates the above inequality. So T should be T>O(kappa). I am not sure if this affects the overall statement or the remaining argument.
2) The paper can benefit from more detailed experiments (e.g. Figs 1 and 2). Arguably the most obvious baseline is "constant learning rate". However, authors compare to 1/T or 1/sqrt(T) learning rates. It is not at all clear from current experiments, if the proposed approach beats a good constant LR choice.
I am happy to increase my score if the comments above are addressed. |
ICLR | Title
How Much Over-parameterization Is Sufficient to Learn Deep ReLU Networks?
Abstract
A recent line of research on deep learning focuses on the extremely over-parameterized setting, and shows that when the network width is larger than a high degree polynomial of the training sample size n and the inverse of the target error ε⁻¹, deep neural networks learned by (stochastic) gradient descent enjoy nice optimization and generalization guarantees. Very recently, it is shown that under certain margin assumptions on the training data, a polylogarithmic width condition suffices for two-layer ReLU networks to converge and generalize (Ji and Telgarsky, 2020). However, whether deep neural networks can be learned with such a mild over-parameterization is still an open question. In this work, we answer this question affirmatively and establish sharper learning guarantees for deep ReLU networks trained by (stochastic) gradient descent. In specific, under certain assumptions made in previous work, our optimization and generalization guarantees hold with network width polylogarithmic in n and ε⁻¹. Our results push the study of over-parameterized deep neural networks towards more practical settings.
1 INTRODUCTION
Deep neural networks have become one of the most important and prevalent machine learning models due to their remarkable power in many real-world applications. However, the success of deep learning has not been well-explained in theory. It remains mysterious why standard optimization algorithms tend to find a globally optimal solution, despite the highly non-convex landscape of the training loss function. Moreover, despite the extremely large amount of parameters, deep neural networks rarely over-fit, and can often generalize well to unseen data and achieve good test accuracy. Understanding these mysterious phenomena on the optimization and generalization of deep neural networks is one of the most fundamental problems in deep learning theory.
Recent breakthroughs have shed light on the optimization and generalization of deep neural networks (DNNs) under the over-parameterized setting, where the hidden layer width is extremely large (much larger than the number of training examples). It has been shown that with the standard random initialization, the training of over-parameterized deep neural networks can be characterized by a kernel function called neural tangent kernel (NTK) (Jacot et al., 2018; Arora et al., 2019b). In the neural tangent kernel regime (or lazy training regime (Chizat et al., 2019)), the neural network function behaves similarly as its first-order Taylor expansion at initialization (Jacot et al., 2018; Lee et al., 2019; Arora et al., 2019b; Cao and Gu, 2019), which enables feasible optimization and generalization analysis. In terms of optimization, a line of work (Du et al., 2019b; Allen-Zhu et al., 2019b; Zou et al., 2019; Zou and Gu, 2019) proved that for sufficiently wide neural networks, (stochastic) gradient descent (GD/SGD) can successfully find a global optimum of the training loss function. For generalization, Allen-Zhu et al. (2019a); Arora et al. (2019a); Cao and Gu (2019) established generalization bounds of neural networks trained with (stochastic) gradient descent, and showed that the neural networks can learn target functions in certain reproducing kernel Hilbert space (RKHS) or the corresponding random feature function class.
Although existing results in the neural tangent kernel regime have provided important insights into the learning of deep neural networks, they require the neural network to be extremely wide. ∗Equal contribution.
The typical requirement on the network width is a high degree polynomial of the training sample size n and the inverse of the target error ε⁻¹. As there still remains a huge gap between such a network width requirement and practice, many attempts have been made to improve the overparameterization condition under various conditions on the training data and model initialization (Oymak and Soltanolkotabi, 2019; Zou and Gu, 2019; Kawaguchi and Huang, 2019; Bai and Lee, 2019). For two-layer ReLU networks, a recent work (Ji and Telgarsky, 2020) showed that when the training data are well separated, polylogarithmic width is sufficient to guarantee good optimization and generalization performance. However, their results cannot be extended to deep ReLU networks since their proof technique relies heavily on the fact that the network model is 1-homogeneous, which cannot be satisfied by DNNs. Therefore, whether deep neural networks can be learned with such a mild over-parameterization is still an open problem.
In this paper, we resolve this open problem by showing that polylogarithmic network width is sufficient to learn DNNs. In particular, unlike the existing works that require the DNNs to behave very close to a linear model (up to some small approximation error), we show that a constant linear approximation error is sufficient to establish nice optimization and generalization guarantees for DNNs. Thanks to the relaxed requirement on the linear approximation error, a milder condition on the network width and tighter bounds on the convergence rate and generalization error can be proved. We summarize our contributions as follows:
• We establish the global convergence guarantee of GD for training deep ReLU networks based on the so-called NTRF function class (Cao and Gu, 2019), a set of linear functions over random features. Specifically, we prove that GD can learn deep ReLU networks with width m = poly(R) to compete with the best function in the NTRF function class, where R is the radius of the NTRF function class.
• We also establish generalization guarantees for both GD and SGD in the same setting. Specifically, we prove a diminishing statistical error for a wide range of network widths m ∈ [Ω̃(1), ∞), while most of the previous generalization bounds in the NTK regime only work in the setting where the network width m is much greater than the sample size n. Moreover, we establish Õ(ε⁻²) and Õ(ε⁻¹) sample complexities for GD and SGD respectively, which are tighter than existing bounds for learning deep ReLU networks (Cao and Gu, 2019), and match the best results when reduced to the two-layer case (Arora et al., 2019b; Ji and Telgarsky, 2020).
• We further generalize our theoretical analysis to scenarios with different data separability assumptions in the literature. We show that if a large fraction of the training data are well separated, the best function in the NTRF function class with radius R = Õ(1) can learn the training data with error up to ε. This, together with our optimization and generalization guarantees, immediately suggests that deep ReLU networks can be learned with network width m = Ω̃(1), which has only a logarithmic dependence on the target error ε and the sample size n. Compared with existing results (Cao and Gu, 2020; Ji and Telgarsky, 2020), which require all training data points to be separated in the NTK regime, our result is stronger since it allows the NTRF function class to misclassify a small proportion of the training data.
For ease of comparison, we summarize our results along with the most related previous results in Table 1, in terms of data assumption, over-parameterization condition and sample complexity. It can be seen that under a data separation assumption (see Sections 4.1, 4.2), our result improves existing results for learning deep neural networks by only requiring a polylog(n, ε⁻¹) network width.
Notation. For two scalars a and b, we denote a ∧ b = min{a, b}. For a vector x ∈ R^d we use ‖x‖₂ to denote its Euclidean norm. For a matrix X, we use ‖X‖₂ and ‖X‖_F to denote its spectral norm and Frobenius norm respectively, and denote by X_{ij} the entry of X at the i-th row and j-th column. Given two matrices X and Y with the same dimension, we denote ⟨X, Y⟩ = Σ_{i,j} X_{ij} Y_{ij}. Given a collection of matrices W = {W_1, ..., W_L} ∈ ⊗_{l=1}^L R^{m_l × m'_l} and a function f(W) over ⊗_{l=1}^L R^{m_l × m'_l}, we denote by ∇_{W_l} f(W) the partial gradient of f(W) with respect to W_l and denote ∇_W f(W) = {∇_{W_l} f(W)}_{l=1}^L. We also denote B(W, τ) = {W′ : max_{l∈[L]} ‖W′_l − W_l‖_F ≤ τ} for τ ≥ 0. For two collections of matrices A = {A_1, ..., A_n} and B = {B_1, ..., B_n}, we denote ⟨A, B⟩ = Σ_{i=1}^n ⟨A_i, B_i⟩ and ‖A‖²_F = Σ_{i=1}^n ‖A_i‖²_F.
Algorithm 1 Gradient descent with random initialization
Input: Number of iterations T, step size η, training set S = {(x_i, y_i)}_{i=1}^n, initialization W^(0)
for t = 1, 2, ..., T do
    Update W^(t) = W^(t−1) − η · ∇_W L_S(W^(t−1)).
end for
Output: W^(0), ..., W^(T).
Given two sequences {x_n} and {y_n}, we denote x_n = O(y_n) if |x_n| ≤ C_1|y_n| for some absolute positive constant C_1, x_n = Ω(y_n) if |x_n| ≥ C_2|y_n| for some absolute positive constant C_2, and x_n = Θ(y_n) if C_3|y_n| ≤ |x_n| ≤ C_4|y_n| for some absolute constants C_3, C_4 > 0. We also use Õ(·) and Ω̃(·) to hide logarithmic factors in O(·) and Ω(·) respectively. Additionally, we denote x_n = poly(y_n) if x_n = O(y_n^D) for some positive constant D, and x_n = polylog(y_n) if x_n = poly(log(y_n)).
2 PRELIMINARIES ON LEARNING NEURAL NETWORKS
In this section, we introduce the problem setting in this paper, including definitions of the neural network and loss functions, and the training algorithms, i.e., GD and SGD with random initialization.
Neural network function. Given an input x ∈ R^d, the output of a deep fully-connected ReLU network is defined as
f_W(x) = m^{1/2} W_L σ(W_{L−1} ··· σ(W_1 x) ···),
where W_1 ∈ R^{m×d}, W_2, ..., W_{L−1} ∈ R^{m×m}, W_L ∈ R^{1×m}, and σ(x) = max{0, x} is the ReLU activation function. Here, without loss of generality, we assume the width of each layer is equal to m. Yet our theoretical results can be easily generalized to the setting with unequal width layers, as long as the smallest width satisfies our overparameterization condition. We denote the collection of all weight matrices as W = {W_1, ..., W_L}.
Loss function. Given a training dataset {x_i, y_i}_{i=1,...,n} with input x_i ∈ R^d and output y_i ∈ {−1, +1}, we define the training loss function as
L_S(W) = (1/n) Σ_{i=1}^n L_i(W),
where L_i(W) = ℓ(y_i f_W(x_i)) = log(1 + exp(−y_i f_W(x_i))) is the cross-entropy loss.
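For concreteness, a direct transcription of this parameterization and loss (for one example at a time) is sketched below; it is illustrative rather than an optimized implementation, and the helper names are ours.

```python
import torch

def forward(Ws, x):
    """f_W(x) = sqrt(m) * W_L relu(W_{L-1} ... relu(W_1 x) ...) for a single input x."""
    m = Ws[0].shape[0]
    h = x
    for W in Ws[:-1]:
        h = torch.relu(W @ h)
    return (m ** 0.5) * (Ws[-1] @ h).squeeze()

def training_loss(Ws, xs, ys):
    """L_S(W) = (1/n) * sum_i log(1 + exp(-y_i * f_W(x_i))) with labels y_i in {-1, +1}."""
    losses = [torch.log1p(torch.exp(-y * forward(Ws, x))) for x, y in zip(xs, ys)]
    return torch.stack(losses).mean()
```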
Algorithms. We consider both GD and SGD with Gaussian random initialization. These two algorithms are displayed in Algorithms 1 and 2 respectively. Specifically, the entries in W_1^(0), ..., W_{L−1}^(0) are generated independently from the univariate Gaussian distribution N(0, 2/m) and the entries in W_L^(0) are generated independently from N(0, 1/m). For GD, we use the full gradient to update the model parameters. For SGD, we use a new training data point in each iteration.
Note that our initialization method in Algorithms 1, 2 is the same as the widely used He initialization (He et al., 2015). Our neural network parameterization is also consistent with the parameterization used in prior work on NTK (Jacot et al., 2018; Allen-Zhu et al., 2019b; Du et al., 2019a; Arora et al., 2019b; Cao and Gu, 2019).
Algorithm 2 Stochastic gradient descent (SGD) with random initialization
Input: Number of iterations n, step size η, initialization W^(0)
for i = 1, 2, ..., n do
    Draw (x_i, y_i) from D and compute the corresponding gradient ∇_W L_i(W^(i−1)).
    Update W^(i) = W^(i−1) − η · ∇_W L_i(W^(i−1)).
end for
Output: Randomly choose Ŵ uniformly from {W^(0), ..., W^(n−1)}.
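A sketch of the initialization scheme and of the SGD loop of Algorithm 2 is given below, reusing the forward helper from the previous sketch; the data_stream iterator standing in for fresh draws from D is an assumption of this sketch.

```python
import torch

def init_weights(d, m, L):
    """Random initialization used in Algorithms 1 and 2: N(0, 2/m) entries for the
    first L-1 layers and N(0, 1/m) entries for the output layer."""
    Ws = [torch.randn(m, d) * (2.0 / m) ** 0.5]
    Ws += [torch.randn(m, m) * (2.0 / m) ** 0.5 for _ in range(L - 2)]
    Ws += [torch.randn(1, m) * (1.0 / m) ** 0.5]
    return [W.requires_grad_(True) for W in Ws]

def sgd(Ws, data_stream, eta):
    """One pass of Algorithm 2: each step uses one fresh example (x, y), y in {-1, +1}."""
    iterates = [[W.detach().clone() for W in Ws]]
    for x, y in data_stream:
        loss = torch.log1p(torch.exp(-y * forward(Ws, x)))   # cross-entropy loss L_i(W)
        loss.backward()
        with torch.no_grad():
            for W in Ws:
                W -= eta * W.grad
                W.grad.zero_()
        iterates.append([W.detach().clone() for W in Ws])
    return iterates   # the algorithm outputs a uniformly random element of this list
```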
3 MAIN THEORY
In this section, we present the optimization and generalization guarantees of GD and SGD for learning deep ReLU networks. We first make the following assumption on the training data points. Assumption 3.1. All training data points satisfy }xi}2 “ 1, i “ 1, . . . , n.
This assumption has been widely made in many previous works (Allen-Zhu et al., 2019b;c; Du et al., 2019b;a; Zou et al., 2019) in order to simplify the theoretical analysis. It can be relaxed to only requiring that ‖x_i‖₂ is upper and lower bounded by some constants.
In the following, we give the definition of Neural Tangent Random Feature (NTRF) (Cao and Gu, 2019), which characterizes the functions learnable by over-parameterized ReLU networks.
Definition 3.2 (Neural Tangent Random Feature, (Cao and Gu, 2019)). Let W^(0) be the initialization weights, and let F_{W^(0),W}(x) = f_{W^(0)}(x) + ⟨∇f_{W^(0)}(x), W − W^(0)⟩ be a function of the input x. Then the NTRF function class is defined as
F(W^(0), R) = {F_{W^(0),W}(·) : W ∈ B(W^(0), R·m^{−1/2})}.
The function class F(W^(0), R) consists of linear models over random features defined based on the network gradients at the initialization. Therefore it captures the key “almost linear” property of wide neural networks in the NTK regime (Lee et al., 2019; Cao and Gu, 2019). In this paper, we use the NTRF function class as a reference class to measure the difficulty of a learning problem. In what follows, we deliver our main theoretical results regarding the optimization and generalization guarantees of learning deep ReLU networks. We study both GD and SGD with random initialization (presented in Algorithms 1 and 2).
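The NTRF function can be evaluated with automatic differentiation; the sketch below (again reusing the forward helper above) computes F_{W^(0),W}(x) for given W^(0), W and a single input x, and is ours rather than part of the paper.

```python
import torch

def ntrf_predict(W0, W, x):
    """Evaluate F_{W(0),W}(x) = f_{W(0)}(x) + <grad_W f_{W(0)}(x), W - W(0)>."""
    W0 = [w.detach().requires_grad_(True) for w in W0]
    out = forward(W0, x)                      # f at the initialization W(0)
    grads = torch.autograd.grad(out, W0)      # gradient of f_{W(0)}(x) w.r.t. each layer
    correction = sum(torch.sum(g * (w - w0)) for g, w, w0 in zip(grads, W, W0))
    return out + correction
```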
3.1 GRADIENT DESCENT
The following theorem establishes the optimization guarantee of GD for training deep ReLU networks for binary classification.
Theorem 3.3. For δ, R > 0, let ε_NTRF = inf_{F ∈ F(W^(0),R)} n^{−1} Σ_{i=1}^n ℓ[y_i F(x_i)] be the minimum training loss achievable by functions in F(W^(0), R). Then there exists
m*(δ, R, L) = Õ(poly(R, L) · log^{4/3}(n/δ)),
such that if m ≥ m*(δ, R, L), with probability at least 1 − δ over the initialization, GD with step size η = Θ(L^{−1} m^{−1}) can train a neural network to achieve at most 3ε_NTRF training loss within T = O(L²R² ε_NTRF^{−1}) iterations.
Theorem 3.3 shows that the deep ReLU network trained by GD can compete with the best function in the NTRF function class F(W^(0), R) if the network width has a polynomial dependency on R and L and a logarithmic dependency on n and 1/δ. Moreover, if the NTRF function class with R = Õ(1) can learn the training data well (i.e., ε_NTRF is less than a small target error ε), a polylogarithmic (in terms of n and ε⁻¹) network width suffices to guarantee the global convergence of GD, which directly improves the over-parameterization condition in the most related work (Cao and Gu, 2019). Besides, we remark here that this assumption on the NTRF function class can be easily satisfied when the training data admits certain separability conditions, which we discuss in detail in Section 4.
Compared with the results in (Ji and Telgarsky, 2020) which give similar network width requirements for two-layer networks, our result works for deep networks. Moreover, while Ji and Telgarsky (2020)
essentially required all training data to be separable by a function in the NTRF function class with a constant margin, our result does not require such data separation assumptions, and allows the NTRF function class to misclassify a small proportion of the training data points∗.
We now characterize the generalization performance of neural networks trained by GD. We denote by L_D^{0−1}(W) = E_{(x,y)∼D}[1{f_W(x)·y < 0}] the expected 0-1 loss (i.e., expected error) of f_W(x).
Theorem 3.4. Under the same assumptions as Theorem 3.3, with probability at least 1 − δ, the iterate W^(t) of Algorithm 1 satisfies
L_D^{0−1}(W^(t)) ≤ 2 L_S(W^(t)) + Õ( 4^L L² R √(m/n) ∧ (L^{3/2} R/√n + L^{11/3} R^{4/3}/m^{1/6}) ) + O(√(log(1/δ)/n))
for all t = 0, ..., T.
Theorem 3.4 shows that the test error of the trained neural network can be bounded by its training error plus statistical error terms. Note that the statistical error is the minimum of two terms, 4^L L²R√(m/n) and L^{3/2}R/√n + L^{11/3}R^{4/3}/m^{1/6}. Depending on the network width m, one of these two terms will dominate and diminishes for large n: (1) if m = o(n), the statistical error is 4^L L²R√(m/n), which diminishes as n increases; and (2) if m = Ω(n), the statistical error is L^{3/2}R/√n + L^{11/3}R^{4/3}/m^{1/6}, which again goes to zero as n increases. Moreover, in this paper we have a specific focus on the setting m = Õ(1), under which Theorem 3.4 gives a statistical error of order Õ(n^{−1/2}). This distinguishes our result from previous generalization bounds for deep networks (Cao and Gu, 2020; 2019), which cannot be applied to the setting m = Õ(1). We note that for two-layer ReLU networks (i.e., L = 2), Ji and Telgarsky (2020) prove a tighter Õ(1/n^{1/2}) generalization error bound regardless of the network width m, while our result (Theorem 3.4), in the two-layer case, can only give an Õ(1/n^{1/2}) generalization error bound when m = Õ(1) or m = Ω̃(n³). However, different from our proof technique, which essentially uses the (approximate) linearity of the neural network function, their proof technique relies heavily on the 1-homogeneous property of the neural network, which restricts their theory to the two-layer case. An interesting research direction is to explore whether an Õ(1/n^{1/2}) generalization error bound can also be established for deep networks (regardless of the network width), which we leave as future work.
3.2 STOCHASTIC GRADIENT DESCENT
Here we study the performance of SGD for training deep ReLU networks. The following theorem establishes a generalization error bound for the output of SGD.
Theorem 3.5. For δ, R > 0, let ε_NTRF = inf_{F ∈ F(W^(0),R)} n^{−1} Σ_{i=1}^n ℓ[y_i F(x_i)] be the minimum training loss achievable by functions in F(W^(0), R). Then there exists
m*(δ, R, L) = Õ(poly(R, L) · log^{4/3}(n/δ)),
such that if m ≥ m*(δ, R, L), with probability at least 1 − δ, SGD with step size η = Θ(m^{−1} · (L R² n^{−1} ε_NTRF^{−1} ∧ L^{−1})) achieves
E[L_D^{0−1}(Ŵ)] ≤ 8L²R²/n + 8 log(2/δ)/n + 24 ε_NTRF,
where the expectation is taken over the uniform draw of Ŵ from {W^(0), ..., W^(n−1)}.
For any ε > 0, Theorem 3.5 gives an Õ(ε⁻¹) sample complexity for deep ReLU networks trained with SGD to achieve O(ε_NTRF + ε) test error. Our result extends the result for two-layer networks proved in (Ji and Telgarsky, 2020) to multi-layer networks. Theorem 3.5 also provides sharper results compared with Allen-Zhu et al. (2019a); Cao and Gu (2019) in two aspects: (1) the sample complexity is improved from n = Õ(ε⁻²) to n = Õ(ε⁻¹); and (2) the overparameterization condition is improved from m ≥ poly(ε⁻¹) to m = Ω̃(1). ∗A detailed discussion is given in Section 4.2.
4 DISCUSSION ON THE NTRF CLASS
Our theoretical results in Section 3 rely on the radius R of the NTRF function class F(W^(0), R) and the minimum training loss achievable by functions in F(W^(0), R), i.e., ε_NTRF. Note that a larger R naturally implies a smaller ε_NTRF, but also leads to worse conditions on m. In this section, for any (arbitrarily small) target error rate ε > 0, we discuss various data assumptions studied in the literature under which our results can lead to O(ε) training/test errors, and specify the network width requirement.
4.1 DATA SEPARABILITY BY NEURAL TANGENT RANDOM FEATURE
In this subsection, we consider the setting where a large fraction of the training data can be linearly separated by the neural tangent random features. The assumption is stated as follows.
Assumption 4.1. There exists a collection of matrices U* = {U*_1, ..., U*_L} satisfying Σ_{l=1}^L ‖U*_l‖²_F = 1, such that for at least a (1 − ρ) fraction of the training data we have
y_i ⟨∇f_{W^(0)}(x_i), U*⟩ ≥ m^{1/2} γ,
where γ is an absolute positive constant† and ρ ∈ [0, 1).
The following proposition provides an upper bound on ε_NTRF under Assumption 4.1 for a suitable choice of R.
Proposition 4.2. Under Assumption 4.1, for any ε, δ > 0, if R ≥ C[log^{1/2}(n/δ) + log(1/ε)]/γ for some absolute constant C, then with probability at least 1 − δ,
ε_NTRF := inf_{F ∈ F(W^(0),R)} n^{−1} Σ_{i=1}^n ℓ(y_i F(x_i)) ≤ ε + ρ · O(R).
Proposition 4.2 covers the setting where the NTRF function class is allowed to misclassify training data, while most existing work typically assumes that all training data can be perfectly separated with a constant margin (i.e., ρ = 0) (Ji and Telgarsky, 2020; Shamir, 2020). Our results show that for a sufficiently small misclassification ratio ρ = O(ε), we have ε_NTRF = Õ(ε) by choosing the radius parameter R logarithmic in n, δ⁻¹, and ε⁻¹. Substituting this result into Theorems 3.3, 3.4 and 3.5, it can be shown that a neural network with width m = poly(L, log(n/δ), log(1/ε)) suffices to guarantee good optimization and generalization performance for both GD and SGD. Consequently, we obtain that the bounds on the test error for GD and SGD are Õ(n^{−1/2}) and Õ(n^{−1}) respectively.
4.2 DATA SEPARABILITY BY SHALLOW NEURAL TANGENT MODEL
In this subsection, we study the data separation assumption made in Ji and Telgarsky (2020) and show that our results cover this particular setting. We first restate the assumption as follows.
Assumption 4.3. There exists u(·): R^d → R^d and γ ≥ 0 such that ‖u(z)‖_2 ≤ 1 for all z ∈ R^d, and
y_i ∫_{R^d} σ'(⟨z, x_i⟩) · ⟨u(z), x_i⟩ dµ_N(z) ≥ γ
for all i ∈ [n], where µ_N(·) denotes the standard normal distribution.
Assumption 4.3 is related to the linear separability of the gradients of the first layer parameters at random initialization, where the randomness is replaced with an integral by taking the infinite width limit. Note that similar assumptions have also been studied in (Cao and Gu, 2020; Nitanda and Suzuki, 2019; Frei et al., 2019). The assumption made in (Cao and Gu, 2020; Frei et al., 2019) uses gradients with respect to the second layer weights instead of the first layer ones. In the following, we mainly focus on Assumption 4.3, while our result can also be generalized to cover the setting in (Cao and Gu, 2020; Frei et al., 2019).
†The factor m1{2 is introduced here since }∇Wp0qfpxiq}F is typically of order Opm 1{2q.
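For intuition, the integral in Assumption 4.3 can be approximated by Monte Carlo sampling z ∼ N(0, I_d). The minimal Python sketch below does this for a user-supplied mapping u(·); the particular choice of u (a fixed unit vector) and the synthetic data are placeholders of ours, only meant to show how the margin γ could be checked numerically.

import numpy as np

def estimate_margin(X, y, u_fn, num_samples=10000, seed=0):
    """Monte Carlo estimate of y_i * E_{z ~ N(0, I)}[ sigma'(<z, x_i>) <u(z), x_i> ] for each i."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    margins = np.zeros(n)
    for _ in range(num_samples):
        z = rng.normal(size=d)
        u = u_fn(z)                              # Assumption 4.3 requires ||u(z)||_2 <= 1
        act = (X @ z > 0).astype(float)          # sigma'(<z, x_i>) for ReLU
        margins += y * act * (X @ u)
    return margins / num_samples                 # Assumption 4.3 asks the minimum over i to be >= gamma

# toy usage: u(z) is a fixed unit vector (an illustrative placeholder choice)
rng = np.random.default_rng(1)
X = rng.normal(size=(20, 5)); X /= np.linalg.norm(X, axis=1, keepdims=True)
y = np.sign(X[:, 0])                             # labels separable along the first coordinate
w = np.zeros(5); w[0] = 1.0
print(estimate_margin(X, y, lambda z: w).min())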
In order to make a fair comparison, we reduce our results for multilayer networks to the two-layer setting. In this case, the neural network function takes form
f_W(x) = m^{1/2} W_2 σ(W_1 x). Then we provide the following proposition, which states that Assumption 4.3 implies a certain choice of R = Õ(1) such that the minimum training loss achieved by functions in the NTRF function class F(W^(0), R) satisfies ε_NTRF = O(ε), where ε is the target error.
Proposition 4.4. Suppose the training data satisfy Assumption 4.3. For any ε, δ > 0, let R = C[ log(n/δ) + log(1/ε) ]/γ for some large enough absolute constant C. If the neural network width satisfies m = Ω( log(n/δ)/γ^2 ), then with probability at least 1 − δ, there exists F_{W^(0),W}(·) ∈ F(W^(0), R) such that
ℓ( y_i · F_{W^(0),W}(x_i) ) ≤ ε,  ∀i ∈ [n].
Proposition 4.4 shows that under Assumption 4.3, there exists F_{W^(0),W}(·) ∈ F(W^(0), R) with R = Õ(1/γ) such that the cross-entropy loss of F_{W^(0),W}(·) at each training data point is bounded by ε. This implies that ε_NTRF ≤ ε. Moreover, by applying Theorem 3.3 with L = 2, the condition on the neural network width becomes m = Ω̃(1/γ^8)‡, which matches the results proved in Ji and Telgarsky (2020). Moreover, plugging these results on m and ε_NTRF into Theorems 3.4 and 3.5, we can conclude that the bounds on the test error for GD and SGD are Õ(n^{-1/2}) and Õ(n^{-1}) respectively.
4.3 CLASS-DEPENDENT DATA NONDEGENERATION
In previous subsections, we have shown that under certain data separation conditions ε_NTRF can be sufficiently small while the corresponding NTRF function class has R of order Õ(1). Thus neural networks with polylogarithmic width enjoy nice optimization and generalization guarantees. In this part, we consider the following much milder data separability assumption made in Zou et al. (2019).
Assumption 4.5. For all i ≠ i', if y_i ≠ y_{i'}, then ‖x_i − x_{i'}‖_2 ≥ φ for some absolute constant φ.
In contrast to the conventional data nondegeneration assumption (i.e., no duplicate data points) made in Allen-Zhu et al. (2019b); Du et al. (2019b;a); Zou and Gu (2019)§, Assumption 4.5 only requires that the data points from different classes are nondegenerate, thus we call it class-dependent data nondegeneration.
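Assumption 4.5 is straightforward to verify on a given dataset: the constant φ is simply the smallest distance between any pair of differently labeled points. The short Python sketch below computes it; the function name and the synthetic data are ours.

import numpy as np

def cross_class_separation(X, y):
    """Return min_{y_i != y_i'} ||x_i - x_i'||_2, i.e. the phi in Assumption 4.5."""
    pos, neg = X[y > 0], X[y < 0]
    # pairwise distances between the two classes only
    diffs = pos[:, None, :] - neg[None, :, :]
    return np.sqrt((diffs ** 2).sum(axis=-1)).min()

rng = np.random.default_rng(2)
X = rng.normal(size=(50, 3)); X /= np.linalg.norm(X, axis=1, keepdims=True)
y = rng.choice([-1.0, 1.0], size=50)
print(cross_class_separation(X, y))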
We have the following proposition, which shows that Assumption 4.5 also implies the existence of a function in the NTRF function class, with a certain choice of R, that achieves ε training error.
Proposition 4.6. Under Assumption 4.5, if
R = Ω( n^{3/2} φ^{-1/2} log(n δ^{-1} ε^{-1}) ),  m = Ω̃( L^{22} n^{12} φ^{-4} ),
then ε_NTRF ≤ ε with probability at least 1 − δ.
Proposition 4.6 suggests that under Assumption 4.5, in order to guarantee ε_NTRF ≤ ε, the size of the NTRF function class needs to be Ω(n^{3/2}). Plugging this into Theorems 3.4 and 3.5 leads to vacuous bounds on the test error. This makes sense since Assumption 4.5 basically covers the “random label” setting, which is impossible to learn with small generalization error. Moreover, we would like to point out that our theoretical analysis leads to a sharper over-parameterization condition than that proved in Zou et al. (2019), i.e., m = Ω̃( n^{14} L^{16} φ^{-4} + n^{12} L^{16} φ^{-4} ε^{-1} ), if the network depth satisfies L ≤ Õ( n^{1/3} ∨ ε^{-1/6} ).
5 PROOF SKETCH OF THE MAIN THEORY
In this section, we introduce a key technical lemma in Section 5.1, based on which we provide a proof sketch of Theorem 3.3. The full proofs of all our results can be found in the appendix. ‡We have shown in the proof of Theorem 3.3 that m = Ω̃(R^8) (see (A.1) for more detail). §Specifically, Allen-Zhu et al. (2019b); Zou and Gu (2019) require that any two data points (rather than only data points from different classes) are separated by a positive distance. Zou and Gu (2019) show that this assumption is equivalent to those made in Du et al. (2019b;a), which require that the composite kernel matrix is strictly positive definite.
5.1 A KEY TECHNICAL LEMMA
Here we introduce a key technical lemma used in the proof of Theorem 3.3.
Our proof is based on the key observation that near initialization, the neural network function can be approximated by its first-order Taylor expansion. In the following, we first give the definition of the linear approximation error in a τ -neighborhood around initialization.
ε_app(τ) := sup_{i=1,...,n} sup_{W', W ∈ B(W^(0), τ)} | f_{W'}(x_i) − f_W(x_i) − ⟨∇f_W(x_i), W' − W⟩ |.
If all the iterates of GD stay inside a neighborhood around initialization with small linear approximation error, then we may expect that the training of neural networks should be similar to the training of the corresponding linear model, where standard optimization techniques can be applied. Motivated by this, we also give the following definition on the gradient upper bound of neural networks around initialization, which is related to the Lipschitz constant of the optimization objective function.
M(τ) := sup_{i=1,...,n} sup_{l=1,...,L} sup_{W ∈ B(W^(0), τ)} ‖∇_{W_l} f_W(x_i)‖_F.
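To make the two quantities ε_app(τ) and M(τ) tangible, the following Python sketch estimates them for the two-layer special case of Section 4.2 by replacing the suprema with maxima over randomly sampled perturbations inside the ball B(W^(0), τ); it therefore only yields lower estimates, and all names and the sampling scheme are our own illustrative assumptions rather than anything prescribed by the paper.

import numpy as np

def two_layer(W1, w2, x, m):
    h = np.maximum(W1 @ x, 0.0)
    f = np.sqrt(m) * w2 @ h
    gW1 = np.sqrt(m) * np.outer(w2 * (W1 @ x > 0), x)   # d f / d W1
    gw2 = np.sqrt(m) * h                                  # d f / d w2
    return f, gW1, gw2

def estimate_eps_app(X, W1_0, w2_0, tau, m, trials=200, seed=0):
    """Random-sampling lower estimates of eps_app(tau) and M(tau) for a two-layer ReLU net."""
    rng = np.random.default_rng(seed)
    eps_app, M = 0.0, 0.0
    for _ in range(trials):
        # two random points with per-layer Frobenius distance exactly tau from W^(0)
        d1 = rng.normal(size=W1_0.shape); d1 *= tau / np.linalg.norm(d1)
        d2 = rng.normal(size=w2_0.shape); d2 *= tau / np.linalg.norm(d2)
        e1 = rng.normal(size=W1_0.shape); e1 *= tau / np.linalg.norm(e1)
        e2 = rng.normal(size=w2_0.shape); e2 *= tau / np.linalg.norm(e2)
        W1, w2 = W1_0 + d1, w2_0 + d2
        W1p, w2p = W1_0 + e1, w2_0 + e2
        for x in X:
            f, gW1, gw2 = two_layer(W1, w2, x, m)
            fp, _, _ = two_layer(W1p, w2p, x, m)
            lin = f + np.sum(gW1 * (W1p - W1)) + gw2 @ (w2p - w2)
            eps_app = max(eps_app, abs(fp - lin))
            M = max(M, np.linalg.norm(gW1), np.linalg.norm(gw2))
    return eps_app, M

rng = np.random.default_rng(3)
d, m, n = 5, 256, 10
X = rng.normal(size=(n, d)); X /= np.linalg.norm(X, axis=1, keepdims=True)
W1_0 = rng.normal(scale=np.sqrt(2.0 / m), size=(m, d))
w2_0 = rng.normal(scale=np.sqrt(1.0 / m), size=m)
print(estimate_eps_app(X, W1_0, w2_0, tau=0.1, m=m))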
By definition, we can choose W* ∈ B(W^(0), R m^{-1/2}) such that n^{-1} ∑_{i=1}^n ℓ( y_i F_{W^(0),W*}(x_i) ) = ε_NTRF. Then we have the following lemma.
Lemma 5.1. Set η = O(L^{-1} M(τ)^{-2}). Suppose that W* ∈ B(W^(0), τ) and W^(t) ∈ B(W^(0), τ) for all 0 ≤ t ≤ t' − 1. Then it holds that
(1/t') ∑_{t=0}^{t'-1} L_S(W^(t)) ≤ [ ‖W^(0) − W*‖_F^2 − ‖W^(t') − W*‖_F^2 + 2 t' η ε_NTRF ] / [ t' η (3/2 − 4 ε_app(τ)) ].
Lemma 5.1 plays a central role in our proof. Specifically, if W^(t) ∈ B(W^(0), τ) for all t ≤ t', then Lemma 5.1 implies that the average training loss is of the same order as ε_NTRF as long as the linear approximation error ε_app(τ) is bounded by a positive constant. This is in contrast to the proof in Cao and Gu (2019), where ε_app(τ) appears as an additive term in the upper bound of the training loss, thus requiring ε_app(τ) = O(ε_NTRF) to achieve the same error bound as in Lemma 5.1. Since we can show that ε_app = Õ(m^{-1/6}) (see Section A.1), this suggests that m = Ω̃(1) is sufficient to make the average training loss of the same order as ε_NTRF.
Compared with the recent results for two-layer networks by Ji and Telgarsky (2020), Lemma 5.1 is proved with different techniques. In specific, the proof by Ji and Telgarsky (2020) relies on the 1-homogeneous property of the ReLU activation function, which limits their analysis to two-layer networks with fixed second layer weights. In comparison, our proof does not rely on homogeneity, and is purely based on the linear approximation property of neural networks and some specific properties of the loss function. Therefore, our proof technique can handle deep networks, and is potentially applicable to non-ReLU activation functions and other network architectures (e.g, Convolutional neural networks and Residual networks).
5.2 PROOF SKETCH OF THEOREM 3.3
Here we provide a proof sketch of Theorem 3.3. The proof consists of two steps: (i) showing that all T iterates stay close to initialization, and (ii) bounding the empirical loss achieved by gradient descent. Both of these steps are proved based on Lemma 5.1.
Proof sketch of Theorem 3.3. Recall that we choose W* ∈ B(W^(0), R m^{-1/2}) such that n^{-1} ∑_{i=1}^n ℓ( y_i F_{W^(0),W*}(x_i) ) = ε_NTRF. We set τ = Õ(L^{1/2} m^{-1/2} R), which is chosen slightly larger than m^{-1/2} R since Lemma 5.1 requires the region B(W^(0), τ) to include both W* and {W^(t)}_{t=0,...,t'}. Then by Lemmas 4.1 and B.3 in Cao and Gu (2019) we know that ε_app(τ) = Õ(τ^{4/3} m^{1/2} L^3) = Õ(R^{4/3} L^{11/3} m^{-1/6}). Therefore, we can set m = Ω̃(R^8 L^{22}) to ensure that ε_app(τ) ≤ 1/8.
Then we proceed to show that all iterates stay inside the region B(W^(0), τ). Since the L.H.S. of Lemma 5.1 is strictly positive and ε_app(τ) ≤ 1/8, we have for all t ≤ T,
‖W^(0) − W*‖_F^2 − ‖W^(t) − W*‖_F^2 ≥ −2 t η ε_NTRF,
which gives an upper bound on ‖W^(t) − W*‖_F. Then by the choice of η and T, the triangle inequality, and a simple induction argument, we see that ‖W^(t) − W^(0)‖_F ≤ m^{-1/2} R + √(2 T η ε_NTRF) = Õ(L^{1/2} m^{-1/2} R), which verifies that W^(t) ∈ B(W^(0), τ) for t = 0, . . . , T − 1. The second step is to show that GD can find a neural network with at most 3 ε_NTRF training loss within T iterations. To show this, by the bound given in Lemma 5.1 with ε_app ≤ 1/8, we drop the term ‖W^(t) − W*‖_F^2 and rearrange the inequality to obtain
(1/T) ∑_{t=0}^{T-1} L_S(W^(t)) ≤ ‖W^(0) − W*‖_F^2 / (ηT) + 2 ε_NTRF.
We see that T is large enough to ensure that the first term in the bound above is smaller than ε_NTRF. This implies that the best iterate among W^(0), . . . , W^(T−1) achieves an empirical loss of at most 3 ε_NTRF.
6 CONCLUSION
In this paper, we established global convergence and generalization error bounds of GD and SGD for training deep ReLU networks for the binary classification problem. We show that a network width condition that is polylogarithmic in the sample size n and the inverse of the target error ε^{-1} is sufficient to guarantee the learning of deep ReLU networks. Our results resolve an open question raised in Ji and Telgarsky (2020).
ACKNOWLEDGEMENT
We would like to thank the anonymous reviewers for their helpful comments. ZC, YC and QG are partially supported by the National Science Foundation CAREER Award 1906169, IIS-2008981 and Salesforce Deep Learning Research Award. DZ is supported by the Bloomberg Data Science Ph.D. Fellowship. The views and conclusions contained in this paper are those of the authors and should not be interpreted as representing any funding agencies.
A PROOF OF MAIN THEOREMS
In this section we provide the full proof of Theorems 3.3, 3.4 and 3.5.
A.1 PROOF OF THEOREM 3.3
We first provide the following lemma, which is useful in the subsequent proof.
Lemma A.1 (Lemmas 4.1 and B.3 in Cao and Gu (2019)). There exists an absolute constant κ such that, with probability at least 1 − O(nL^2) exp[ −Ω(m τ^{2/3} L) ], for any τ ≤ κ L^{-6} [log(m)]^{-3/2}, it holds that
ε_app(τ) ≤ Õ( τ^{4/3} L^3 m^{1/2} ),  M(τ) ≤ Õ( √m ).
Proof of Theorem 3.3. Recall that W* is chosen such that
(1/n) ∑_{i=1}^n ℓ( y_i F_{W^(0),W*}(x_i) ) = ε_NTRF
and W* ∈ B(W^(0), R m^{-1/2}). Note that to apply Lemma 5.1, we need the region B(W^(0), τ) to include both W* and {W^(t)}_{t=0,...,t'}. This motivates us to set τ = Õ(L^{1/2} m^{-1/2} R), which is slightly larger than m^{-1/2} R. With this choice of τ, by Lemma A.1 we have ε_app(τ) = Õ(τ^{4/3} m^{1/2} L^3) = Õ(R^{4/3} L^{11/3} m^{-1/6}). Therefore, we can set
m = Ω̃(R^8 L^{22})   (A.1)
to ensure that ε_app(τ) ≤ 1/8, where Ω̃(·) hides polylogarithmic dependencies on the network depth L, the NTRF function class size R, and the failure probability parameter δ. Then by Lemma 5.1, with probability at least 1 − δ we have
‖W^(0) − W*‖_F^2 − ‖W^(t') − W*‖_F^2 ≥ η ∑_{t=0}^{t'-1} L_S(W^(t)) − 2 t' η ε_NTRF   (A.2)
as long as W^(0), . . . , W^(t'−1) ∈ B(W^(0), τ). In the following proof we choose η = Θ(L^{-1} m^{-1}) and T = ⌈ L R^2 m^{-1} η^{-1} ε_NTRF^{-1} ⌉.
We prove the theorem in two steps: (1) we show that all iterates {W^(0), · · · , W^(T)} stay inside the region B(W^(0), τ); and (2) we show that GD can find a neural network with at most 3 ε_NTRF training loss within T iterations.
All iterates stay inside B(W^(0), τ). We prove this part by induction. Specifically, given t' ≤ T, we assume the hypothesis W^(t) ∈ B(W^(0), τ) holds for all t < t' and prove that W^(t') ∈ B(W^(0), τ). First, it is clear that W^(0) ∈ B(W^(0), τ). Then by (A.2) and the fact that L_S(W) ≥ 0, we have
‖W^(t') − W*‖_F^2 ≤ ‖W^(0) − W*‖_F^2 + 2 η t' ε_NTRF.
Since T = ⌈ L R^2 m^{-1} η^{-1} ε_NTRF^{-1} ⌉ and W* ∈ B(W^(0), R · m^{-1/2}), we have
∑_{l=1}^L ‖W^(t')_l − W*_l‖_F^2 = ‖W^(t') − W*‖_F^2 ≤ C L R^2 m^{-1},
where C ≥ 4 is an absolute constant. Therefore, by the triangle inequality, we further have the following for all l ∈ [L]:
‖W^(t')_l − W^(0)_l‖_F ≤ ‖W^(t')_l − W*_l‖_F + ‖W^(0)_l − W*_l‖_F ≤ √(CL) R m^{-1/2} + R m^{-1/2} ≤ 2√(CL) R m^{-1/2}.   (A.3)
Therefore, it is clear that ‖W^(t')_l − W^(0)_l‖_F ≤ 2√(CL) R m^{-1/2} ≤ τ based on our choice of τ previously. This completes the proof of the first part.
Convergence of gradient descent. (A.2) implies
‖W^(0) − W*‖_F^2 − ‖W^(T) − W*‖_F^2 ≥ η ( ∑_{t=0}^{T-1} L_S(W^(t)) − 2 T ε_NTRF ).
Dividing both sides by ηT, we get
(1/T) ∑_{t=0}^{T-1} L_S(W^(t)) ≤ ‖W^(0) − W*‖_F^2 / (ηT) + 2 ε_NTRF ≤ L R^2 m^{-1} / (ηT) + 2 ε_NTRF ≤ 3 ε_NTRF,
where the second inequality is by the fact that W* ∈ B(W^(0), R · m^{-1/2}) and the last inequality is by our choices of T and η, which ensure that Tη ≥ L R^2 m^{-1} ε_NTRF^{-1}. Notice that T = ⌈ L R^2 m^{-1} η^{-1} ε_NTRF^{-1} ⌉ = O( L^2 R^2 ε_NTRF^{-1} ). This completes the proof of the second part, and hence the proof of the theorem.
A.2 PROOF OF THEOREM 3.4
Following Cao and Gu (2020), we first introduce the definition of surrogate loss of the network, which is defined by the derivative of the loss function.
Definition A.2. We define the empirical surrogate error E_S(W) and the population surrogate error E_D(W) as follows:
E_S(W) := −(1/n) ∑_{i=1}^n ℓ'[ y_i · f_W(x_i) ],   E_D(W) := E_{(x,y)∼D}{ −ℓ'[ y · f_W(x) ] }.
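Since the cross-entropy loss satisfies −ℓ'(z) = 1/(1 + e^z), the empirical surrogate error is just an average of sigmoids of the negative margins. The tiny Python sketch below illustrates this and the inequality −ℓ'(z) ≤ ℓ(z) used later; it is only a numerical illustration with names of our choosing.

import numpy as np

def cross_entropy(z):
    return np.log1p(np.exp(-z))                    # ell(z) = log(1 + exp(-z))

def empirical_surrogate_error(margins):
    """E_S(W) = -(1/n) sum_i ell'(y_i f_W(x_i)); for cross-entropy, -ell'(z) = 1 / (1 + exp(z))."""
    return np.mean(1.0 / (1.0 + np.exp(margins)))

margins = np.array([2.0, -0.5, 0.1, 3.0])          # y_i * f_W(x_i)
print(empirical_surrogate_error(margins))           # always in (0, 1)
print(np.mean(cross_entropy(margins)))              # and bounded below by ... no larger than the training loss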
The following lemma gives a uniform-convergence type result for E_S(W), utilizing the fact that −ℓ'(·) is bounded and Lipschitz continuous.
Lemma A.3. For any R̃, δ > 0, suppose that m = Ω̃(L^{12} R̃^2) · [log(1/δ)]^{3/2}. Then with probability at least 1 − δ, it holds that
|E_D(W) − E_S(W)| ≤ Õ( min{ 4^L L^{3/2} R̃ √(m/n),  L R̃/√n + L^3 R̃^{4/3}/m^{1/6} } ) + O( √(log(1/δ)/n) )
for all W ∈ B(W^(0), R̃ · m^{-1/2}).
We are now ready to prove Theorem 3.4, which combines the trajectory distance analysis in the proof of Theorem 3.3 with Lemma A.3.
Proof of Theorem 3.4. With exactly the same proof as Theorem 3.3, by (A.3) and induction we have W^(0), W^(1), . . . , W^(T) ∈ B(W^(0), R̃ m^{-1/2}) with R̃ = O(√L R). Therefore, by Lemma A.3 we have
|E_D(W^(t)) − E_S(W^(t))| ≤ Õ( min{ 4^L L^2 R √(m/n),  L^{3/2} R/√n + L^{11/3} R^{4/3}/m^{1/6} } ) + O( √(log(1/δ)/n) )
for all t = 0, 1, . . . , T. Note that we have 1{z < 0} ≤ −2ℓ'(z). Therefore,
L^{0-1}_D(W^(t)) ≤ 2 E_D(W^(t)) ≤ 2 L_S(W^(t)) + Õ( min{ 4^L L^2 R √(m/n),  L^{3/2} R/√n + L^{11/3} R^{4/3}/m^{1/6} } ) + O( √(log(1/δ)/n) )
for t = 0, 1, . . . , T, where the last inequality is by E_S(W) ≤ L_S(W), which holds because −ℓ'(z) ≤ ℓ(z) for all z ∈ R. This finishes the proof.
A.3 PROOF OF THEOREM 3.5
In this section we provide the full proof of Theorem 3.5. We first give the following result, which is the counterpart of Lemma 5.1 for SGD. Again we pick W* ∈ B(W^(0), R m^{-1/2}) such that the loss of the corresponding NTRF model F_{W^(0),W*}(x) achieves ε_NTRF.
Lemma A.4. Set η = O(L^{-1} M(τ)^{-2}). Suppose that W* ∈ B(W^(0), τ) and W^(n') ∈ B(W^(0), τ) for all 0 ≤ n' ≤ n − 1. Then it holds that
‖W^(0) − W*‖_F^2 − ‖W^(n') − W*‖_F^2 ≥ (3/2 − 4 ε_app(τ)) η ∑_{i=1}^{n'} L_i(W^(i−1)) − 2 n η ε_NTRF.
We introduce a surrogate loss E_i(W) = −ℓ'[ y_i · f_W(x_i) ] and its population version E_D(W) = E_{(x,y)∼D}[ −ℓ'( y · f_W(x) ) ], which have been used in (Ji and Telgarsky, 2018; Cao and Gu, 2019; Nitanda and Suzuki, 2019; Ji and Telgarsky, 2020). Our proof is based on the application of Lemma A.4 and an online-to-batch conversion argument (Cesa-Bianchi et al., 2004; Cao and Gu, 2019; Ji and Telgarsky, 2020).
Proof of Theorem 3.5. Recall that W* is chosen such that
(1/n) ∑_{i=1}^n ℓ( y_i F_{W^(0),W*}(x_i) ) = ε_NTRF
and W* ∈ B(W^(0), R m^{-1/2}). To apply Lemma A.4, we need the region B(W^(0), τ) to include both W* and {W^(t)}_{t=0,...,t'}. This motivates us to set τ = Õ(L^{1/2} m^{-1/2} R), which is slightly larger than m^{-1/2} R. With this choice of τ, by Lemma A.1 we have ε_app(τ) = Õ(τ^{4/3} m^{1/2} L^3) = Õ(R^{4/3} L^{11/3} m^{-1/6}). Therefore, we can set
m = Ω̃(R^8 L^{22})
to ensure that ε_app(τ) ≤ 1/8, where Ω̃(·) hides polylogarithmic dependencies on the network depth L, the NTRF function class size R, and the failure probability parameter δ.
Then by Lemma A.4, we have with probability at least 1 − δ,
‖W^(0) − W*‖_F^2 − ‖W^(n') − W*‖_F^2 ≥ η ∑_{i=1}^{n'} L_i(W^(i−1)) − 2 n η ε_NTRF   (A.4)
as long as W^(0), . . . , W^(n'−1) ∈ B(W^(0), τ).
We then prove Theorem 3.5 in two steps: 1) all iterates stay inside BpWp0q, τq; and 2) convergence of online SGD.
All iterates stay inside B(W^(0), τ). Similar to the proof of Theorem 3.3, we prove this part by induction. Assuming W^(i) ∈ B(W^(0), τ) for all i ≤ n' − 1, by (A.4) we have
‖W^(n') − W*‖_F^2 ≤ ‖W^(0) − W*‖_F^2 + 2 n η ε_NTRF ≤ L R^2 · m^{-1} + 2 n η ε_NTRF,
where the last inequality is by W* ∈ B(W^(0), R m^{-1/2}). Then by the triangle inequality, we further get
‖W^(n')_l − W^(0)_l‖_F ≤ ‖W^(n')_l − W*_l‖_F + ‖W*_l − W^(0)_l‖_F ≤ ‖W^(n') − W*‖_F + ‖W*_l − W^(0)_l‖_F ≤ O( √L R m^{-1/2} + √(n η ε_NTRF) ).
Then by our choice of η = Θ( m^{-1} · (L R^2 n^{-1} ε_NTRF^{-1} ∧ L^{-1}) ), we have ‖W^(n') − W^(0)‖_F ≤ 2√L R m^{-1/2} ≤ τ. This completes the proof of the first part.
Convergence of online SGD. By (A.4), we have
‖W^(0) − W*‖_F^2 − ‖W^(n) − W*‖_F^2 ≥ η ( ∑_{i=1}^n L_i(W^(i−1)) − 2 n ε_NTRF ).
Dividing both sides by ηn and rearranging terms, we get
(1/n) ∑_{i=1}^n L_i(W^(i−1)) ≤ [ ‖W^(0) − W*‖_F^2 − ‖W^(n) − W*‖_F^2 ] / (ηn) + 2 ε_NTRF ≤ L^2 R^2 / n + 3 ε_NTRF,
where the second inequality follows from the facts that W* ∈ B(W^(0), R · m^{-1/2}) and η = Θ( m^{-1} · (L R^2 n^{-1} ε_NTRF^{-1} ∧ L^{-1}) ). By Lemma 4.3 in (Ji and Telgarsky, 2020) and the fact that E_i(W^(i−1)) ≤ L_i(W^(i−1)), we have
(1/n) ∑_{i=1}^n L^{0-1}_D(W^(i−1)) ≤ (2/n) ∑_{i=1}^n E_D(W^(i−1)) ≤ (8/n) ∑_{i=1}^n E_i(W^(i−1)) + 8 log(1/δ)/n ≤ 8 L^2 R^2 / n + 8 log(1/δ)/n + 24 ε_NTRF.
This completes the proof of the second part.
B PROOF OF RESULTS IN SECTION 4
B.1 PROOF OF PROPOSITION 4.2
We first provide the following lemma, which gives an upper bound on the neural network output at initialization.
Lemma B.1 (Lemma 4.4 in Cao and Gu (2019)). Under Assumption 3.1, if m ≥ C̄ L log(nL/δ) for some absolute constant C̄, then with probability at least 1 − δ we have
|f_{W^(0)}(x_i)| ≤ C √(log(n/δ))
for some absolute constant C.
Proof of Proposition 4.2. Under Assumption 4.1, we can find a collection of matrices U* = {U*_1, · · · , U*_L} with ∑_{l=1}^L ‖U*_l‖_F^2 = 1 such that y_i ⟨∇f_{W^(0)}(x_i), U*⟩ ≥ m^{1/2} γ for at least a 1 − ρ fraction of the training data. By Lemma B.1, for all i ∈ [n] we have |f_{W^(0)}(x_i)| ≤ C √(log(n/δ)) for some absolute constant C. Then for any positive constant λ, we have for at least a 1 − ρ fraction of the data,
y_i ( f_{W^(0)}(x_i) + ⟨∇f_{W^(0)}, λU*⟩ ) ≥ m^{1/2} λ γ − C √(log(n/δ)).
For this fraction of the data, we can set
λ = C' [ log^{1/2}(n/δ) + log(1/ε) ] / (m^{1/2} γ),
where C' is an absolute constant, and get
m^{1/2} λ γ − C √(log(n/δ)) ≥ log(1/ε).
Now we let W* = W^(0) + λU*. By the choice of R in Proposition 4.2, we have W* ∈ B(W^(0), R · m^{-1/2}). The above inequality implies that for at least a 1 − ρ fraction of the data, we have ℓ( y_i F_{W^(0),W*}(x_i) ) ≤ ε. For the remaining data, we have
y_i ( f_{W^(0)}(x_i) + ⟨∇f_{W^(0)}, λU*⟩ ) ≥ −C √(log(n/δ)) − λ ‖∇f_{W^(0)}‖_2^2 ≥ −C_1 R
for some absolute positive constant C_1, where the last inequality follows from the fact that ‖∇f_{W^(0)}‖_2 = Õ(m^{1/2}) (see Lemma A.1 for detail). Then, noting that we use the cross-entropy loss, it follows that for this fraction of the training data we have ℓ( y_i F_{W^(0),W*}(x_i) ) ≤ C_2 R for some constant C_2. Combining the results for these two fractions of the training data, we conclude
ε_NTRF ≤ n^{-1} ∑_{i=1}^n ℓ( y_i F_{W^(0),W*}(x_i) ) ≤ (1 − ρ) ε + ρ · O(R).
This completes the proof.
B.2 PROOF OF PROPOSITION 4.4
Proof of Proposition 4.4. We are going to prove that Assumption 4.3 implies the existence of a good function in the NTRF function class.
By Definition 3.2 and the definition of cross-entropy loss, our goal is to prove that there exists a collection of matrices W “ tW1,W2u satisfying maxt}W1 ´Wp0q1 }F , }W2 ´W p0q 2 }2u ď R ¨m´1{2 such that yi ¨ “ fWp0qpxiq ` x∇W1fWp0q ,W1 ´W p0q 1 y ` x∇W2fWp0q ,W2 ´W p0q 2 y ‰
ě logp2{ q. We first consider∇W1fWp0qpxiq, which has the form
p∇W1fWp0qpxiq ˘ j “ m1{2 ¨ wp0q2,j ¨ σ 1pxwp0q1,j ,xiyq ¨ xi.
Note that w^(0)_{2,j} and w^(0)_{1,j} are independently generated from N(0, 1/m) and N(0, 2I/m) respectively; thus we have P( |w^(0)_{2,j}| ≥ 0.47 m^{-1/2} ) ≥ 1/2. By Hoeffding's inequality, with probability at least 1 − exp(−m/8) there are at least m/4 nodes, whose index set is denoted by S, satisfying |w^(0)_{2,j}| ≥ 0.47 m^{-1/2}. We then focus only on the nodes in the set S. Note that W^(0)_1 and W^(0)_2 are independently generated. Then by Assumption 4.3 and Hoeffding's inequality, there exists a function u(·): R^d → R^d such that with probability at least 1 − δ_1,
(1/|S|) ∑_{j∈S} y_i · ⟨u(w^(0)_{1,j}), x_i⟩ · σ'(⟨w^(0)_{1,j}, x_i⟩) ≥ γ − √( 2 log(1/δ_1) / |S| ).
Define v_j = u(w^(0)_{1,j}) / w_{2,j} if |w_{2,j}| ≥ 0.47 m^{-1/2} and v_j = 0 otherwise. Then we have
∑_{j=1}^m y_i · w^(0)_{2,j} · ⟨v_j, x_i⟩ · σ'(⟨w^(0)_{1,j}, x_i⟩) = ∑_{j∈S} y_i · ⟨u(w^(0)_{1,j}), x_i⟩ · σ'(⟨w^(0)_{1,j}, x_i⟩) ≥ |S| γ − √( 2|S| log(1/δ_1) ).
Setting δ_1 = δ/(2n) and applying a union bound, we have with probability at least 1 − δ/2,
∑_{j=1}^m y_i · w^(0)_{2,j} · ⟨v_j, x_i⟩ · σ'(⟨w^(0)_{1,j}, x_i⟩) ≥ |S| γ − √( 2|S| log(2n/δ) ).
Therefore, note that with probability at least 1 − exp(−m/8) we have |S| ≥ m/4. Moreover, in Assumption 4.3, since y_i ∈ {±1} and |σ'(·)|, ‖u(·)‖_2, ‖x_i‖_2 ≤ 1 for i = 1, . . . , n, we see that γ ≤ 1. Then if m ≥ 32 log(n/δ)/γ^2, with probability at least 1 − δ/2 − exp( −4 log(n/δ)/γ^2 ) ≥ 1 − δ,
∑_{j=1}^m y_i · w^(0)_{2,j} · ⟨v_j, x_i⟩ · σ'(⟨w^(0)_{1,j}, x_i⟩) ≥ |S| γ / 2.
Let U = (v_1, v_2, · · · , v_m)^⊤ / √(m|S|). Then we have
y_i ⟨∇_{W_1} f_{W^(0)}(x_i), U⟩ = (1/√|S|) ∑_{j=1}^m y_i · w^(0)_{2,j} · ⟨v_j, x_i⟩ · σ'(⟨w^(0)_{1,j}, x_i⟩) ≥ √|S| γ / 2 ≥ m^{1/2} γ / 4,
where the last inequality is by the fact that |S| ≥ m/4. Besides, note that by concentration and a Gaussian tail bound, we have |f_{W^(0)}(x_i)| ≤ C log(n/δ) for some absolute constant C. Therefore, letting W_1 = W^(0)_1 + 4( log(2/ε) + C log(n/δ) ) m^{-1/2} U / γ and W_2 = W^(0)_2, we have
y_i · [ f_{W^(0)}(x_i) + ⟨∇_{W_1} f_{W^(0)}, W_1 − W^(0)_1⟩ + ⟨∇_{W_2} f_{W^(0)}, W_2 − W^(0)_2⟩ ] ≥ log(2/ε).   (B.1)
Since ‖u(·)‖_2 ≤ 1, we have ‖U‖_F ≤ 1/0.47 ≤ 2.2. Therefore, we further have ‖W_1 − W^(0)_1‖_F ≤ 8.8 γ^{-1} ( log(2/ε) + C log(n/δ) ) · m^{-1/2}. This implies that W ∈ B(W^(0), R · m^{-1/2}) with R = O( log(n/(δε)) / γ ). Applying the inequality ℓ(log(2/ε)) ≤ ε to (B.1) gives ℓ( y_i · F_{W^(0),W}(x_i) ) ≤ ε for all i = 1, . . . , n. This completes the proof.
B.3 PROOF OF PROPOSITION 4.6
Based on our theoretical analysis, the major goal is to show that there exist certain choices of R and m such that the best NTRF model in the function class F(W^(0), R) can achieve ε training error. In this proof, we prove a stronger result by showing that, for the quantities of R and m specified in Proposition 4.6, there exists an NTRF model with parameter W* that satisfies n^{-1} ∑_{i=1}^n ℓ( y_i F_{W^(0),W*}(x_i) ) ≤ ε. In order to do so, we consider training the NTRF model with a different surrogate loss function. Specifically, we consider the squared hinge loss ℓ̃(x) = ( max{λ − x, 0} )^2, where λ denotes the target margin. In the later proof, we choose λ = log(1/ε) + 1 so that the condition ℓ̃(x) ≤ 1 guarantees that x ≥ log(1/ε). Moreover, we consider using gradient flow, i.e., gradient descent with infinitesimal step size, to train the NTRF model. Therefore, in the remaining part of the proof, we consider optimizing the NTRF parameter W with the loss function
L̃_S(W) = (1/n) ∑_{i=1}^n ℓ̃( y_i F_{W^(0),W}(x_i) ).
Moreover, for simplicity, we only consider optimizing the parameters in the last hidden layer (i.e., W_{L−1}). Then the gradient flow can be formulated as
dW_{L−1}(t)/dt = −∇_{W_{L−1}} L̃_S(W(t)),   dW_l(t)/dt = 0 for any l ≠ L − 1.
Note that the NTRF model is a linear model; thus by Definition 3.2 we have
∇_{W_{L−1}} L̃_S(W(t)) = (1/n) ∑_{i=1}^n y_i ℓ̃'( y_i F_{W^(0),W(t)}(x_i) ) · ∇_{W_{L−1}} F_{W^(0),W(t)}(x_i) = (1/n) ∑_{i=1}^n y_i ℓ̃'( y_i F_{W^(0),W(t)}(x_i) ) · ∇_{W^(0)_{L−1}} f_{W^(0)}(x_i).   (B.2)
Then it is clear that the per-example gradient directions ∇_{W^(0)_{L−1}} f_{W^(0)}(x_i) appearing in ∇_{W_{L−1}} L̃_S(W(t)) are fixed throughout the optimization. In order to prove the convergence of gradient flow and characterize the quantity of R, recall that Lemma B.1 already gives an upper bound on the NTRF model output at initialization.
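For concreteness, the following Python sketch runs an Euler-discretized version of this gradient flow on a generic linear (NTRF-style) model with the squared hinge loss; the feature matrix Phi stands in for the flattened initialization gradients, and the step size, iteration count, and all names are our own illustrative assumptions.

import numpy as np

def sq_hinge(z, lam):             # tilde-ell(z) = max(lam - z, 0)^2
    return np.maximum(lam - z, 0.0) ** 2

def sq_hinge_grad(z, lam):        # tilde-ell'(z) = -2 * max(lam - z, 0) = -2 * sqrt(tilde-ell(z))
    return -2.0 * np.maximum(lam - z, 0.0)

def gradient_flow_last_layer(Phi, y, f0, lam, step=1e-3, iters=20000):
    """Euler-discretized gradient flow on the linear model F(x_i) = f0_i + <Phi_i, v>,
    optimizing only v; Phi_i plays the role of the flattened gradient of f_{W^(0)} at x_i
    with respect to W_{L-1}."""
    n, p = Phi.shape
    v = np.zeros(p)
    for _ in range(iters):
        z = y * (f0 + Phi @ v)                       # margins of the linear NTRF-style model
        g = (y * sq_hinge_grad(z, lam))[:, None] * Phi
        v -= step * g.mean(axis=0)
    return v, sq_hinge(y * (f0 + Phi @ v), lam).mean()

rng = np.random.default_rng(4)
n, p = 30, 200
Phi = rng.normal(size=(n, p))
y = rng.choice([-1.0, 1.0], size=n)
f0 = 0.1 * rng.normal(size=n)                        # model outputs at initialization
v, loss = gradient_flow_last_layer(Phi, y, f0, lam=np.log(1.0 / 0.01) + 1.0)
print(loss, np.linalg.norm(v))                       # final squared-hinge loss and distance moved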
We then provide the following lemma, which characterizes a lower bound on the Frobenius norm of the partial gradient ∇_{W_{L−1}} L̃_S(W).
Lemma B.2 (Lemma B.5 in Zou et al. (2019)). Under Assumptions 3.1 and 4.5, if m = Ω̃(n^2 φ^{-1}), then for all t ≥ 0, with probability at least 1 − exp( −O(mφ/n) ), there exists a positive constant C such that
‖∇_{W_{L−1}} L̃_S(W(t))‖_F^2 ≥ (C m φ / n^5) [ ∑_{i=1}^n ℓ̃'( y_i F_{W^(0),W(t)}(x_i) ) ]^2.
We slightly modify the original version of this lemma since we use a different model (we consider the NTRF model while Zou et al. (2019) consider the neural network model). However, by (B.2), the gradient ∇L̃_S(W) can be regarded as a gradient of the neural network model at initialization (i.e., ∇_{W_{L−1}} L_S(W^(0))), so the lemma remains valid. Now we are ready to present the proof.
Proof of Proposition 4.6. Recall that we only consider training the last hidden layer weights, i.e., W_{L−1}, via gradient flow with the squared hinge loss, and our goal is to prove that gradient flow is able to find an NTRF model within the function class F(W^(0), R) around the initialization, i.e., achieving n^{-1} ∑_{i=1}^n ℓ( y_i F_{W^(0),W*}(x_i) ) ≤ ε. Let W(t) be the weights at time t; gradient flow implies that
dL̃_S(W(t))/dt = −‖∇_{W_{L−1}} L̃_S(W(t))‖_F^2 ≤ −(C m φ / n^5) ( ∑_{i=1}^n ℓ̃'( y_i F_{W^(0),W(t)}(x_i) ) )^2 = −4 C m φ L̃_S(W(t)) / n^3,
where the first equality is due to the fact that we only train the last hidden layer, the inequality is by Lemma B.2, and the last equality follows from the fact that ℓ̃'(·) = −2 √(ℓ̃(·)). Solving the above differential inequality gives
L̃_S(W(t)) ≤ L̃_S(W(0)) · exp( −4 C m φ t / n^3 ).   (B.3)
Then, setting T = O( n^3 m^{-1} φ^{-1} · log( L̃_S(W(0)) / ε_1 ) ) with ε_1 = 1/n, we have L̃_S(W(T)) ≤ ε_1. It then follows that ℓ̃( y_i F_{W^(0),W(T)}(x_i) ) ≤ 1, which implies that y_i F_{W^(0),W(T)}(x_i) ≥ log(1/ε) and thus n^{-1} ∑_{i=1}^n ℓ( y_i F_{W^(0),W(T)}(x_i) ) ≤ ε. Therefore, W(T) is exactly the NTRF model we are looking for.
The next step is to characterize the distance between W(T) and W(0) in order to characterize the quantity of R. Since ‖∇_{W_{L−1}} L̃_S(W(t))‖_F^2 ≥ 4 C m φ L̃_S(W(t)) / n^3, we have
d√(L̃_S(W(t)))/dt = −‖∇_{W_{L−1}} L̃_S(W(t))‖_F^2 / ( 2 √(L̃_S(W(t))) ) ≤ −‖∇_{W_{L−1}} L̃_S(W(t))‖_F · C^{1/2} m^{1/2} φ^{1/2} / n^{3/2}.
Taking the integral on both sides and rearranging terms, we have
∫_{t=0}^T ‖∇_{W_{L−1}} L̃_S(W(t))‖_F dt ≤ n^{3/2} / (C^{1/2} m^{1/2} φ^{1/2}) · ( √(L̃_S(W(0))) − √(L̃_S(W(T))) ).
Since the L.H.S. of the above inequality is an upper bound on ‖W(t) − W(0)‖_F, we have for any t ≥ 0,
‖W(t) − W(0)‖_F ≤ n^{3/2} / (C^{1/2} m^{1/2} φ^{1/2}) · √(L̃_S(W(0))) = O( n^{3/2} log( n/(δε) ) / (m^{1/2} φ^{1/2}) ),
where the second inequality is by Lemma B.1 and our choice of λ = log(1/ε) + 1. This implies that there exists a point W* within the class F(W^(0), R) with
R = O( n^{3/2} log( n/(δε) ) / φ^{1/2} )
such that
ε_NTRF := n^{-1} ∑_{i=1}^n ℓ( y_i F_{W^(0),W*}(x_i) ) ≤ ε.
Then by Theorem 3.3, and more specifically (A.1), we can compute the minimal required neural network width as follows:
m = Ω̃(R^8 L^{22}) = Ω̃( L^{22} n^{12} / φ^4 ).
This completes the proof.
C PROOF OF TECHNICAL LEMMAS
Here we provide the proof of Lemmas 5.1, A.3 and A.4.
C.1 PROOF OF LEMMA 5.1
The detailed proof of Lemma 5.1 is given as follows.
Proof of Lemma 5.1. Based on the update rule of gradient descent, i.e., W^(t+1) = W^(t) − η ∇_W L_S(W^(t)), we have the following calculation:
‖W^(t) − W*‖_F^2 − ‖W^(t+1) − W*‖_F^2 = (2η/n) ∑_{i=1}^n ⟨W^(t) − W*, ∇_W L_i(W^(t))⟩ − η^2 ∑_{l=1}^L ‖∇_{W_l} L_S(W^(t))‖_F^2 =: I_1 − I_2,   (C.1)
where the equation follows from the fact that LSpWptqq “ n´1 řn i“1 LipWptqq. In what follows, we first bound the term I1 on the R.H.S. of (C.1) by approximating the neural network functions with linear models. By assumption, for t “ 0, . . . , t1 ´ 1, Wptq,W˚ P BpWp0q, τq. Therefore by the definition of apppτq,
y_i · ⟨∇f_{W^(t)}(x_i), W^(t) − W*⟩ ≤ y_i · ( f_{W^(t)}(x_i) − f_{W*}(x_i) ) + ε_app(τ).   (C.2)
Moreover, we also have
0 ≤ y_i · ( f_{W*}(x_i) − f_{W^(0)}(x_i) − ⟨∇f_{W^(0)}(x_i), W* − W^(0)⟩ ) + ε_app(τ) = y_i · ( f_{W*}(x_i) − F_{W^(0),W*}(x_i) ) + ε_app(τ),   (C.3)
where the equation follows from the definition of F_{W^(0),W*}(x). Adding (C.3) to (C.2) and canceling the terms y_i · f_{W*}(x_i), we obtain
y_i · ⟨∇f_{W^(t)}(x_i), W^(t) − W*⟩ ≤ y_i · ( f_{W^(t)}(x_i) − F_{W^(0),W*}(x_i) ) + 2 ε_app(τ).   (C.4)
We can now give a lower bound on first term on the R.H.S. of (C.1). For i “ 1, . . . , n, applying the chain rule on the loss function gradients and utilizing (C.4), we have
xWptq ´W˚,∇WLipWptqqy “ `1 ` yifWptqpxiq ˘ ¨ yi ¨ xWptq ´W˚,∇WfWptqpxiqy ě `1 `
yifWptqpxiq ˘ ¨ ` yifWptqpxiq ´ yifW˚pxiq ` 2 apppτq ˘
ě p1´ 2 apppτqq` ` yifWptqpxiq ˘ ´ ` ` yiFWp0q,W˚pxiq ˘ , (C.5)
where the first inequality is by the fact that `1 ` yifWptqpxiq ˘ ă 0, the second inequality is by convexity of `p¨q and the fact that ´`1 `
yifWptqpxiq ˘ ď ` ` yifWptqpxiq ˘ .
We now proceed to bound the term I2 on the R.H.S. of (C.1). Note that we have `1p¨q ă 0, and therefore the Frobenius norm of the gradient∇WlLSpWptqq can be upper bounded as follows,
}∇WlLSpWptqq}F “ › › ›
›
1
n
n ÿ i“1 `1 ` yifWptqpxiq ˘ ∇WlfWptqpxiq › › › › F
ď 1 n
n ÿ i“1 ´`1 ` yifWptqpxiq ˘ ¨ }∇WlfWptqpxiq}F ,
where the inequality follows by triangle inequality. We now utilize the fact that cross-entropy loss satisfies the inequalities ´`1p¨q ď `p¨q and ´`1p¨q ď 1. Therefore by definition of Mpτq, we have
L ÿ l“1 }∇WlLSpWptqq}2F ď O ` LMpτq2 ˘ ¨ ˆ 1 n n ÿ i“1 ´`1 ` yifWptqpxiq ˘ ˙2
ď O ` LMpτq2 ˘ ¨ LSpWptqq. (C.6)
Then we can plug (C.5) and (C.6) into (C.1) and obtain
}Wptq ´W˚}2F ´ }Wpt`1q ´W˚}2F
ě 2η n
n ÿ
i“1
”
p1´ 2 apppτqq` ` yifWptqpxiq ˘ ´ ` ` yiFWp0q,W˚pxiq ˘
ı
´O ` η2LMpτq2 ˘ ¨ LSpWptqq
ě „ 3
2 ´ 4 apppτq
ηLSpWptqq ´ 2η
n
n ÿ i“1 ` ` yiFWp0q,W˚pxiq ˘ ,
where the last inequality is by η “ OpL´1Mpτq´2q and merging the third term on the second line into the first term. Taking telescope sum from t “ 0 to t “ t1 ´ 1 and plugging in the definition 1 n řn i“1 ` ` yiFWp0q,W˚pxiq ˘ “ NTRF completes the proof.
C.2 PROOF OF LEMMA A.3
Proof of Lemma A.3. We first denote W “ BpWp0q, rR ¨ m´1{2q, and define the corresponding neural network function class and surrogate loss function class as F “ tfWpxq : W P Wu and G “ t´`ry ¨ fWpxqs : W PWu respectively. By standard uniform convergence results in terms of empirical Rademacher complexity (Bartlett and Mendelson, 2002; Mohri et al., 2018; Shalev-Shwartz and Ben-David, 2014), with probability at least 1´ δ we have
sup_{W ∈ W} |E_S(W) − E_D(W)| = sup_{W ∈ W} | −(1/n) ∑_{i=1}^n ℓ'[ y_i · f_W(x_i) ] + E_{(x,y)∼D} ℓ'[ y · f_W(x) ] | ≤ 2 R̂_n(G) + C_1 √( log(1/δ)/n ),
where C1 is an absolute constant, and
R̂_n(G) = E_{ξ_i ∼ Unif({±1})} [ sup_{W ∈ W} (1/n) ∑_{i=1}^n ξ_i ℓ'[ y_i · f_W(x_i) ] ]
is the empirical Rademacher complexity of the function class G. We now provide two bounds on pRnpGq, whose combination gives the final result of Lemma A.3. First, by Corollary 5.35 in (Vershynin, 2010), with probability at least 1 ´ L ¨ expp´Ωpmqq, }Wp0ql }2 ď 3 for all l P rLs. Therefore for all W PW , we have }Wl}2 ď 4. Moreover, standard concentration inequalities on the norm of the first row of Wp0ql also implies that }Wl}2 ě 0.5 for all W PW and l P rLs. Therefore, an adaptation of the bound in (Bartlett et al., 2017)¶ gives
R̂_n(F) ≤ Õ( sup_{W ∈ W} { (m^{1/2}/√n) · [ ∏_{l=1}^L ‖W_l‖_2 ] · [ ∑_{l=1}^L ‖W_l^⊤ − W_l^{(0)⊤}‖_{2,1}^{2/3} / ‖W_l‖_2^{2/3} ]^{3/2} } )
≤ Õ( sup_{W ∈ W} { (4^L m^{1/2}/√n) · [ ∑_{l=1}^L ( √m · ‖W_l^⊤ − W_l^{(0)⊤}‖_F )^{2/3} ]^{3/2} } )
≤ Õ( 4^L L^{3/2} R̃ · √(m/n) ).   (C.7)
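For intuition about the scale of such Rademacher complexity bounds, a norm-bounded linear class admits a closed-form supremum, which makes a Monte Carlo estimate trivial. The Python sketch below is only this linear analogue of the NTRF-style class, not the network class bounded in (C.7), and the names are ours.

import numpy as np

def linear_rademacher(X, R, num_draws=2000, seed=0):
    """Monte Carlo estimate of E_xi[ sup_{||w||_2 <= R} (1/n) sum_i xi_i <w, x_i> ]
    = (R/n) * E_xi || sum_i xi_i x_i ||_2 for a norm-bounded linear class."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    vals = []
    for _ in range(num_draws):
        xi = rng.choice([-1.0, 1.0], size=n)
        vals.append(R / n * np.linalg.norm(xi @ X))
    return np.mean(vals)

rng = np.random.default_rng(5)
X = rng.normal(size=(200, 10)); X /= np.linalg.norm(X, axis=1, keepdims=True)
print(linear_rademacher(X, R=1.0))    # scales roughly like R * sqrt(d/n) in this toy setting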
We now derive the second bound on R̂_n(G), which is inspired by the proof provided in (Cao and Gu, 2020). Since y ∈ {+1, −1}, |ℓ'(z)| ≤ 1 and ℓ'(z) is 1-Lipschitz continuous, by standard empirical Rademacher complexity bounds (Bartlett and Mendelson, 2002; Mohri et al., 2018; Shalev-Shwartz and Ben-David, 2014), we have
R̂_n(G) ≤ R̂_n(F) = E_{ξ_i ∼ Unif({±1})} [ sup_{W ∈ W} (1/n) ∑_{i=1}^n ξ_i f_W(x_i) ],
¶Bartlett et al. (2017) only proved the Rademacher complexity bound for the composition of the ramp loss and the neural network function. In our setting essentially the ramp loss is replaced with the ´`1p¨q function, which is bounded and 1-Lipschitz continuous. The proof in our setting is therefore exactly the same as the proof given in (Bartlett et al., 2017), and we can apply Theorem 3.3 and Lemma A.5 in (Bartlett et al., 2017) to obtain the desired bound we present here.
where R̂_n(F) is the empirical Rademacher complexity of the function class F. We have
R̂_n[F] ≤ E_ξ[ sup_{W ∈ W} (1/n) ∑_{i=1}^n ξ_i ( f_W(x_i) − F_{W^(0),W}(x_i) ) ] + E_ξ[ sup_{W ∈ W} (1/n) ∑_{i=1}^n ξ_i F_{W^(0),W}(x_i) ],
where we denote the first and second terms on the right-hand side by I_1 and I_2, respectively.
1. What are the main contributions and improvements of the paper regarding deep ReLU networks trained with gradient descent?
2. How does the new analysis in the paper differ from previous studies, particularly in terms of the "linearized" approximation and L2 norm of the model?
3. How do the dependencies of m and n on R or gamma compare to Ji & Telgarsky's work?
4. Why is there a gap in sample complexity between GD and SGD, and is this due to an artifact of the GD analysis in the generalization bound?
5. Is the factor m^1/2 in the definition of f_W standard, and how does this parameterization compare to previous works?
6. Would a comparison between the obtained results and a simple linear model in the class NTRF be useful to understand the cost of ensuring good linear approximation? | Review | Review
The paper studies optimization and generalization properties of deep ReLU networks trained with (stochastic) gradient descent on the logistic loss in the neural tangent kernel (NTK) regime. By using a new analysis that makes the "linearized" approximation as well as the L2 norm of the model in the approximate "random feature" kernel more explicit, the authors obtain results where the width only depends poly-logarithmically on the number of samples and 1/epsilon, for a test 0-1 loss of epsilon. This improves on previous analyses for deep networks, although it is similar to the two-layer result of Ji & Telgarsky.
I find the analysis interesting, more intuitive and practically relevant than previous studies, thanks to assumptions on the norm of the linearized model and the explicit bound on the approximation from linearization. The obtained guarantees are somewhat incremental as they are in large part similar to Ji & Telgarsky for the two-layer case, and remain in the NTK regime which only provides a limited picture of deep learning performance, but the extension to multiple layers and potentially other activations may be useful for the community, thus I remain on the accept side.
comments/questions:
how do the dependencies of m and n on R or gamma compare to Ji & Telgarsky? more discussion of this comparison at the end of Sections 3.1 and 3.2 would be welcome
the gap in sample complexity between GD and SGD is somewhat surprising (both the 1/eps^2 and the exponential dependency on L), is this just an artefact of the GD analysis in the generalization bound?
in section 2, is the factor m^1/2 in the definition of f_W standard? how does this parameterization compare to previous works? (it seems like it may compensate the m^{-1/2} from the initialization of the first layer, which is often not present in other works, but this should be further discussed)
in prop 4.4, it would be interesting to compare the obtained results to a simple linear model in the class NTRF, to see the cost of ensuring good linear approximation: does linear approximation require a larger m than what is needed just for good approximation of the infinite width kernel? |
ICLR | Title
How Much Over-parameterization Is Sufficient to Learn Deep ReLU Networks?
Abstract
A recent line of research on deep learning focuses on the extremely over-parameterized setting, and shows that when the network width is larger than a high-degree polynomial of the training sample size n and the inverse of the target error ε^{-1}, deep neural networks learned by (stochastic) gradient descent enjoy nice optimization and generalization guarantees. Very recently, it has been shown that under certain margin assumptions on the training data, a polylogarithmic width condition suffices for two-layer ReLU networks to converge and generalize (Ji and Telgarsky, 2020). However, whether deep neural networks can be learned with such a mild over-parameterization is still an open question. In this work, we answer this question affirmatively and establish sharper learning guarantees for deep ReLU networks trained by (stochastic) gradient descent. Specifically, under certain assumptions made in previous work, our optimization and generalization guarantees hold with network width polylogarithmic in n and ε^{-1}. Our results push the study of over-parameterized deep neural networks towards more practical settings.
1 INTRODUCTION
Deep neural networks have become one of the most important and prevalent machine learning models due to their remarkable power in many real-world applications. However, the success of deep learning has not been well-explained in theory. It remains mysterious why standard optimization algorithms tend to find a globally optimal solution, despite the highly non-convex landscape of the training loss function. Moreover, despite the extremely large amount of parameters, deep neural networks rarely over-fit, and can often generalize well to unseen data and achieve good test accuracy. Understanding these mysterious phenomena on the optimization and generalization of deep neural networks is one of the most fundamental problems in deep learning theory.
Recent breakthroughs have shed light on the optimization and generalization of deep neural networks (DNNs) under the over-parameterized setting, where the hidden layer width is extremely large (much larger than the number of training examples). It has been shown that with the standard random initialization, the training of over-parameterized deep neural networks can be characterized by a kernel function called neural tangent kernel (NTK) (Jacot et al., 2018; Arora et al., 2019b). In the neural tangent kernel regime (or lazy training regime (Chizat et al., 2019)), the neural network function behaves similarly as its first-order Taylor expansion at initialization (Jacot et al., 2018; Lee et al., 2019; Arora et al., 2019b; Cao and Gu, 2019), which enables feasible optimization and generalization analysis. In terms of optimization, a line of work (Du et al., 2019b; Allen-Zhu et al., 2019b; Zou et al., 2019; Zou and Gu, 2019) proved that for sufficiently wide neural networks, (stochastic) gradient descent (GD/SGD) can successfully find a global optimum of the training loss function. For generalization, Allen-Zhu et al. (2019a); Arora et al. (2019a); Cao and Gu (2019) established generalization bounds of neural networks trained with (stochastic) gradient descent, and showed that the neural networks can learn target functions in certain reproducing kernel Hilbert space (RKHS) or the corresponding random feature function class.
Although existing results in the neural tangent kernel regime have provided important insights into the learning of deep neural networks, they require the neural network to be extremely wide. ∗Equal contribution.
The typical requirement on the network width is a high degree polynomial of the training sample size n and the inverse of the target error ´1. As there still remains a huge gap between such network width requirement and the practice, many attempts have been made to improve the overparameterization condition under various conditions on the training data and model initialization (Oymak and Soltanolkotabi, 2019; Zou and Gu, 2019; Kawaguchi and Huang, 2019; Bai and Lee, 2019). For two-layer ReLU networks, a recent work (Ji and Telgarsky, 2020) showed that when the training data are well separated, polylogarithmic width is sufficient to guarantee good optimization and generalization performances. However, their results cannot be extended to deep ReLU networks since their proof technique largely relies on the fact that the network model is 1-homogeneous, which cannot be satisfied by DNNs. Therefore, whether deep neural networks can be learned with such a mild over-parameterization is still an open problem.
In this paper, we resolve this open problem by showing that polylogarithmic network width is sufficient to learn DNNs. In particular, unlike the existing works that require the DNNs to behave very close to a linear model (up to some small approximation error), we show that a constant linear approximation error is sufficient to establish nice optimization and generalization guarantees for DNNs. Thanks to the relaxed requirement on the linear approximation error, a milder condition on the network width and tighter bounds on the convergence rate and generalization error can be proved. We summarize our contributions as follows:
• We establish the global convergence guarantee of GD for training deep ReLU networks based on the so-called NTRF function class (Cao and Gu, 2019), a set of linear functions over random features. Specifically, we prove that GD can learn deep ReLU networks with width m “ polypRq to compete with the best function in NTRF function class, where R is the radius of the NTRF function class.
• We also establish generalization guarantees for both GD and SGD in the same setting. Specifically, we prove a diminishing statistical error for a wide range of network widths m ∈ [Ω̃(1), ∞), while most of the previous generalization bounds in the NTK regime only work in the setting where the network width m is much greater than the sample size n. Moreover, we establish Õ(ε^{-2}) and Õ(ε^{-1}) sample complexities for GD and SGD respectively, which are tighter than existing bounds for learning deep ReLU networks (Cao and Gu, 2019), and match the best results when reduced to the two-layer case (Arora et al., 2019b; Ji and Telgarsky, 2020).
• We further generalize our theoretical analysis to the scenarios with different data separability assumptions in the literature. We show if a large fraction of the training data are well separated, the best function in the NTRF function class with radius R “ rOp1q can learn the training data with error up to . This together with our optimization and generalization guarantees immediately suggests that deep ReLU networks can be learned with network width m “ rΩp1q, which has a logarithmic dependence on the target error and sample size n. Compared with existing results (Cao and Gu, 2020; Ji and Telgarsky, 2020) which require all training data points to be separated in the NTK regime, our result is stronger since it allows the NTRF function class to misclassify a small proportion of the training data.
For the ease of comparison, we summarize our results along with the most related previous results in Table 1, in terms of data assumption, the over-parameterization condition and sample complexity. It can be seen that under data separation assumption (See Sections 4.1, 4.2), our result improves existing results for learning deep neural networks by only requiring a polylogpn, ´1q network width. Notation. For two scalars a and b, we denote a^ b “ minta, bu. For a vector x P Rd we use }x}2 to denote its Euclidean norm. For a matrix X, we use }X}2 and }X}F to denote its spectral norm and Frobenius norm respectively, and denote by Xij the entry of X at the i-th row and j-th column. Given two matrices X and Y with the same dimension, we denote xX,Yy “ ř
i,jXijYij .
Given a collection of matrices W “ tW1, ¨ ¨ ¨ ,WLu P bLl“1Rmlˆm 1 l and a function fpWq over bLl“1Rmlˆm 1 l , we define by∇WlfpWq the partial gradient of fpWq with respect to Wl and denote ∇WfpWq “ t∇WlfpWquLl“1. We also denote BpW, τq “ W1 : maxlPrLs }W1l ´Wl}F ď τ ( for τ ě 0. For two collection of matrices A “ tA1, ¨ ¨ ¨ ,Anu, B “ tB1, ¨ ¨ ¨ ,Bnu, we denote xA,By “
řn i“1xAi,Biy and }A}2F “ řn i“1 }Ai}2F .
Algorithm 1 Gradient descent with random initialization Input: Number of iterations T , step size η, training set S “ tpxi, yiqni“1u, initialization Wp0q for t “ 1, 2, . . . , T do
Update Wptq “Wpt´1q ´ η ¨∇WLSpWpt´1qq. end for Output: Wp0q, . . . ,WpT q.
Given two sequences txnu and tynu, we denote xn “ Opynq if |xn| ď C1|yn| for some absolute positive constant C1, xn “ Ωpynq if |xn| ě C2|yn| for some absolute positive constant C2, and xn “ Θpynq if C3|yn| ď |xn| ď C4|yn| for some absolute constants C3, C4 ą 0. We also use rOp¨q, rΩp¨q to hide logarithmic factors inOp¨q and Ωp¨q respectively. Additionally, we denote xn “ polypynq if xn “ OpyDn q for some positive constant D, and xn “ polylogpynq if xn “ polyplogpynqq.
2 PRELIMINARIES ON LEARNING NEURAL NETWORKS
In this section, we introduce the problem setting in this paper, including definitions of the neural network and loss functions, and the training algorithms, i.e., GD and SGD with random initialization.
Neural network function. Given an input x P Rd, the output of deep fully-connected ReLU network is defined as follows,
f_W(x) = m^{1/2} W_L σ(W_{L−1} · · · σ(W_1 x) · · · ), where W_1 ∈ R^{m×d}, W_2, · · · , W_{L−1} ∈ R^{m×m}, W_L ∈ R^{1×m}, and σ(x) = max{0, x} is the ReLU activation function. Here, without loss of generality, we assume the width of each layer is equal to m; our theoretical results can be easily generalized to the setting with unequal-width layers, as long as the smallest width satisfies our over-parameterization condition. We denote the collection of all weight matrices as W = {W_1, . . . , W_L}.
Loss function. Given a training dataset {x_i, y_i}_{i=1,...,n} with input x_i ∈ R^d and output y_i ∈ {−1, +1}, we define the training loss function as
L_S(W) = (1/n) ∑_{i=1}^n L_i(W),
where L_i(W) = ℓ( y_i f_W(x_i) ) = log( 1 + exp(−y_i f_W(x_i)) ) is the cross-entropy loss.
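The Python sketch below spells out this forward pass and training loss for a small network; it is a minimal illustration of the definitions above (the weight shapes and scaling follow the text, while the data and parameter values are synthetic placeholders of ours).

import numpy as np

def forward(W, x, m):
    """f_W(x) = m^{1/2} W_L sigma(W_{L-1} ... sigma(W_1 x)); W is a list [W_1, ..., W_L]."""
    h = x
    for Wl in W[:-1]:
        h = np.maximum(Wl @ h, 0.0)
    return np.sqrt(m) * (W[-1] @ h).item()

def training_loss(W, X, y, m):
    """L_S(W) = (1/n) sum_i log(1 + exp(-y_i f_W(x_i)))."""
    margins = np.array([yi * forward(W, xi, m) for xi, yi in zip(X, y)])
    return np.mean(np.log1p(np.exp(-margins)))

rng = np.random.default_rng(6)
d, m, L, n = 5, 64, 3, 8
W = [rng.normal(scale=np.sqrt(2.0 / m), size=(m, d))]
W += [rng.normal(scale=np.sqrt(2.0 / m), size=(m, m)) for _ in range(L - 2)]
W += [rng.normal(scale=np.sqrt(1.0 / m), size=(1, m))]
X = rng.normal(size=(n, d)); X /= np.linalg.norm(X, axis=1, keepdims=True)
y = rng.choice([-1.0, 1.0], size=n)
print(training_loss(W, X, y, m))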
Algorithms. We consider both GD and SGD with Gaussian random initialization. These two algorithms are displayed in Algorithms 1 and 2, respectively. Specifically, the entries of W^(0)_1, · · · , W^(0)_{L−1} are generated independently from the univariate Gaussian distribution N(0, 2/m), and the entries of W^(0)_L are generated independently from N(0, 1/m). For GD, we use the full gradient to update the model parameters. For SGD, we use a new training data point in each iteration.
Note that our initialization method in Algorithms 1, 2 is the same as the widely used He initialization (He et al., 2015). Our neural network parameterization is also consistent with the parameterization used in prior work on NTK (Jacot et al., 2018; Allen-Zhu et al., 2019b; Du et al., 2019a; Arora et al., 2019b; Cao and Gu, 2019).
Algorithm 2 Stochastic gradient descent (SGD) with random initialization Input: Number of iterations n, step size η, initialization W^(0) for i = 1, 2, . . . , n do
Draw pxi, yiq from D and compute the corresponding gradient∇WLipWpi´1qq. Update Wpiq “Wpi´1q ´ η ¨∇WLipWpi´1qq.
end for Output: Randomly choose xW uniformly from tWp0q, . . . ,Wpn´1qu.
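As a sketch of one update of Algorithm 2 (and of the gradient computation shared with Algorithm 1), the following Python snippet implements the network gradient with hand-rolled backpropagation and applies a single cross-entropy SGD step. This is our own illustrative implementation, not code from the paper, and the step size used in the toy call is only a placeholder in the spirit of η = Θ(L^{-1} m^{-1}).

import numpy as np

def forward_with_cache(W, x, m):
    h, pre, acts = x, [], [x]
    for Wl in W[:-1]:
        z = Wl @ h
        pre.append(z)
        h = np.maximum(z, 0.0)
        acts.append(h)
    return np.sqrt(m) * (W[-1] @ h).item(), pre, acts

def grad_f(W, x, m):
    """Backpropagation for nabla_W f_W(x) of the scaled deep ReLU network."""
    f, pre, acts = forward_with_cache(W, x, m)
    grads = [None] * len(W)
    grads[-1] = np.sqrt(m) * acts[-1][None, :]                  # d f / d W_L
    delta = np.sqrt(m) * W[-1].ravel() * (pre[-1] > 0)          # backprop through the last ReLU
    for l in range(len(W) - 2, -1, -1):
        grads[l] = np.outer(delta, acts[l])                     # d f / d W_{l+1} in 1-indexed notation
        if l > 0:
            delta = (W[l].T @ delta) * (pre[l - 1] > 0)
    return f, grads

def sgd_step(W, x, y, m, eta):
    """One iteration of Algorithm 2 on L_i(W) = log(1 + exp(-y f_W(x)))."""
    f, grads = grad_f(W, x, m)
    dloss = -y / (1.0 + np.exp(y * f))                          # ell'(y f_W(x)) * y
    return [Wl - eta * dloss * Gl for Wl, Gl in zip(W, grads)]

rng = np.random.default_rng(7)
d, m, L = 5, 32, 3
W = [rng.normal(scale=np.sqrt(2.0 / m), size=(m, d))]
W += [rng.normal(scale=np.sqrt(2.0 / m), size=(m, m)) for _ in range(L - 2)]
W += [rng.normal(scale=np.sqrt(1.0 / m), size=(1, m))]
x = rng.normal(size=d); x /= np.linalg.norm(x)
W_new = sgd_step(W, x, y=1.0, m=m, eta=1.0 / (L * m))
print([np.linalg.norm(Wn - Wo) for Wn, Wo in zip(W_new, W)])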
3 MAIN THEORY
In this section, we present the optimization and generalization guarantees of GD and SGD for learning deep ReLU networks. We first make the following assumption on the training data points. Assumption 3.1. All training data points satisfy }xi}2 “ 1, i “ 1, . . . , n.
This assumption has been widely made in many previous works (Allen-Zhu et al., 2019b;c; Du et al., 2019b;a; Zou et al., 2019) in order to simplify the theoretical analysis. It can be relaxed to requiring only that ‖x_i‖_2 is upper and lower bounded by absolute constants.
In the following, we give the definition of Neural Tangent Random Feature (NTRF) (Cao and Gu, 2019), which characterizes the functions learnable by over-parameterized ReLU networks.
Definition 3.2 (Neural Tangent Random Feature (Cao and Gu, 2019)). Let W^(0) be the initialization weights, and let F_{W^(0),W}(x) = f_{W^(0)}(x) + ⟨∇f_{W^(0)}(x), W − W^(0)⟩ be a function of the input x. Then the NTRF function class is defined as
F(W^(0), R) = { F_{W^(0),W}(·) : W ∈ B(W^(0), R · m^{-1/2}) }.
The NTRF function class consists of linear models over random features defined by the network gradients at initialization. Therefore it captures the key "almost linear" property of wide neural networks in the NTK regime (Lee et al., 2019; Cao and Gu, 2019). In this paper, we use the NTRF function class as a reference class to measure the difficulty of a learning problem. In what follows, we present our main theoretical results on the optimization and generalization guarantees for learning deep ReLU networks. We study both GD and SGD with random initialization (presented in Algorithms 1 and 2).
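To make the NTRF model concrete, the minimal Python sketch below evaluates F_{W^(0),W}(x) and checks membership in B(W^(0), R·m^{-1/2}), given the initialization output and per-layer gradients (which could be produced by any backpropagation routine, e.g. the grad_f sketch later in this document); all names are our own placeholders.

import numpy as np

def ntrf_predict(f0, grads0, W, W0):
    """F_{W^(0), W}(x) = f_{W^(0)}(x) + <nabla f_{W^(0)}(x), W - W^(0)>.

    f0     : scalar f_{W^(0)}(x)
    grads0 : list of per-layer gradients of f_{W^(0)} at x
    W, W0  : lists of per-layer weight matrices
    """
    return f0 + sum(np.sum(G * (Wl - W0l)) for G, Wl, W0l in zip(grads0, W, W0))

def in_ntrf_ball(W, W0, R, m):
    """Membership test for B(W^(0), R * m^{-1/2}): max_l ||W_l - W^(0)_l||_F <= R / sqrt(m)."""
    return max(np.linalg.norm(Wl - W0l) for Wl, W0l in zip(W, W0)) <= R / np.sqrt(m)

# tiny usage with hand-picked numbers
W0 = [np.zeros((2, 2))]; W = [np.full((2, 2), 0.1)]
print(ntrf_predict(0.3, [np.ones((2, 2))], W, W0), in_ntrf_ball(W, W0, R=1.0, m=4))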
3.1 GRADIENT DESCENT
The following theorem establishes the optimization guarantee of GD for training deep ReLU networks for binary classification.
Theorem 3.3. For δ, R > 0, let ε_NTRF = inf_{F ∈ F(W^(0), R)} n^{-1} ∑_{i=1}^n ℓ[ y_i F(x_i) ] be the minimum training loss achievable by functions in F(W^(0), R). Then there exists
m*(δ, R, L) = Õ( poly(R, L) · log^{4/3}(n/δ) ),
such that if m ≥ m*(δ, R, L), with probability at least 1 − δ over the initialization, GD with step size η = Θ(L^{-1} m^{-1}) can train a neural network to achieve at most 3 ε_NTRF training loss within T = O( L^2 R^2 ε_NTRF^{-1} ) iterations.
Theorem 3.3 shows that the deep ReLU network trained by GD can compete with the best function in the NTRF function class F(W^(0), R) if the network width has a polynomial dependency on R and L and a logarithmic dependency on n and 1/δ. Moreover, if the NTRF function class with R = Õ(1) can learn the training data well (i.e., ε_NTRF is smaller than a small target error ε), a polylogarithmic (in terms of n and ε^{-1}) network width suffices to guarantee the global convergence of GD, which directly improves the over-parameterization condition in the most related work (Cao and Gu, 2019). Besides, we remark here that this assumption on the NTRF function class can easily be satisfied when the training data admit certain separability conditions, which we discuss in detail in Section 4.
Compared with the results in (Ji and Telgarsky, 2020) which give similar network width requirements for two-layer networks, our result works for deep networks. Moreover, while Ji and Telgarsky (2020)
essentially required all training data to be separable by a function in the NTRF function class with a constant margin, our result does not require such data separation assumptions, and allows the NTRF function class to misclassify a small proportion of the training data points∗.
We now characterize the generalization performance of neural networks trained by GD. We denote by L^{0-1}_D(W) = E_{(x,y)∼D}[ 1{ f_W(x) · y < 0 } ] the expected 0-1 loss (i.e., expected error) of f_W(x).
Theorem 3.4. Under the same assumptions as Theorem 3.3, with probability at least 1 − δ, the iterates W^(t) of Algorithm 1 satisfy
L^{0-1}_D(W^(t)) ≤ 2 L_S(W^(t)) + Õ( 4^L L^2 R √(m/n) ∧ ( L^{3/2} R/√n + L^{11/3} R^{4/3}/m^{1/6} ) ) + O( √(log(1/δ)/n) )
for all t = 0, . . . , T.
Theorem 3.4 shows that the test error of the trained neural network can be bounded by its training error plus statistical error terms. Note that the statistical error is the minimum of two terms, 4^L L^2 R √(m/n) and L^{3/2} R/√n + L^{11/3} R^{4/3}/m^{1/6}. Depending on the network width m, one of these two terms dominates and diminishes for large n: (1) if m = o(n), the statistical error is 4^L L^2 R √(m/n), which diminishes as n increases; and (2) if m = Ω(n), the statistical error is L^{3/2} R/√n + L^{11/3} R^{4/3}/m^{1/6}, which again goes to zero as n increases. Moreover,
in this paper we have a specific focus on the setting m = Õ(1), under which Theorem 3.4 gives a statistical error of order Õ(n^{-1/2}). This distinguishes our result from previous generalization bounds for deep networks (Cao and Gu, 2020; 2019), which cannot be applied to the setting m = Õ(1). We note that for two-layer ReLU networks (i.e., L = 2), Ji and Telgarsky (2020) prove a tighter Õ(1/n^{1/2}) generalization error bound regardless of the network width m, while our result (Theorem 3.4), in the two-layer case, only gives an Õ(1/n^{1/2}) generalization error bound when m = Õ(1) or m = Ω̃(n^3). However, different from our proof technique, which essentially uses the (approximate) linearity of the neural network function, their proof relies heavily on the 1-homogeneity of the network, which restricts their theory to the two-layer case. An interesting research direction is to explore whether an Õ(1/n^{1/2}) generalization error bound can also be established for deep networks regardless of the network width, which we leave as future work.
3.2 STOCHASTIC GRADIENT DESCENT
Here we study the performance of SGD for training deep ReLU networks. The following theorem establishes a generalization error bound for the output of SGD.
Theorem 3.5. For δ, R > 0, let ε_NTRF = inf_{F ∈ F(W^(0), R)} n^{-1} ∑_{i=1}^n ℓ[y_i F(x_i)] be the minimum training loss achievable by functions in F(W^(0), R). Then there exists
m*(δ, R, L) = Õ( poly(R, L) · log^{4/3}(n/δ) ),
such that if m ≥ m*(δ, R, L), with probability at least 1 − δ, SGD with step size η = Θ( m^{-1} · (L R^2 n^{-1} ε_NTRF^{-1} ∧ L^{-1}) ) achieves
E[ L^{0-1}_D(Ŵ) ] ≤ 8 L^2 R^2 / n + 8 log(2/δ) / n + 24 ε_NTRF,
where the expectation is taken over the uniform draw of Ŵ from {W^(0), . . . , W^(n−1)}.
For any ε > 0, Theorem 3.5 gives an Õ(ε^{-1}) sample complexity for deep ReLU networks trained with SGD to achieve O(ε_NTRF + ε) test error. Our result extends the result for two-layer networks proved in (Ji and Telgarsky, 2020) to multi-layer networks. Theorem 3.5 also provides sharper results compared with Allen-Zhu et al. (2019a); Cao and Gu (2019) in two aspects: (1) the sample complexity is improved from n = Õ(ε^{-2}) to n = Õ(ε^{-1}); and (2) the over-parameterization condition is improved from m ≥ poly(ε^{-1}) to m = Ω̃(1). ∗A detailed discussion is given in Section 4.2.
4 DISCUSSION ON THE NTRF CLASS
Our theoretical results in Section 3 rely on the radius (i.e.,R) of the NTRF function classFpWp0q, Rq and the minimum training loss achievable by functions in FpWp0q, Rq, i.e., NTRF. Note that a larger R naturally implies a smaller NTRF, but also leads to worse conditions on m. In this section, for any (arbitrarily small) target error rate ą 0, we discuss various data assumptions studied in the literature under which our results can lead to Op q training/test errors, and specify the network width requirement.
4.1 DATA SEPARABILITY BY NEURAL TANGENT RANDOM FEATURE
In this subsection, we consider the setting where a large fraction of the training data can be linearly separated by the neural tangent random features. The assumption is stated as follows.
Assumption 4.1. There exists a collection of matrices U˚ “ tU˚1 , ¨ ¨ ¨ ,U˚Lu satisfying řL l“1 }U˚l }2F “ 1, such that for at least p1´ ρq fraction of training data we have
yix∇fWp0qpxiq,U˚y ě m1{2γ,
where γ is an absolute positive constant† and ρ P r0, 1q.
The following corollary provides an upper bound of NTRF under Assumption 4.1 for some R.
Proposition 4.2. Under Assumption 4.1, for any , δ ą 0, if R ě C “ log1{2pn{δq ` logp1{ q ‰
{γ for some absolute constant C, then with probability at least 1´ δ,
NTRF :“ inf FPFpWp0q,Rq
n´1 n ÿ
i“1 ` ` yiF pxiq ˘ ď ` ρ ¨OpRq.
Proposition 4.2 covers the setting where the NTRF function class is allowed to misclassify training data, while most existing work typically assumes that all training data can be perfectly separated with a constant margin (i.e., ρ = 0) (Ji and Telgarsky, 2020; Shamir, 2020). Our results show that for a sufficiently small misclassification ratio ρ = O(ε), we have ε_NTRF = Õ(ε) by choosing the radius parameter R logarithmic in n, δ^{-1}, and ε^{-1}. Substituting this result into Theorems 3.3, 3.4 and 3.5, it can be shown that a neural network with width m = poly(L, log(n/δ), log(1/ε)) suffices to guarantee good optimization and generalization performance for both GD and SGD. Consequently, the bounds on the test error for GD and SGD are Õ(n^{-1/2}) and Õ(n^{-1}) respectively.
4.2 DATA SEPARABILITY BY SHALLOW NEURAL TANGENT MODEL
In this subsection, we study the data separation assumption made in Ji and Telgarsky (2020) and show that our results cover this particular setting. We first restate the assumption as follows.
Assumption 4.3. There exists up¨q : Rd Ñ Rd and γ ě 0 such that }upzq}2 ď 1 for all z P Rd, and
yi
ż
Rd σ1pxz,xiyq ¨ xupzq,xiydµNpzq ě γ
for all i P rns, where µN p¨q denotes the standard normal distribution.
Assumption 4.3 is related to the linear separability of the gradients of the first layer parameters at random initialization, where the randomness is replaced with an integral by taking the infinite width limit. Note that similar assumptions have also been studied in (Cao and Gu, 2020; Nitanda and Suzuki, 2019; Frei et al., 2019). The assumption made in (Cao and Gu, 2020; Frei et al., 2019) uses gradients with respect to the second layer weights instead of the first layer ones. In the following, we mainly focus on Assumption 4.3, while our result can also be generalized to cover the setting in (Cao and Gu, 2020; Frei et al., 2019).
†The factor $m^{1/2}$ is introduced here since $\|\nabla_{W^{(0)}}f(x_i)\|_F$ is typically of order $O(m^{1/2})$.
In order to make a fair comparison, we reduce our results for multilayer networks to the two-layer setting. In this case, the neural network function takes form
$$f_W(x) = m^{1/2} W_2 \sigma(W_1 x).$$
Then we provide the following proposition, which states that Assumption 4.3 implies a certain choice of $R = \widetilde{O}(1)$ such that the minimum training loss achieved by functions in the NTRF function class $\mathcal{F}(W^{(0)}, R)$ satisfies $\epsilon_{\mathrm{NTRF}} = O(\epsilon)$, where $\epsilon$ is the target error.
Proposition 4.4. Suppose the training data satisfy Assumption 4.3. For any $\epsilon, \delta > 0$, let $R = C\big[\log(n/\delta) + \log(1/\epsilon)\big]/\gamma$ for some large enough absolute constant $C$. If the neural network width satisfies $m = \Omega\big(\log(n/\delta)/\gamma^2\big)$, then with probability at least $1-\delta$, there exists $F_{W^{(0)},W}(\cdot) \in \mathcal{F}(W^{(0)}, R)$ such that
$$\ell\big(y_i \cdot F_{W^{(0)},W}(x_i)\big) \le \epsilon, \quad \forall i \in [n].$$
Proposition 4.4 shows that under Assumption 4.3, there exists $F_{W^{(0)},W}(\cdot) \in \mathcal{F}(W^{(0)}, R)$ with $R = \widetilde{O}(1/\gamma)$ such that the cross-entropy loss of $F_{W^{(0)},W}(\cdot)$ at each training data point is bounded by $\epsilon$. This implies that $\epsilon_{\mathrm{NTRF}} \le \epsilon$. Moreover, by applying Theorem 3.3 with $L = 2$, the condition on the neural network width becomes $m = \widetilde{\Omega}(1/\gamma^8)$‡, which matches the results proved in Ji and Telgarsky (2020). Plugging these results on $m$ and $\epsilon_{\mathrm{NTRF}}$ into Theorems 3.4 and 3.5, we conclude that the bounds on the test error for GD and SGD are $\widetilde{O}(n^{-1/2})$ and $\widetilde{O}(n^{-1})$ respectively.
4.3 CLASS-DEPENDENT DATA NONDEGENERATION
In previous subsections, we have shown that under certain data separation conditions $\epsilon_{\mathrm{NTRF}}$ can be sufficiently small while the corresponding NTRF function class has $R$ of order $\widetilde{O}(1)$. Thus neural networks with polylogarithmic width enjoy nice optimization and generalization guarantees. In this part, we consider the following much milder data separability assumption made in Zou et al. (2019). Assumption 4.5. For all $i \ne i'$, if $y_i \ne y_{i'}$, then $\|x_i - x_{i'}\|_2 \ge \phi$ for some absolute constant $\phi$.
In contrast to the conventional data nondegeneration assumption (i.e., no duplicate data points) made in Allen-Zhu et al. (2019b); Du et al. (2019b;a); Zou and Gu (2019)§, Assumption 4.5 only requires that the data points from different classes are nondegenerate, thus we call it class-dependent data nondegeneration.
We have the following proposition, which shows that Assumption 4.5 also implies the existence of a function in the NTRF function class, with a certain choice of $R$, that achieves $\epsilon$ training error. Proposition 4.6. Under Assumption 4.5, if
$$R = \Omega\big(n^{3/2}\phi^{-1/2}\log(n\delta^{-1}\epsilon^{-1})\big), \quad m = \widetilde{\Omega}\big(L^{22} n^{12}\phi^{-4}\big),$$
we have $\epsilon_{\mathrm{NTRF}} \le \epsilon$ with probability at least $1-\delta$.
Proposition 4.6 suggests that under Assumption 4.5, in order to guarantee $\epsilon_{\mathrm{NTRF}} \le \epsilon$, the size of the NTRF function class needs to be $\Omega(n^{3/2})$. Plugging this into Theorems 3.4 and 3.5 leads to vacuous bounds on the test error. This makes sense since Assumption 4.5 basically covers the “random label” setting, which is impossible to learn with small generalization error. Moreover, we would like to point out that our theoretical analysis leads to a sharper over-parameterization condition than that proved in Zou et al. (2019), i.e., $m = \widetilde{\Omega}\big(n^{14}L^{16}\phi^{-4} + n^{12}L^{16}\phi^{-4}\epsilon^{-1}\big)$, if the network depth satisfies $L \le \widetilde{O}(n^{1/3} \vee \epsilon^{-1/6})$.
5 PROOF SKETCH OF THE MAIN THEORY
In this section, we introduce a key technical lemma in Section 5.1, based on which we provide a proof sketch of Theorem 3.3. The full proof of all our results can be found in the appendix. ‡We have shown in the proof of Theorem 3.3 that $m = \widetilde{\Omega}(R^8)$ (see (A.1) for more detail). §Specifically, Allen-Zhu et al. (2019b); Zou and Gu (2019) require that any two data points (rather than data points from different classes) are separated by a positive distance. Zou and Gu (2019) shows that this assumption is equivalent to those made in Du et al. (2019b;a), which require that the composite kernel matrix is strictly positive definite.
5.1 A KEY TECHNICAL LEMMA
Here we introduce a key technical lemma used in the proof of Theorem 3.3.
Our proof is based on the key observation that near initialization, the neural network function can be approximated by its first-order Taylor expansion. In the following, we first give the definition of the linear approximation error in a τ -neighborhood around initialization.
$$\epsilon_{\mathrm{app}}(\tau) := \sup_{i=1,\dots,n} \ \sup_{W', W \in \mathcal{B}(W^{(0)},\tau)} \big| f_{W'}(x_i) - f_W(x_i) - \langle \nabla f_W(x_i), W' - W\rangle \big|.$$
If all the iterates of GD stay inside a neighborhood around initialization with small linear approximation error, then we may expect that the training of neural networks should be similar to the training of the corresponding linear model, where standard optimization techniques can be applied. Motivated by this, we also give the following definition on the gradient upper bound of neural networks around initialization, which is related to the Lipschitz constant of the optimization objective function.
$$M(\tau) := \sup_{i=1,\dots,n} \ \sup_{l=1,\dots,L} \ \sup_{W \in \mathcal{B}(W^{(0)},\tau)} \|\nabla_{W_l} f_W(x_i)\|_F.$$
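The linear approximation error can also be probed numerically. Below is a minimal numpy sketch for the two-layer reduction of Section 4.2, $f_W(x) = m^{1/2}W_2\sigma(W_1x)$; it samples pairs of weights in the Frobenius ball of radius $\tau$ and reports the worst observed linearization gap, which is only a sampled lower estimate of the supremum in the definition of $\epsilon_{\mathrm{app}}(\tau)$ (the sampling scheme and constants are illustrative assumptions, not part of the analysis).

import numpy as np

def two_layer_f(W1, W2, x, m):
    # f_W(x) = m^{1/2} * W2 @ relu(W1 @ x), matching the two-layer reduction in Section 4.2
    return float(np.sqrt(m) * W2 @ np.maximum(W1 @ x, 0.0))

def two_layer_grads(W1, W2, x, m):
    h = W1 @ x
    gW2 = np.sqrt(m) * np.maximum(h, 0.0)              # gradient w.r.t. W2, shape (m,)
    gW1 = np.sqrt(m) * np.outer(W2 * (h > 0), x)       # gradient w.r.t. W1, shape (m, d)
    return gW1, gW2

def estimate_eps_app(W1_0, W2_0, X, tau, m, num_pairs=50, seed=0):
    """Crude lower estimate of eps_app(tau): sample pairs (W, W') near W^(0) and take
    the worst gap |f_{W'}(x) - f_W(x) - <grad f_W(x), W' - W>| over the data."""
    rng = np.random.default_rng(seed)
    worst = 0.0
    for _ in range(num_pairs):
        def perturb(A):
            D = rng.standard_normal(A.shape)
            return A + tau * D / max(np.linalg.norm(D), 1e-12)
        W1, W2 = perturb(W1_0), perturb(W2_0)
        W1p, W2p = perturb(W1_0), perturb(W2_0)
        for x in X:
            g1, g2 = two_layer_grads(W1, W2, x, m)
            lin = two_layer_f(W1, W2, x, m) + np.sum(g1 * (W1p - W1)) + g2 @ (W2p - W2)
            worst = max(worst, abs(two_layer_f(W1p, W2p, x, m) - lin))
    return worst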
By definition, we can choose $W^* \in \mathcal{B}(W^{(0)}, Rm^{-1/2})$ such that $n^{-1}\sum_{i=1}^n \ell\big(y_i F_{W^{(0)},W^*}(x_i)\big) = \epsilon_{\mathrm{NTRF}}$. Then we have the following lemma.
Lemma 5.1. Set $\eta = O(L^{-1}M(\tau)^{-2})$. Suppose that $W^* \in \mathcal{B}(W^{(0)},\tau)$ and $W^{(t)} \in \mathcal{B}(W^{(0)},\tau)$ for all $0 \le t \le t'-1$. Then it holds that
$$\frac{1}{t'}\sum_{t=0}^{t'-1} L_S(W^{(t)}) \le \frac{\|W^{(0)} - W^*\|_F^2 - \|W^{(t')} - W^*\|_F^2 + 2t'\eta\,\epsilon_{\mathrm{NTRF}}}{t'\eta\big(\tfrac{3}{2} - 4\epsilon_{\mathrm{app}}(\tau)\big)}.$$
Lemma 5.1 plays a central role in our proof. Specifically, if $W^{(t)} \in \mathcal{B}(W^{(0)},\tau)$ for all $t \le t'$, then Lemma 5.1 implies that the average training loss is of the same order as $\epsilon_{\mathrm{NTRF}}$ as long as the linear approximation error $\epsilon_{\mathrm{app}}(\tau)$ is bounded by a positive constant. This is in contrast to the proof in Cao and Gu (2019), where $\epsilon_{\mathrm{app}}(\tau)$ appears as an additive term in the upper bound of the training loss, thus requiring $\epsilon_{\mathrm{app}}(\tau) = O(\epsilon_{\mathrm{NTRF}})$ to achieve the same error bound as in Lemma 5.1. Since we can show that $\epsilon_{\mathrm{app}} = \widetilde{O}(m^{-1/6})$ (see Section A.1), this suggests that $m = \widetilde{\Omega}(1)$ is sufficient to make the average training loss of the same order as $\epsilon_{\mathrm{NTRF}}$.
Compared with the recent results for two-layer networks by Ji and Telgarsky (2020), Lemma 5.1 is proved with different techniques. In specific, the proof by Ji and Telgarsky (2020) relies on the 1-homogeneous property of the ReLU activation function, which limits their analysis to two-layer networks with fixed second layer weights. In comparison, our proof does not rely on homogeneity, and is purely based on the linear approximation property of neural networks and some specific properties of the loss function. Therefore, our proof technique can handle deep networks, and is potentially applicable to non-ReLU activation functions and other network architectures (e.g, Convolutional neural networks and Residual networks).
5.2 PROOF SKETCH OF THEOREM 3.3
Here we provide a proof sketch of Theorem 3.3. The proof consists of two steps: (i) showing that all T iterates stay close to initialization, and (ii) bounding the empirical loss achieved by gradient descent. Both of these steps are proved based on Lemma 5.1.
Proof sketch of Theorem 3.3. Recall that we choose $W^* \in \mathcal{B}(W^{(0)}, Rm^{-1/2})$ such that $n^{-1}\sum_{i=1}^n \ell\big(y_i F_{W^{(0)},W^*}(x_i)\big) = \epsilon_{\mathrm{NTRF}}$. We set $\tau = \widetilde{O}(L^{1/2}m^{-1/2}R)$, which is chosen slightly larger than $m^{-1/2}R$ since Lemma 5.1 requires the region $\mathcal{B}(W^{(0)},\tau)$ to include both $W^*$ and $\{W^{(t)}\}_{t=0,\dots,t'}$. Then by Lemmas 4.1 and B.3 in Cao and Gu (2019) we know that $\epsilon_{\mathrm{app}}(\tau) = \widetilde{O}(\tau^{4/3}m^{1/2}L^3) = \widetilde{O}(R^{4/3}L^{11/3}m^{-1/6})$. Therefore, we can set $m = \widetilde{\Omega}(R^8L^{22})$ to ensure that $\epsilon_{\mathrm{app}}(\tau) \le 1/8$.
Then we proceed to show that all iterates stay inside the region $\mathcal{B}(W^{(0)},\tau)$. Since the L.H.S. of Lemma 5.1 is strictly positive and $\epsilon_{\mathrm{app}}(\tau) \le 1/8$, we have for all $t \le T$,
$$\|W^{(0)} - W^*\|_F^2 - \|W^{(t)} - W^*\|_F^2 \ge -2t\eta\,\epsilon_{\mathrm{NTRF}},$$
which gives an upper bound on $\|W^{(t)} - W^*\|_F$. Then by the choice of $\eta$, $T$, the triangle inequality, and a simple induction argument, we see that $\|W^{(t)} - W^{(0)}\|_F \le m^{-1/2}R + \sqrt{2T\eta\,\epsilon_{\mathrm{NTRF}}} = \widetilde{O}(L^{1/2}m^{-1/2}R)$, which verifies that $W^{(t)} \in \mathcal{B}(W^{(0)},\tau)$ for $t = 0,\dots,T-1$. The second step is to show that GD can find a neural network with at most $3\epsilon_{\mathrm{NTRF}}$ training loss within $T$ iterations. To show this, by the bound given in Lemma 5.1 with $\epsilon_{\mathrm{app}} \le 1/8$, we drop the term $\|W^{(t)} - W^*\|_F^2$ and rearrange the inequality to obtain
$$\frac{1}{T}\sum_{t=0}^{T-1} L_S(W^{(t)}) \le \frac{1}{\eta T}\|W^{(0)} - W^*\|_F^2 + 2\epsilon_{\mathrm{NTRF}}.$$
We see that $T$ is large enough to ensure that the first term in the bound above is smaller than $\epsilon_{\mathrm{NTRF}}$. This implies that the best iterate among $W^{(0)},\dots,W^{(T-1)}$ achieves an empirical loss of at most $3\epsilon_{\mathrm{NTRF}}$.
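For concreteness, the sketch below spells out the gradient descent procedure analyzed above with the step size and iteration count plugged in. It is purely illustrative: loss_and_grad is a placeholder for any routine returning $L_S(W)$ and its gradient, and the constants hidden inside $\Theta(\cdot)$ are set to 1 here, which is an assumption of the sketch rather than a statement of the theorem.

import numpy as np

def gd_train(W0, loss_and_grad, L, m, R, eps_ntrf):
    """Gradient descent with the hyperparameters of Theorem 3.3:
    eta = Theta(1/(L*m)) and T = ceil(L*R^2 / (m*eta*eps_ntrf))."""
    eta = 1.0 / (L * m)
    T = int(np.ceil(L * R**2 / (m * eta * eps_ntrf)))
    W = [A.copy() for A in W0]
    best_loss, best_W, radii = np.inf, W, []
    for _ in range(T):
        loss, grads = loss_and_grad(W)                    # L_S(W^{(t)}) and its gradient
        if loss < best_loss:
            best_loss, best_W = loss, [A.copy() for A in W]
        # distance to initialization; the analysis keeps this below tau
        radii.append(max(np.linalg.norm(A - A0) for A, A0 in zip(W, W0)))
        W = [A - eta * G for A, G in zip(W, grads)]       # W^{(t+1)} = W^{(t)} - eta * grad
    return best_W, best_loss, radii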
6 CONCLUSION
In this paper, we established the global convergence and generalization error bounds of GD and SGD for training deep ReLU networks for the binary classification problem. We show that a network width condition that is polylogarithmic in the sample size $n$ and the inverse of the target error $\epsilon^{-1}$ is sufficient to guarantee the learning of deep ReLU networks. Our results resolve an open question raised in Ji and Telgarsky (2020).
ACKNOWLEDGEMENT
We would like to thank the anonymous reviewers for their helpful comments. ZC, YC and QG are partially supported by the National Science Foundation CAREER Award 1906169, IIS-2008981 and Salesforce Deep Learning Research Award. DZ is supported by the Bloomberg Data Science Ph.D. Fellowship. The views and conclusions contained in this paper are those of the authors and should not be interpreted as representing any funding agencies.
A PROOF OF MAIN THEOREMS
In this section we provide the full proof of Theorems 3.3, 3.4 and 3.5.
A.1 PROOF OF THEOREM 3.3
We first provide the following lemma, which is useful in the subsequent proof. Lemma A.1 (Lemmas 4.1 and B.3 in Cao and Gu (2019)). There exists an absolute constant $\kappa$ such that, with probability at least $1 - O(nL^2)\exp[-\Omega(m\tau^{2/3}L)]$, for any $\tau \le \kappa L^{-6}[\log(m)]^{-3/2}$, it holds that
$$\epsilon_{\mathrm{app}}(\tau) \le \widetilde{O}\big(\tau^{4/3}L^3m^{1/2}\big), \qquad M(\tau) \le \widetilde{O}(\sqrt{m}).$$
Proof of Theorem 3.3. Recall that $W^*$ is chosen such that
$$\frac{1}{n}\sum_{i=1}^n \ell\big(y_i F_{W^{(0)},W^*}(x_i)\big) = \epsilon_{\mathrm{NTRF}}$$
and $W^* \in \mathcal{B}(W^{(0)}, Rm^{-1/2})$. Note that to apply Lemma 5.1, we need the region $\mathcal{B}(W^{(0)},\tau)$ to include both $W^*$ and $\{W^{(t)}\}_{t=0,\dots,t'}$. This motivates us to set $\tau = \widetilde{O}(L^{1/2}m^{-1/2}R)$, which is slightly larger than $m^{-1/2}R$. With this choice of $\tau$, by Lemma A.1 we have $\epsilon_{\mathrm{app}}(\tau) = \widetilde{O}(\tau^{4/3}m^{1/2}L^3) = \widetilde{O}(R^{4/3}L^{11/3}m^{-1/6})$. Therefore, we can set
$$m = \widetilde{\Omega}(R^8L^{22}) \tag{A.1}$$
to ensure that $\epsilon_{\mathrm{app}}(\tau) \le 1/8$, where $\widetilde{\Omega}(\cdot)$ hides polylogarithmic dependencies on the network depth $L$, the NTRF function class size $R$, and the failure probability parameter $\delta$. Then by Lemma 5.1, with probability at least $1-\delta$ we have
$$\|W^{(0)} - W^*\|_F^2 - \|W^{(t')} - W^*\|_F^2 \ge \eta\sum_{t=0}^{t'-1} L_S(W^{(t)}) - 2t'\eta\,\epsilon_{\mathrm{NTRF}} \tag{A.2}$$
as long as $W^{(0)},\dots,W^{(t'-1)} \in \mathcal{B}(W^{(0)},\tau)$. In the following proof we choose $\eta = \Theta(L^{-1}m^{-1})$ and $T = \lceil LR^2m^{-1}\eta^{-1}\epsilon_{\mathrm{NTRF}}^{-1}\rceil$.
We prove the theorem in two steps: 1) we show that all iterates $\{W^{(0)},\cdots,W^{(T)}\}$ stay inside the region $\mathcal{B}(W^{(0)},\tau)$; and 2) we show that GD can find a neural network with at most $3\epsilon_{\mathrm{NTRF}}$ training loss within $T$ iterations.
All iterates stay inside $\mathcal{B}(W^{(0)},\tau)$. We prove this part by induction. Specifically, given $t' \le T$, we assume the hypothesis $W^{(t)} \in \mathcal{B}(W^{(0)},\tau)$ holds for all $t < t'$ and prove that $W^{(t')} \in \mathcal{B}(W^{(0)},\tau)$. First, it is clear that $W^{(0)} \in \mathcal{B}(W^{(0)},\tau)$. Then by (A.2) and the fact that $L_S(W) \ge 0$, we have
$$\|W^{(t')} - W^*\|_F^2 \le \|W^{(0)} - W^*\|_F^2 + 2\eta t'\epsilon_{\mathrm{NTRF}}.$$
Since $T = \lceil LR^2m^{-1}\eta^{-1}\epsilon_{\mathrm{NTRF}}^{-1}\rceil$ and $W^* \in \mathcal{B}(W^{(0)}, R\cdot m^{-1/2})$, we have
$$\sum_{l=1}^L \|W^{(t')}_l - W^*_l\|_F^2 = \|W^{(t')} - W^*\|_F^2 \le CLR^2m^{-1},$$
where $C \ge 4$ is an absolute constant. Therefore, by the triangle inequality, we further have the following for all $l \in [L]$:
$$\|W^{(t')}_l - W^{(0)}_l\|_F \le \|W^{(t')}_l - W^*_l\|_F + \|W^{(0)}_l - W^*_l\|_F \le \sqrt{CL}\,Rm^{-1/2} + Rm^{-1/2} \le 2\sqrt{CL}\,Rm^{-1/2}. \tag{A.3}$$
Therefore, it is clear that $\|W^{(t')}_l - W^{(0)}_l\|_F \le 2\sqrt{CL}\,Rm^{-1/2} \le \tau$ based on our choice of $\tau$ previously. This completes the proof of the first part.
Convergence of gradient descent. (A.2) implies
$$\|W^{(0)} - W^*\|_F^2 - \|W^{(T)} - W^*\|_F^2 \ge \eta\Big(\sum_{t=0}^{T-1} L_S(W^{(t)}) - 2T\epsilon_{\mathrm{NTRF}}\Big).$$
Dividing both sides by $\eta T$, we get
$$\frac{1}{T}\sum_{t=0}^{T-1} L_S(W^{(t)}) \le \frac{\|W^{(0)} - W^*\|_F^2}{\eta T} + 2\epsilon_{\mathrm{NTRF}} \le \frac{LR^2m^{-1}}{\eta T} + 2\epsilon_{\mathrm{NTRF}} \le 3\epsilon_{\mathrm{NTRF}},$$
where the second inequality is by the fact that $W^* \in \mathcal{B}(W^{(0)}, R\cdot m^{-1/2})$ and the last inequality is by our choices of $T$ and $\eta$, which ensure that $T\eta \ge LR^2m^{-1}\epsilon_{\mathrm{NTRF}}^{-1}$. Notice that $T = \lceil LR^2m^{-1}\eta^{-1}\epsilon_{\mathrm{NTRF}}^{-1}\rceil = O(L^2R^2\epsilon_{\mathrm{NTRF}}^{-1})$. This completes the proof of the second part, and thus the proof of the theorem.
A.2 PROOF OF THEOREM 3.4
Following Cao and Gu (2020), we first introduce the definition of surrogate loss of the network, which is defined by the derivative of the loss function.
Definition A.2. We define the empirical surrogate error $\mathcal{E}_S(W)$ and the population surrogate error $\mathcal{E}_{\mathcal{D}}(W)$ as follows:
$$\mathcal{E}_S(W) := -\frac{1}{n}\sum_{i=1}^n \ell'\big[y_i \cdot f_W(x_i)\big], \qquad \mathcal{E}_{\mathcal{D}}(W) := \mathbb{E}_{(x,y)\sim\mathcal{D}}\big\{-\ell'\big[y \cdot f_W(x)\big]\big\}.$$
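As a small illustration, the empirical surrogate error for the cross-entropy loss can be computed directly from the network outputs, since $\ell'(z) = -1/(1+e^{z})$. The sketch below assumes the scores $f_W(x_i)$ have already been computed.

import numpy as np

def surrogate_error(scores, y):
    """Empirical surrogate error E_S(W) = -(1/n) * sum_i l'(y_i * f_W(x_i)) for the
    cross-entropy loss l(z) = log(1 + exp(-z)), whose derivative is l'(z) = -1/(1 + exp(z)).
    `scores` holds the network outputs f_W(x_i); `y` holds labels in {-1, +1}."""
    z = y * scores
    return np.mean(1.0 / (1.0 + np.exp(z)))   # equals -(1/n) * sum_i l'(z_i)

# Note that -l'(z) lies in (0, 1) and -l'(z) <= l(z); these are the two facts used when
# comparing the surrogate error with the training loss in the proofs below.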
The following lemma gives uniform-convergence type of results for ESpWq utilizing the fact that ´`1p¨q is bounded and Lipschitz continuous.
Lemma A.3. For any $\widetilde{R}, \delta > 0$, suppose that $m = \widetilde{\Omega}(L^{12}\widetilde{R}^2)\cdot[\log(1/\delta)]^{3/2}$. Then with probability at least $1-\delta$, it holds that
$$|\mathcal{E}_{\mathcal{D}}(W) - \mathcal{E}_S(W)| \le \widetilde{O}\Bigg(\min\bigg\{4^L L^{3/2}\widetilde{R}\sqrt{\frac{m}{n}},\ \frac{L\widetilde{R}}{\sqrt{n}} + \frac{L^3\widetilde{R}^{4/3}}{m^{1/6}}\bigg\}\Bigg) + O\Bigg(\sqrt{\frac{\log(1/\delta)}{n}}\Bigg)$$
for all $W \in \mathcal{B}(W^{(0)}, \widetilde{R}\cdot m^{-1/2})$.
We are now ready to prove Theorem 3.4, which combines the trajectory distance analysis in the proof of Theorem 3.3 with Lemma A.3.
Proof of Theorem 3.4. With exactly the same proof as Theorem 3.3, by (A.3) and induction we have $W^{(0)},W^{(1)},\dots,W^{(T)} \in \mathcal{B}(W^{(0)},\widetilde{R}m^{-1/2})$ with $\widetilde{R} = O(\sqrt{L}R)$. Therefore by Lemma A.3, we have
$$|\mathcal{E}_{\mathcal{D}}(W^{(t)}) - \mathcal{E}_S(W^{(t)})| \le \widetilde{O}\Bigg(\min\bigg\{4^L L^2 R\sqrt{\frac{m}{n}},\ \frac{L^{3/2}R}{\sqrt{n}} + \frac{L^{11/3}R^{4/3}}{m^{1/6}}\bigg\}\Bigg) + O\Bigg(\sqrt{\frac{\log(1/\delta)}{n}}\Bigg)$$
for all $t = 0,1,\dots,T$. Note that $\mathbf{1}\{z < 0\} \le -2\ell'(z)$. Therefore,
$$L^{0\text{-}1}_{\mathcal{D}}(W^{(t)}) \le 2\mathcal{E}_{\mathcal{D}}(W^{(t)}) \le 2L_S(W^{(t)}) + \widetilde{O}\Bigg(\min\bigg\{4^L L^2 R\sqrt{\frac{m}{n}},\ \frac{L^{3/2}R}{\sqrt{n}} + \frac{L^{11/3}R^{4/3}}{m^{1/6}}\bigg\}\Bigg) + O\Bigg(\sqrt{\frac{\log(1/\delta)}{n}}\Bigg)$$
for $t = 0,1,\dots,T$, where the last inequality is by $\mathcal{E}_S(W) \le L_S(W)$, since $-\ell'(z) \le \ell(z)$ for all $z \in \mathbb{R}$. This finishes the proof.
A.3 PROOF OF THEOREM 3.5
In this section we provide the full proof of Theorem 3.5. We first give the following result, which is the counterpart of Lemma 5.1 for SGD. Again we pick $W^* \in \mathcal{B}(W^{(0)}, Rm^{-1/2})$ such that the loss of the corresponding NTRF model $F_{W^{(0)},W^*}(x)$ achieves $\epsilon_{\mathrm{NTRF}}$. Lemma A.4. Set $\eta = O(L^{-1}M(\tau)^{-2})$. Suppose that $W^* \in \mathcal{B}(W^{(0)},\tau)$ and $W^{(n')} \in \mathcal{B}(W^{(0)},\tau)$ for all $0 \le n' \le n-1$. Then it holds that
$$\|W^{(0)} - W^*\|_F^2 - \|W^{(n')} - W^*\|_F^2 \ge \Big(\frac{3}{2} - 4\epsilon_{\mathrm{app}}(\tau)\Big)\eta\sum_{i=1}^{n'} L_i(W^{(i-1)}) - 2n\eta\,\epsilon_{\mathrm{NTRF}}.$$
Our proof is based on the application of Lemma A.4 and an online-to-batch conversion argument (Cesa-Bianchi et al., 2004; Cao and Gu, 2019; Ji and Telgarsky, 2020). We introduce a surrogate loss $\mathcal{E}_i(W) = -\ell'[y_i \cdot f_W(x_i)]$ and its population version $\mathcal{E}_{\mathcal{D}}(W) = \mathbb{E}_{(x,y)\sim\mathcal{D}}[-\ell'(y \cdot f_W(x))]$, which have been used in (Ji and Telgarsky, 2018; Cao and Gu, 2019; Nitanda and Suzuki, 2019; Ji and Telgarsky, 2020).
Proof of Theorem 3.5. Recall that $W^*$ is chosen such that
$$\frac{1}{n}\sum_{i=1}^n \ell\big(y_i F_{W^{(0)},W^*}(x_i)\big) = \epsilon_{\mathrm{NTRF}}$$
and $W^* \in \mathcal{B}(W^{(0)}, Rm^{-1/2})$. To apply Lemma A.4, we need the region $\mathcal{B}(W^{(0)},\tau)$ to include both $W^*$ and the SGD iterates. This motivates us to set $\tau = \widetilde{O}(L^{1/2}m^{-1/2}R)$, which is slightly larger than $m^{-1/2}R$. With this choice of $\tau$, by Lemma A.1 we have $\epsilon_{\mathrm{app}}(\tau) = \widetilde{O}(\tau^{4/3}m^{1/2}L^3) = \widetilde{O}(R^{4/3}L^{11/3}m^{-1/6})$. Therefore, we can set
$$m = \widetilde{\Omega}(R^8L^{22})$$
to ensure that $\epsilon_{\mathrm{app}}(\tau) \le 1/8$, where $\widetilde{\Omega}(\cdot)$ hides polylogarithmic dependencies on the network depth $L$, the NTRF function class size $R$, and the failure probability parameter $\delta$.
Then by Lemma A.4, we have with probability at least $1-\delta$,
$$\|W^{(0)} - W^*\|_F^2 - \|W^{(n')} - W^*\|_F^2 \ge \eta\sum_{i=1}^{n'} L_i(W^{(i-1)}) - 2n\eta\,\epsilon_{\mathrm{NTRF}} \tag{A.4}$$
as long as $W^{(0)},\dots,W^{(n'-1)} \in \mathcal{B}(W^{(0)},\tau)$.
We then prove Theorem 3.5 in two steps: 1) all iterates stay inside $\mathcal{B}(W^{(0)},\tau)$; and 2) convergence of online SGD.
All iterates stay inside $\mathcal{B}(W^{(0)},\tau)$. Similar to the proof of Theorem 3.3, we prove this part by induction. Assuming $W^{(i)} \in \mathcal{B}(W^{(0)},\tau)$ for all $i \le n'-1$, by (A.4) we have
$$\|W^{(n')} - W^*\|_F^2 \le \|W^{(0)} - W^*\|_F^2 + 2n\eta\,\epsilon_{\mathrm{NTRF}} \le LR^2\cdot m^{-1} + 2n\eta\,\epsilon_{\mathrm{NTRF}},$$
where the last inequality is by $W^* \in \mathcal{B}(W^{(0)}, Rm^{-1/2})$. Then by the triangle inequality, we further get
$$\|W^{(n')}_l - W^{(0)}_l\|_F \le \|W^{(n')}_l - W^*_l\|_F + \|W^*_l - W^{(0)}_l\|_F \le \|W^{(n')} - W^*\|_F + \|W^*_l - W^{(0)}_l\|_F \le O\big(\sqrt{L}Rm^{-1/2} + \sqrt{n\eta\,\epsilon_{\mathrm{NTRF}}}\big).$$
Then by our choice of $\eta = \Theta\big(m^{-1}\cdot(LR^2n^{-1}\epsilon_{\mathrm{NTRF}}^{-1} \wedge L^{-1})\big)$, we have $\|W^{(n')} - W^{(0)}\|_F \le 2\sqrt{L}Rm^{-1/2} \le \tau$. This completes the proof of the first part.
Convergence of online SGD. By (A.4), we have
$$\|W^{(0)} - W^*\|_F^2 - \|W^{(n)} - W^*\|_F^2 \ge \eta\Big(\sum_{i=1}^{n} L_i(W^{(i-1)}) - 2n\epsilon_{\mathrm{NTRF}}\Big).$$
Dividing both sides by $\eta n$ and rearranging terms, we get
$$\frac{1}{n}\sum_{i=1}^{n} L_i(W^{(i-1)}) \le \frac{\|W^{(0)} - W^*\|_F^2 - \|W^{(n)} - W^*\|_F^2}{\eta n} + 2\epsilon_{\mathrm{NTRF}} \le \frac{L^2R^2}{n} + 3\epsilon_{\mathrm{NTRF}},$$
where the second inequality follows from the facts that $W^* \in \mathcal{B}(W^{(0)}, R\cdot m^{-1/2})$ and $\eta = \Theta\big(m^{-1}\cdot(LR^2n^{-1}\epsilon_{\mathrm{NTRF}}^{-1} \wedge L^{-1})\big)$. By Lemma 4.3 in (Ji and Telgarsky, 2020) and the fact that $\mathcal{E}_i(W^{(i-1)}) \le L_i(W^{(i-1)})$, we have
$$\frac{1}{n}\sum_{i=1}^{n} L^{0\text{-}1}_{\mathcal{D}}(W^{(i-1)}) \le \frac{2}{n}\sum_{i=1}^{n}\mathcal{E}_{\mathcal{D}}(W^{(i-1)}) \le \frac{8}{n}\sum_{i=1}^{n}\mathcal{E}_i(W^{(i-1)}) + \frac{8\log(1/\delta)}{n} \le \frac{8L^2R^2}{n} + \frac{8\log(1/\delta)}{n} + 24\epsilon_{\mathrm{NTRF}}.$$
This completes the proof of the second part.
B PROOF OF RESULTS IN SECTION 4
B.1 PROOF OF PROPOSITION 4.2
We first provide the following lemma, which gives an upper bound on the neural network output at initialization. Lemma B.1 (Lemma 4.4 in Cao and Gu (2019)). Under Assumption 3.1, if $m \ge \bar{C}L\log(nL/\delta)$ for some absolute constant $\bar{C}$, then with probability at least $1-\delta$, we have
$$|f_{W^{(0)}}(x_i)| \le C\sqrt{\log(n/\delta)}$$
for some absolute constant $C$.
Proof of Proposition 4.2. Under Assumption 4.1, we can find a collection of matrices $U^* = \{U_1^*,\cdots,U_L^*\}$ with $\sum_{l=1}^L\|U_l^*\|_F^2 = 1$ such that $y_i\langle\nabla f_{W^{(0)}}(x_i), U^*\rangle \ge m^{1/2}\gamma$ for at least a $1-\rho$ fraction of the training data. By Lemma B.1, for all $i \in [n]$ we have $|f_{W^{(0)}}(x_i)| \le C\sqrt{\log(n/\delta)}$ for some absolute constant $C$. Then for any positive constant $\lambda$, we have for at least a $1-\rho$ portion of the data,
$$y_i\big(f_{W^{(0)}}(x_i) + \langle\nabla f_{W^{(0)}}(x_i), \lambda U^*\rangle\big) \ge m^{1/2}\lambda\gamma - C\sqrt{\log(n/\delta)}.$$
For this fraction of the data, we can set
$$\lambda = \frac{C'\big[\log^{1/2}(n/\delta) + \log(1/\epsilon)\big]}{m^{1/2}\gamma},$$
where $C'$ is an absolute constant, and get
$$m^{1/2}\lambda\gamma - C\sqrt{\log(n/\delta)} \ge \log(1/\epsilon).$$
Now we let $W^* = W^{(0)} + \lambda U^*$. By the choice of $R$ in Proposition 4.2, we have $W^* \in \mathcal{B}(W^{(0)}, R\cdot m^{-1/2})$. The above inequality implies that for at least a $1-\rho$ fraction of the data, we have $\ell\big(y_iF_{W^{(0)},W^*}(x_i)\big) \le \epsilon$. For the remaining data, we have
$$y_i\big(f_{W^{(0)}}(x_i) + \langle\nabla f_{W^{(0)}}(x_i), \lambda U^*\rangle\big) \ge -C\sqrt{\log(n/\delta)} - \lambda\|\nabla f_{W^{(0)}}(x_i)\|_2 \ge -C_1R$$
for some absolute positive constant $C_1$, where the last inequality follows from the fact that $\|\nabla f_{W^{(0)}}(x_i)\|_2 = \widetilde{O}(m^{1/2})$ (see Lemma A.1 for detail). Then, since we use the cross-entropy loss, it follows that for this fraction of the training data we have $\ell\big(y_iF_{W^{(0)},W^*}(x_i)\big) \le C_2R$ for some constant $C_2$. Combining the results for these two fractions of training data, we conclude that
$$\epsilon_{\mathrm{NTRF}} \le n^{-1}\sum_{i=1}^n \ell\big(y_iF_{W^{(0)},W^*}(x_i)\big) \le (1-\rho)\epsilon + \rho\cdot O(R).$$
This completes the proof.
B.2 PROOF OF PROPOSITION 4.4
Proof of Proposition 4.4. We are going to prove that Assumption 4.3 implies the existence of a good function in the NTRF function class.
By Definition 3.2 and the definition of the cross-entropy loss, our goal is to prove that there exists a collection of matrices $W = \{W_1, W_2\}$ satisfying $\max\{\|W_1 - W_1^{(0)}\|_F, \|W_2 - W_2^{(0)}\|_2\} \le R\cdot m^{-1/2}$ such that $y_i\cdot\big[f_{W^{(0)}}(x_i) + \langle\nabla_{W_1}f_{W^{(0)}}, W_1 - W_1^{(0)}\rangle + \langle\nabla_{W_2}f_{W^{(0)}}, W_2 - W_2^{(0)}\rangle\big] \ge \log(2/\epsilon)$. We first consider $\nabla_{W_1}f_{W^{(0)}}(x_i)$, which has the form
$$\big(\nabla_{W_1}f_{W^{(0)}}(x_i)\big)_j = m^{1/2}\cdot w_{2,j}^{(0)}\cdot\sigma'(\langle w_{1,j}^{(0)}, x_i\rangle)\cdot x_i.$$
Note that $w_{2,j}^{(0)}$ and $w_{1,j}^{(0)}$ are independently generated from $N(0, 1/m)$ and $N(0, 2I/m)$ respectively, thus we have $\mathbb{P}(|w_{2,j}^{(0)}| \ge 0.47m^{-1/2}) \ge 1/2$. By Hoeffding's inequality, we know that with probability at least $1-\exp(-m/8)$, there are at least $m/4$ nodes, whose union is denoted by $S$, satisfying $|w_{2,j}^{(0)}| \ge 0.47m^{-1/2}$. We then only focus on the nodes in the set $S$. Note that $W_1^{(0)}$ and $W_2^{(0)}$ are independently generated. Then by Assumption 4.3 and Hoeffding's inequality, there exists a function $u(\cdot): \mathbb{R}^d \to \mathbb{R}^d$ such that with probability at least $1-\delta'$,
$$\frac{1}{|S|}\sum_{j\in S} y_i\cdot\langle u(w_{1,j}^{(0)}), x_i\rangle\cdot\sigma'(\langle w_{1,j}^{(0)}, x_i\rangle) \ge \gamma - \sqrt{\frac{2\log(1/\delta')}{|S|}}.$$
Define $v_j = u(w_{1,j}^{(0)})/w_{2,j}$ if $|w_{2,j}| \ge 0.47m^{-1/2}$ and $v_j = 0$ otherwise. Then we have
$$\sum_{j=1}^m y_i\cdot w_{2,j}^{(0)}\cdot\langle v_j, x_i\rangle\cdot\sigma'(\langle w_{1,j}^{(0)}, x_i\rangle) = \sum_{j\in S} y_i\cdot\langle u(w_{1,j}^{(0)}), x_i\rangle\cdot\sigma'(\langle w_{1,j}^{(0)}, x_i\rangle) \ge |S|\gamma - \sqrt{2|S|\log(1/\delta')}.$$
Setting $\delta = 2n\delta'$ and applying a union bound, we have with probability at least $1-\delta/2$,
$$\sum_{j=1}^m y_i\cdot w_{2,j}^{(0)}\cdot\langle v_j, x_i\rangle\cdot\sigma'(\langle w_{1,j}^{(0)}, x_i\rangle) \ge |S|\gamma - \sqrt{2|S|\log(2n/\delta)}.$$
Moreover, with probability at least $1-\exp(-m/8)$ we have $|S| \ge m/4$. In Assumption 4.3, since $y_i \in \{\pm 1\}$ and $|\sigma'(\cdot)|, \|u(\cdot)\|_2, \|x_i\|_2 \le 1$ for $i=1,\dots,n$, we see that $\gamma \le 1$. Then if $m \ge 32\log(n/\delta)/\gamma^2$, with probability at least $1-\delta/2-\exp\big(-4\log(n/\delta)/\gamma^2\big) \ge 1-\delta$,
$$\sum_{j=1}^m y_i\cdot w_{2,j}^{(0)}\cdot\langle v_j, x_i\rangle\cdot\sigma'(\langle w_{1,j}^{(0)}, x_i\rangle) \ge |S|\gamma/2.$$
Letting $U = (v_1, v_2, \cdots, v_m)^\top/\sqrt{m|S|}$, we have
$$y_i\langle\nabla_{W_1}f_{W^{(0)}}(x_i), U\rangle = \frac{1}{\sqrt{|S|}}\sum_{j=1}^m y_i\cdot w_{2,j}^{(0)}\cdot\langle v_j, x_i\rangle\cdot\sigma'(\langle w_{1,j}^{(0)}, x_i\rangle) \ge \frac{\sqrt{|S|}\gamma}{2} \ge \frac{m^{1/2}\gamma}{4},$$
where the last inequality is by the fact that $|S| \ge m/4$. Besides, by concentration and a Gaussian tail bound, we have $|f_{W^{(0)}}(x_i)| \le C\log(n/\delta)$ for some absolute constant $C$. Therefore, letting $W_1 = W_1^{(0)} + 4\big(\log(2/\epsilon) + C\log(n/\delta)\big)m^{-1/2}U/\gamma$ and $W_2 = W_2^{(0)}$, we have
$$y_i\cdot\big[f_{W^{(0)}}(x_i) + \langle\nabla_{W_1}f_{W^{(0)}}, W_1 - W_1^{(0)}\rangle + \langle\nabla_{W_2}f_{W^{(0)}}, W_2 - W_2^{(0)}\rangle\big] \ge \log(2/\epsilon). \tag{B.1}$$
Since $\|u(\cdot)\|_2 \le 1$, we have $\|U\|_F \le 1/0.47 \le 2.2$. Therefore, we further have $\|W_1 - W_1^{(0)}\|_F \le 8.8\gamma^{-1}\big(\log(2/\epsilon) + C\log(n/\delta)\big)\cdot m^{-1/2}$. This implies that $W \in \mathcal{B}(W^{(0)}, R\cdot m^{-1/2})$ with $R = O\big(\log\big(n/(\delta\epsilon)\big)/\gamma\big)$. Applying the inequality $\ell(\log(2/\epsilon)) \le \epsilon$ to (B.1) gives $\ell(y_i\cdot F_{W^{(0)},W}(x_i)) \le \epsilon$ for all $i = 1,\dots,n$. This completes the proof.
B.3 PROOF OF PROPOSITION 4.6
Based on our theoretical analysis, the major goal is to show that there exist certain choices of $R$ and $m$ such that the best NTRF model in the function class $\mathcal{F}(W^{(0)}, R)$ can achieve $\epsilon$ training error. In this proof, we prove a stronger result by showing that, given the quantities of $R$ and $m$ specified in Proposition 4.6, there exists an NTRF model with parameter $W^*$ that satisfies $n^{-1}\sum_{i=1}^n \ell\big(y_iF_{W^{(0)},W^*}(x_i)\big) \le \epsilon$. In order to do so, we consider training the NTRF model via a different surrogate loss function. Specifically, we consider the squared hinge loss $\widetilde{\ell}(x) = \big(\max\{\lambda - x, 0\}\big)^2$, where $\lambda$ denotes the target margin. In the later proof, we choose $\lambda = \log(1/\epsilon) + 1$ so that the condition $\widetilde{\ell}(x) \le 1$ guarantees $x \ge \log(1/\epsilon)$. Moreover, we consider using gradient flow, i.e., gradient descent with infinitesimal step size, to train the NTRF model. Therefore, in the remaining part of the proof, we consider optimizing the NTRF parameter $W$ with the loss function
$$\widetilde{L}_S(W) = \frac{1}{n}\sum_{i=1}^n \widetilde{\ell}\big(y_iF_{W^{(0)},W}(x_i)\big).$$
Moreover, for simplicity, we only consider optimizing the parameters in the last hidden layer (i.e., $W_{L-1}$). Then the gradient flow can be formulated as
$$\frac{dW_{L-1}(t)}{dt} = -\nabla_{W_{L-1}}\widetilde{L}_S(W(t)), \qquad \frac{dW_l(t)}{dt} = 0 \ \text{ for any } l \ne L-1.$$
Note that the NTRF model is a linear model; thus by Definition 3.2, we have
$$\nabla_{W_{L-1}}\widetilde{L}_S(W(t)) = \frac{1}{n}\sum_{i=1}^n y_i\,\widetilde{\ell}'\big(y_iF_{W^{(0)},W(t)}(x_i)\big)\cdot\nabla_{W_{L-1}}F_{W^{(0)},W(t)}(x_i) = \frac{1}{n}\sum_{i=1}^n y_i\,\widetilde{\ell}'\big(y_iF_{W^{(0)},W(t)}(x_i)\big)\cdot\nabla_{W_{L-1}^{(0)}}f_{W^{(0)}}(x_i). \tag{B.2}$$
Then it is clear that $\nabla_{W_{L-1}}\widetilde{L}_S(W(t))$ always lies in the span of the fixed directions $\{\nabla_{W_{L-1}^{(0)}}f_{W^{(0)}}(x_i)\}_{i\in[n]}$ throughout the optimization. In order to prove the convergence of gradient flow and characterize the quantity of $R$, we use Lemma B.1, which gives an upper bound on the model output at initialization, together with the following lemma, which characterizes a lower bound on the Frobenius norm of the partial gradient $\nabla_{W_{L-1}}\widetilde{L}_S(W)$.
Lemma B.2 (Lemma B.5 in Zou et al. (2019)). Under Assumptions 3.1 and 4.5, if $m = \widetilde{\Omega}(n^2\phi^{-1})$, then for all $t \ge 0$, with probability at least $1-\exp\big(-O(m\phi/n)\big)$, there exists a positive constant $C$ such that
$$\|\nabla_{W_{L-1}}\widetilde{L}_S(W(t))\|_F^2 \ge \frac{Cm\phi}{n^5}\bigg[\sum_{i=1}^n \widetilde{\ell}'\big(y_iF_{W^{(0)},W(t)}(x_i)\big)\bigg]^2.$$
We slightly modified the original version of this lemma since we use different models (we consider the NTRF model while Zou et al. (2019) consider the neural network model). However, by (B.2), the gradient $\nabla\widetilde{L}_S(W)$ can be regarded as a gradient of the neural network model evaluated at initialization (i.e., it is built from $\nabla_{W_{L-1}^{(0)}}f_{W^{(0)}}(x_i)$), so the lemma remains valid in our setting. Now we are ready to present the proof.
Proof of Proposition 4.6. Recall that we only consider training the last hidden layer weights, i.e., $W_{L-1}$, via gradient flow with the squared hinge loss, and our goal is to prove that gradient flow is able to find an NTRF model within the function class $\mathcal{F}(W^{(0)}, R)$ around the initialization, i.e., achieving $n^{-1}\sum_{i=1}^n \ell\big(y_iF_{W^{(0)},W^*}(x_i)\big) \le \epsilon$. Let $W(t)$ be the weights at time $t$; gradient flow implies that
$$\frac{d\widetilde{L}_S(W(t))}{dt} = -\|\nabla_{W_{L-1}}\widetilde{L}_S(W(t))\|_F^2 \le -\frac{Cm\phi}{n^5}\bigg(\sum_{i=1}^n \widetilde{\ell}'\big(y_iF_{W^{(0)},W(t)}(x_i)\big)\bigg)^2 = -\frac{4Cm\phi\,\widetilde{L}_S(W(t))}{n^3},$$
where the first equality is due to the fact that we only train the last hidden layer, the inequality is by Lemma B.2, and the last equality follows from the fact that $\widetilde{\ell}'(\cdot) = -2\sqrt{\widetilde{\ell}(\cdot)}$. Solving the above inequality gives
$$\widetilde{L}_S(W(t)) \le \widetilde{L}_S(W(0))\cdot\exp\bigg(-\frac{4Cm\phi t}{n^3}\bigg). \tag{B.3}$$
Then, setting $T = O\big(n^3m^{-1}\phi^{-1}\cdot\log(\widetilde{L}_S(W(0))/\epsilon')\big)$ and $\epsilon' = 1/n$, we have $\widetilde{L}_S(W(T)) \le \epsilon'$. It follows that $\widetilde{\ell}\big(y_iF_{W^{(0)},W(T)}(x_i)\big) \le 1$, which implies that $y_iF_{W^{(0)},W(T)}(x_i) \ge \log(1/\epsilon)$ and thus $n^{-1}\sum_{i=1}^n \ell\big(y_iF_{W^{(0)},W(T)}(x_i)\big) \le \epsilon$. Therefore, $W(T)$ is exactly the NTRF model $W^*$ we are looking for.
The next step is to characterize the distance between $W(T)$ and $W(0)$ in order to characterize the quantity of $R$. Since $\|\nabla_{W_{L-1}}\widetilde{L}_S(W(t))\|_F^2 \ge 4Cm\phi\widetilde{L}_S(W(t))/n^3$, we have
$$\frac{d\sqrt{\widetilde{L}_S(W(t))}}{dt} = -\frac{\|\nabla_{W_{L-1}}\widetilde{L}_S(W(t))\|_F^2}{2\sqrt{\widetilde{L}_S(W(t))}} \le -\|\nabla_{W_{L-1}}\widetilde{L}_S(W(t))\|_F\cdot\frac{C^{1/2}m^{1/2}\phi^{1/2}}{n^{3/2}}.$$
Taking the integral on both sides and rearranging terms, we have
$$\int_{0}^{T}\|\nabla_{W_{L-1}}\widetilde{L}_S(W(t))\|_F\,dt \le \frac{n^{3/2}}{C^{1/2}m^{1/2}\phi^{1/2}}\cdot\Big(\sqrt{\widetilde{L}_S(W(0))} - \sqrt{\widetilde{L}_S(W(T))}\Big).$$
Since the L.H.S. of the above inequality is an upper bound on $\|W(t) - W(0)\|_F$, we have for any $t \ge 0$,
$$\|W(t) - W(0)\|_F \le \frac{n^{3/2}}{C^{1/2}m^{1/2}\phi^{1/2}}\cdot\sqrt{\widetilde{L}_S(W(0))} = O\bigg(\frac{n^{3/2}\log\big(n/(\delta\epsilon)\big)}{m^{1/2}\phi^{1/2}}\bigg),$$
where the second inequality is by Lemma B.1 and our choice of $\lambda = \log(1/\epsilon) + 1$. This implies that there exists a point $W^*$ within the class $\mathcal{F}(W^{(0)}, R)$ with
$$R = O\bigg(\frac{n^{3/2}\log\big(n/(\delta\epsilon)\big)}{\phi^{1/2}}\bigg)$$
such that
$$\epsilon_{\mathrm{NTRF}} := n^{-1}\sum_{i=1}^n \ell\big(y_iF_{W^{(0)},W^*}(x_i)\big) \le \epsilon.$$
Then by Theorem 3.3, and more specifically (A.1), we can compute the minimal required neural network width as follows:
$$m = \widetilde{\Omega}(R^8L^{22}) = \widetilde{\Omega}\bigg(\frac{L^{22}n^{12}}{\phi^4}\bigg).$$
This completes the proof.
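To make the gradient-flow argument above concrete, the sketch below runs an Euler-discretized gradient flow on the linear NTRF model with the squared hinge loss, training only the last-hidden-layer block. The interface is an assumption of the sketch: Phi[i] stands for the flattened feature $\nabla_{W_{L-1}^{(0)}}f_{W^{(0)}}(x_i)$ and f0[i] for the offset $f_{W^{(0)}}(x_i)$, both taken as precomputed inputs.

import numpy as np

def ntrf_gradient_flow(Phi, f0, y, lam, step=1e-3, iters=10000):
    """Discretized gradient flow on L~(theta) = (1/n) sum_i max(lam - y_i*(f0_i + <phi_i, theta>), 0)^2,
    where theta plays the role of the flattened W_{L-1} - W_{L-1}^{(0)}."""
    n, p = Phi.shape
    theta = np.zeros(p)
    for _ in range(iters):
        margins = y * (f0 + Phi @ theta)
        slack = np.maximum(lam - margins, 0.0)
        grad = -(2.0 / n) * Phi.T @ (y * slack)   # gradient of the squared hinge objective
        theta -= step * grad
    return theta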
C PROOF OF TECHNICAL LEMMAS
Here we provide the proof of Lemmas 5.1, A.3 and A.4.
C.1 PROOF OF LEMMA 5.1
The detailed proof of Lemma 5.1 is given as follows.
Proof of Lemma 5.1. Based on the update rule of gradient descent, i.e., $W^{(t+1)} = W^{(t)} - \eta\nabla_W L_S(W^{(t)})$, we have the following calculation:
$$\|W^{(t)} - W^*\|_F^2 - \|W^{(t+1)} - W^*\|_F^2 = \underbrace{\frac{2\eta}{n}\sum_{i=1}^n \langle W^{(t)} - W^*, \nabla_W L_i(W^{(t)})\rangle}_{I_1} - \underbrace{\eta^2\sum_{l=1}^L\|\nabla_{W_l}L_S(W^{(t)})\|_F^2}_{I_2}, \tag{C.1}$$
where the equation follows from the fact that $L_S(W^{(t)}) = n^{-1}\sum_{i=1}^n L_i(W^{(t)})$. In what follows, we first bound the term $I_1$ on the R.H.S. of (C.1) by approximating the neural network functions with linear models. By assumption, for $t = 0,\dots,t'-1$, $W^{(t)}, W^* \in \mathcal{B}(W^{(0)},\tau)$. Therefore by the definition of $\epsilon_{\mathrm{app}}(\tau)$,
$$y_i\cdot\langle\nabla f_{W^{(t)}}(x_i), W^{(t)} - W^*\rangle \le y_i\cdot\big(f_{W^{(t)}}(x_i) - f_{W^*}(x_i)\big) + \epsilon_{\mathrm{app}}(\tau). \tag{C.2}$$
Moreover, we also have
$$0 \le y_i\cdot\big(f_{W^*}(x_i) - f_{W^{(0)}}(x_i) - \langle\nabla f_{W^{(0)}}(x_i), W^* - W^{(0)}\rangle\big) + \epsilon_{\mathrm{app}}(\tau) = y_i\cdot\big(f_{W^*}(x_i) - F_{W^{(0)},W^*}(x_i)\big) + \epsilon_{\mathrm{app}}(\tau), \tag{C.3}$$
where the equation follows by the definition of $F_{W^{(0)},W^*}(x)$. Adding (C.3) to (C.2) and canceling the terms $y_i\cdot f_{W^*}(x_i)$, we obtain that
$$y_i\cdot\langle\nabla f_{W^{(t)}}(x_i), W^{(t)} - W^*\rangle \le y_i\cdot\big(f_{W^{(t)}}(x_i) - F_{W^{(0)},W^*}(x_i)\big) + 2\epsilon_{\mathrm{app}}(\tau). \tag{C.4}$$
We can now give a lower bound on the first term on the R.H.S. of (C.1). For $i = 1,\dots,n$, applying the chain rule to the loss function gradients and utilizing (C.4), we have
$$\langle W^{(t)} - W^*, \nabla_W L_i(W^{(t)})\rangle = \ell'\big(y_if_{W^{(t)}}(x_i)\big)\cdot y_i\cdot\langle W^{(t)} - W^*, \nabla_W f_{W^{(t)}}(x_i)\rangle \ge \ell'\big(y_if_{W^{(t)}}(x_i)\big)\cdot\big(y_if_{W^{(t)}}(x_i) - y_iF_{W^{(0)},W^*}(x_i) + 2\epsilon_{\mathrm{app}}(\tau)\big) \ge (1-2\epsilon_{\mathrm{app}}(\tau))\,\ell\big(y_if_{W^{(t)}}(x_i)\big) - \ell\big(y_iF_{W^{(0)},W^*}(x_i)\big), \tag{C.5}$$
where the first inequality is by the fact that $\ell'\big(y_if_{W^{(t)}}(x_i)\big) < 0$, and the second inequality is by convexity of $\ell(\cdot)$ and the fact that $-\ell'\big(y_if_{W^{(t)}}(x_i)\big) \le \ell\big(y_if_{W^{(t)}}(x_i)\big)$.
We now proceed to bound the term $I_2$ on the R.H.S. of (C.1). Note that $\ell'(\cdot) < 0$, so the Frobenius norm of the gradient $\nabla_{W_l}L_S(W^{(t)})$ can be upper bounded as follows:
$$\|\nabla_{W_l}L_S(W^{(t)})\|_F = \bigg\|\frac{1}{n}\sum_{i=1}^n \ell'\big(y_if_{W^{(t)}}(x_i)\big)\nabla_{W_l}f_{W^{(t)}}(x_i)\bigg\|_F \le \frac{1}{n}\sum_{i=1}^n -\ell'\big(y_if_{W^{(t)}}(x_i)\big)\cdot\|\nabla_{W_l}f_{W^{(t)}}(x_i)\|_F,$$
where the inequality follows by the triangle inequality. We now utilize the fact that the cross-entropy loss satisfies the inequalities $-\ell'(\cdot) \le \ell(\cdot)$ and $-\ell'(\cdot) \le 1$. Therefore by the definition of $M(\tau)$, we have
$$\sum_{l=1}^L\|\nabla_{W_l}L_S(W^{(t)})\|_F^2 \le O\big(LM(\tau)^2\big)\cdot\bigg(\frac{1}{n}\sum_{i=1}^n -\ell'\big(y_if_{W^{(t)}}(x_i)\big)\bigg)^2 \le O\big(LM(\tau)^2\big)\cdot L_S(W^{(t)}). \tag{C.6}$$
Then we can plug (C.5) and (C.6) into (C.1) and obtain
$$\|W^{(t)} - W^*\|_F^2 - \|W^{(t+1)} - W^*\|_F^2 \ge \frac{2\eta}{n}\sum_{i=1}^n\Big[(1-2\epsilon_{\mathrm{app}}(\tau))\,\ell\big(y_if_{W^{(t)}}(x_i)\big) - \ell\big(y_iF_{W^{(0)},W^*}(x_i)\big)\Big] - O\big(\eta^2LM(\tau)^2\big)\cdot L_S(W^{(t)}) \ge \Big[\frac{3}{2} - 4\epsilon_{\mathrm{app}}(\tau)\Big]\eta L_S(W^{(t)}) - \frac{2\eta}{n}\sum_{i=1}^n \ell\big(y_iF_{W^{(0)},W^*}(x_i)\big),$$
where the last inequality is by $\eta = O(L^{-1}M(\tau)^{-2})$ and merging the third term on the second line into the first term. Taking a telescoping sum from $t = 0$ to $t = t'-1$ and plugging in the definition $\frac{1}{n}\sum_{i=1}^n \ell\big(y_iF_{W^{(0)},W^*}(x_i)\big) = \epsilon_{\mathrm{NTRF}}$ completes the proof.
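As a quick numerical sanity check of the three facts about the cross-entropy loss used in (C.5) and (C.6), the snippet below verifies, on random inputs, the convexity inequality $\ell'(a)(a-b) \ge \ell(a) - \ell(b)$ and the bounds $-\ell'(z) \le \ell(z)$ and $-\ell'(z) \le 1$. It is an illustrative check only, not part of the proof.

import numpy as np

rng = np.random.default_rng(0)
l = lambda z: np.log1p(np.exp(-z))          # cross-entropy loss l(z) = log(1 + exp(-z))
lp = lambda z: -1.0 / (1.0 + np.exp(z))     # its derivative l'(z)
a, b = rng.normal(size=1000), rng.normal(size=1000)
assert np.all(lp(a) * (a - b) >= l(a) - l(b) - 1e-9)   # convexity of l
assert np.all(-lp(a) <= l(a) + 1e-9) and np.all(-lp(a) <= 1.0)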
C.2 PROOF OF LEMMA A.3
Proof of Lemma A.3. We first denote $\mathcal{W} = \mathcal{B}(W^{(0)}, \widetilde{R}\cdot m^{-1/2})$, and define the corresponding neural network function class and surrogate loss function class as $\mathcal{F} = \{f_W(x) : W \in \mathcal{W}\}$ and $\mathcal{G} = \{-\ell'[y\cdot f_W(x)] : W \in \mathcal{W}\}$ respectively. By standard uniform convergence results in terms of the empirical Rademacher complexity (Bartlett and Mendelson, 2002; Mohri et al., 2018; Shalev-Shwartz and Ben-David, 2014), with probability at least $1-\delta$ we have
$$\sup_{W\in\mathcal{W}}|\mathcal{E}_S(W) - \mathcal{E}_{\mathcal{D}}(W)| = \sup_{W\in\mathcal{W}}\bigg|-\frac{1}{n}\sum_{i=1}^n \ell'\big[y_i\cdot f_W(x_i)\big] + \mathbb{E}_{(x,y)\sim\mathcal{D}}\,\ell'\big[y\cdot f_W(x)\big]\bigg| \le 2\widehat{\mathcal{R}}_n(\mathcal{G}) + C_1\sqrt{\frac{\log(1/\delta)}{n}},$$
where $C_1$ is an absolute constant, and
$$\widehat{\mathcal{R}}_n(\mathcal{G}) = \mathbb{E}_{\xi_i\sim\mathrm{Unif}(\{\pm1\})}\bigg\{\sup_{W\in\mathcal{W}}\frac{1}{n}\sum_{i=1}^n \xi_i\,\ell'\big[y_i\cdot f_W(x_i)\big]\bigg\}$$
is the empirical Rademacher complexity of the function class $\mathcal{G}$. We now provide two bounds on $\widehat{\mathcal{R}}_n(\mathcal{G})$, whose combination gives the final result of Lemma A.3. First, by Corollary 5.35 in (Vershynin, 2010), with probability at least $1-L\cdot\exp(-\Omega(m))$, $\|W^{(0)}_l\|_2 \le 3$ for all $l \in [L]$. Therefore for all $W \in \mathcal{W}$ we have $\|W_l\|_2 \le 4$. Moreover, standard concentration inequalities on the norm of the first row of $W^{(0)}_l$ also imply that $\|W_l\|_2 \ge 0.5$ for all $W \in \mathcal{W}$ and $l \in [L]$. Therefore, an adaptation of the bound in (Bartlett et al., 2017)¶ gives
$$\widehat{\mathcal{R}}_n(\mathcal{F}) \le \widetilde{O}\Bigg(\sup_{W\in\mathcal{W}}\Bigg\{\frac{m^{1/2}}{\sqrt{n}}\cdot\bigg[\prod_{l=1}^L\|W_l\|_2\bigg]\cdot\bigg[\sum_{l=1}^L\frac{\|W_l^\top - W_l^{(0)\top}\|_{2,1}^{2/3}}{\|W_l\|_2^{2/3}}\bigg]^{3/2}\Bigg\}\Bigg) \le \widetilde{O}\Bigg(\sup_{W\in\mathcal{W}}\Bigg\{\frac{4^Lm^{1/2}}{\sqrt{n}}\cdot\bigg[\sum_{l=1}^L\big(\sqrt{m}\cdot\|W_l^\top - W_l^{(0)\top}\|_F\big)^{2/3}\bigg]^{3/2}\Bigg\}\Bigg) \le \widetilde{O}\bigg(4^LL^{3/2}\widetilde{R}\cdot\sqrt{\frac{m}{n}}\bigg). \tag{C.7}$$
We now derive the second bound on $\widehat{\mathcal{R}}_n(\mathcal{G})$, which is inspired by the proof provided in (Cao and Gu, 2020). Since $y \in \{+1, -1\}$, $|\ell'(z)| \le 1$ and $\ell'(z)$ is 1-Lipschitz continuous, by standard empirical Rademacher complexity bounds (Bartlett and Mendelson, 2002; Mohri et al., 2018; Shalev-Shwartz and Ben-David, 2014), we have
$$\widehat{\mathcal{R}}_n(\mathcal{G}) \le \widehat{\mathcal{R}}_n(\mathcal{F}) = \mathbb{E}_{\xi_i\sim\mathrm{Unif}(\{\pm1\})}\bigg[\sup_{W\in\mathcal{W}}\frac{1}{n}\sum_{i=1}^n \xi_i f_W(x_i)\bigg],$$
¶Bartlett et al. (2017) only proved the Rademacher complexity bound for the composition of the ramp loss and the neural network function. In our setting essentially the ramp loss is replaced with the ´`1p¨q function, which is bounded and 1-Lipschitz continuous. The proof in our setting is therefore exactly the same as the proof given in (Bartlett et al., 2017), and we can apply Theorem 3.3 and Lemma A.5 in (Bartlett et al., 2017) to obtain the desired bound we present here.
where $\widehat{\mathcal{R}}_n(\mathcal{F})$ is the empirical Rademacher complexity of the function class $\mathcal{F}$. We have
$$\widehat{\mathcal{R}}_n[\mathcal{F}] \le \underbrace{\mathbb{E}_{\xi}\bigg\{\sup_{W\in\mathcal{W}}\frac{1}{n}\sum_{i=1}^n \xi_i\big[f_W(x_i) - F_{W^{(0)},W}(x_i)\big]\bigg\}}_{I_1} + \underbrace{\mathbb{E}_{\xi}\bigg\{\sup_{W\in\mathcal{W}}\frac{1}{n}\sum_{i=1}^n \xi_i F_{W^{(0)},W}(x_i)\bigg\}}_{I_2}$$
1. What is the main contribution of the paper regarding deep neural networks?
2. What are the strengths of the paper, particularly in its analysis and conclusions?
3. Do you have any questions or concerns regarding the paper's content, such as the applicability to smooth activation functions or the optimality of the rates obtained? | Review | Review
Overview
This paper greatly relaxed the rate of over-parametrization for deep neural networks. Previously, shallow networks required fewer parameters (polylog(eps)), while deep networks required a large number of parameters (\Omega (eps^{-14})). This paper shows that faster rates can be achieved in deep networks. The key result is the analysis around the initial values using the Taylor approximation, which is independent of the shallowness of the layers. This result allows the paper to achieve rates similar to existing shallow neural networks.
Comment.
The results are impactful, showing that global convergence can be said to be possible with over-parametrization, even in deep neural networks with smaller number of nodes. The paper is carefully written and the differences from existing research and the place of novelty are easy to see.
My question is what difference does this make to the smooth activation function case? Lemma 5.1 doesn't seem to make much use of the ReLU property, is the result similar to the smooth case in this case?
Another question relates to optimality. Is it possible to give an additional analysis of the theoretical limitations of the rates obtained here to see if they can be further improved?
The other one, which is not that important, is the value of 4^L that appears in Theorem 3.4. This value will be a major obstacle to thinking about deep neural nets, but I wonder if it can be explained. |
ICLR | Title
How Much Over-parameterization Is Sufficient to Learn Deep ReLU Networks?
Abstract
A recent line of research on deep learning focuses on the extremely overparameterized setting, and shows that when the network width is larger than a high degree polynomial of the training sample size n and the inverse of the target error ́1, deep neural networks learned by (stochastic) gradient descent enjoy nice optimization and generalization guarantees. Very recently, it is shown that under certain margin assumptions on the training data, a polylogarithmic width condition suffices for two-layer ReLU networks to converge and generalize (Ji and Telgarsky, 2020). However, whether deep neural networks can be learned with such a mild over-parameterization is still an open question. In this work, we answer this question affirmatively and establish sharper learning guarantees for deep ReLU networks trained by (stochastic) gradient descent. In specific, under certain assumptions made in previous work, our optimization and generalization guarantees hold with network width polylogarithmic in n and ́1. Our results push the study of over-parameterized deep neural networks towards more practical settings.
1 INTRODUCTION
Deep neural networks have become one of the most important and prevalent machine learning models due to their remarkable power in many real-world applications. However, the success of deep learning has not been well-explained in theory. It remains mysterious why standard optimization algorithms tend to find a globally optimal solution, despite the highly non-convex landscape of the training loss function. Moreover, despite the extremely large amount of parameters, deep neural networks rarely over-fit, and can often generalize well to unseen data and achieve good test accuracy. Understanding these mysterious phenomena on the optimization and generalization of deep neural networks is one of the most fundamental problems in deep learning theory.
Recent breakthroughs have shed light on the optimization and generalization of deep neural networks (DNNs) under the over-parameterized setting, where the hidden layer width is extremely large (much larger than the number of training examples). It has been shown that with the standard random initialization, the training of over-parameterized deep neural networks can be characterized by a kernel function called neural tangent kernel (NTK) (Jacot et al., 2018; Arora et al., 2019b). In the neural tangent kernel regime (or lazy training regime (Chizat et al., 2019)), the neural network function behaves similarly as its first-order Taylor expansion at initialization (Jacot et al., 2018; Lee et al., 2019; Arora et al., 2019b; Cao and Gu, 2019), which enables feasible optimization and generalization analysis. In terms of optimization, a line of work (Du et al., 2019b; Allen-Zhu et al., 2019b; Zou et al., 2019; Zou and Gu, 2019) proved that for sufficiently wide neural networks, (stochastic) gradient descent (GD/SGD) can successfully find a global optimum of the training loss function. For generalization, Allen-Zhu et al. (2019a); Arora et al. (2019a); Cao and Gu (2019) established generalization bounds of neural networks trained with (stochastic) gradient descent, and showed that the neural networks can learn target functions in certain reproducing kernel Hilbert space (RKHS) or the corresponding random feature function class.
Although existing results in the neural tangent kernel regime have provided important insights into the learning of deep neural networks, they require the neural network to be extremely wide. ∗Equal contribution.
The typical requirement on the network width is a high degree polynomial of the training sample size n and the inverse of the target error ´1. As there still remains a huge gap between such network width requirement and the practice, many attempts have been made to improve the overparameterization condition under various conditions on the training data and model initialization (Oymak and Soltanolkotabi, 2019; Zou and Gu, 2019; Kawaguchi and Huang, 2019; Bai and Lee, 2019). For two-layer ReLU networks, a recent work (Ji and Telgarsky, 2020) showed that when the training data are well separated, polylogarithmic width is sufficient to guarantee good optimization and generalization performances. However, their results cannot be extended to deep ReLU networks since their proof technique largely relies on the fact that the network model is 1-homogeneous, which cannot be satisfied by DNNs. Therefore, whether deep neural networks can be learned with such a mild over-parameterization is still an open problem.
In this paper, we resolve this open problem by showing that polylogarithmic network width is sufficient to learn DNNs. In particular, unlike the existing works that require the DNNs to behave very close to a linear model (up to some small approximation error), we show that a constant linear approximation error is sufficient to establish nice optimization and generalization guarantees for DNNs. Thanks to the relaxed requirement on the linear approximation error, a milder condition on the network width and tighter bounds on the convergence rate and generalization error can be proved. We summarize our contributions as follows:
• We establish the global convergence guarantee of GD for training deep ReLU networks based on the so-called NTRF function class (Cao and Gu, 2019), a set of linear functions over random features. Specifically, we prove that GD can learn deep ReLU networks with width m “ polypRq to compete with the best function in NTRF function class, where R is the radius of the NTRF function class.
• We also establish generalization guarantees for both GD and SGD in the same setting. Specifically, we prove a diminishing statistical error for a wide range of network widths $m \in [\widetilde{\Omega}(1), \infty)$, while most of the previous generalization bounds in the NTK regime only work in the setting where the network width $m$ is much greater than the sample size $n$. Moreover, we establish $\widetilde{O}(\epsilon^{-2})$ and $\widetilde{O}(\epsilon^{-1})$ sample complexities for GD and SGD respectively, which are tighter than existing bounds for learning deep ReLU networks (Cao and Gu, 2019), and match the best results when reduced to the two-layer case (Arora et al., 2019b; Ji and Telgarsky, 2020).
• We further generalize our theoretical analysis to the scenarios with different data separability assumptions in the literature. We show if a large fraction of the training data are well separated, the best function in the NTRF function class with radius R “ rOp1q can learn the training data with error up to . This together with our optimization and generalization guarantees immediately suggests that deep ReLU networks can be learned with network width m “ rΩp1q, which has a logarithmic dependence on the target error and sample size n. Compared with existing results (Cao and Gu, 2020; Ji and Telgarsky, 2020) which require all training data points to be separated in the NTK regime, our result is stronger since it allows the NTRF function class to misclassify a small proportion of the training data.
For ease of comparison, we summarize our results along with the most related previous results in Table 1, in terms of the data assumption, the over-parameterization condition, and the sample complexity. It can be seen that under the data separation assumptions (see Sections 4.1, 4.2), our result improves existing results for learning deep neural networks by only requiring a $\mathrm{polylog}(n, \epsilon^{-1})$ network width.
Notation. For two scalars $a$ and $b$, we denote $a \wedge b = \min\{a, b\}$. For a vector $x \in \mathbb{R}^d$ we use $\|x\|_2$ to denote its Euclidean norm. For a matrix $X$, we use $\|X\|_2$ and $\|X\|_F$ to denote its spectral norm and Frobenius norm respectively, and denote by $X_{ij}$ the entry of $X$ at the $i$-th row and $j$-th column. Given two matrices $X$ and $Y$ with the same dimension, we denote $\langle X, Y\rangle = \sum_{i,j} X_{ij}Y_{ij}$. Given a collection of matrices $W = \{W_1,\cdots,W_L\} \in \bigotimes_{l=1}^L\mathbb{R}^{m_l\times m_l'}$ and a function $f(W)$ over $\bigotimes_{l=1}^L\mathbb{R}^{m_l\times m_l'}$, we denote by $\nabla_{W_l}f(W)$ the partial gradient of $f(W)$ with respect to $W_l$ and denote $\nabla_W f(W) = \{\nabla_{W_l}f(W)\}_{l=1}^L$. We also denote $\mathcal{B}(W,\tau) = \{W' : \max_{l\in[L]}\|W_l' - W_l\|_F \le \tau\}$ for $\tau \ge 0$. For two collections of matrices $A = \{A_1,\cdots,A_n\}$ and $B = \{B_1,\cdots,B_n\}$, we denote $\langle A, B\rangle = \sum_{i=1}^n\langle A_i, B_i\rangle$ and $\|A\|_F^2 = \sum_{i=1}^n\|A_i\|_F^2$.
Algorithm 1 Gradient descent with random initialization
Input: Number of iterations $T$, step size $\eta$, training set $S = \{(x_i, y_i)\}_{i=1}^n$, initialization $W^{(0)}$
for $t = 1, 2, \dots, T$ do
Update $W^{(t)} = W^{(t-1)} - \eta\cdot\nabla_W L_S(W^{(t-1)})$.
end for
Output: $W^{(0)}, \dots, W^{(T)}$.
Given two sequences txnu and tynu, we denote xn “ Opynq if |xn| ď C1|yn| for some absolute positive constant C1, xn “ Ωpynq if |xn| ě C2|yn| for some absolute positive constant C2, and xn “ Θpynq if C3|yn| ď |xn| ď C4|yn| for some absolute constants C3, C4 ą 0. We also use rOp¨q, rΩp¨q to hide logarithmic factors inOp¨q and Ωp¨q respectively. Additionally, we denote xn “ polypynq if xn “ OpyDn q for some positive constant D, and xn “ polylogpynq if xn “ polyplogpynqq.
2 PRELIMINARIES ON LEARNING NEURAL NETWORKS
In this section, we introduce the problem setting in this paper, including definitions of the neural network and loss functions, and the training algorithms, i.e., GD and SGD with random initialization.
Neural network function. Given an input x P Rd, the output of deep fully-connected ReLU network is defined as follows,
fWpxq “ m1{2WLσpWL´1 ¨ ¨ ¨σpW1xq ¨ ¨ ¨ q, where W1 P Rmˆd, W2, ¨ ¨ ¨ ,WL´1 P Rmˆm, WL P R1ˆm, and σpxq “ maxt0, xu is the ReLU activation function. Here, without loss of generality, we assume the width of each layer is equal to m. Yet our theoretical results can be easily generalized to the setting with unequal width layers, as long as the smallest width satisfies our overparameterization condition. We denote the collection of all weight matrices as W “ tW1, . . . ,WLu. Loss function. Given training dataset txi, yiui“1,...,n with input xi P Rd and output yi P t´1,`1u, we define the training loss function as
LSpWq “ 1
n
n ÿ i“1 LipWq,
where LipWq “ ` ` yifWpxiq ˘ “ log ` 1` expp´yifWpxiqq ˘ is defined as the cross-entropy loss.
Algorithms. We consider both GD and SGD with Gaussian random initialization. These two algorithms are displayed in Algorithms 1 and 2 respectively. Specifically, the entries in Wp0q1 , ¨ ¨ ¨ ,W p0q L´1 are generated independently from univariate Gaussian distributionNp0, 2{mq and the entries in Wp0qL are generated independently from Np0, 1{mq. For GD, we consider using the full gradient to update the model parameters. For SGD, we use a new training data point in each iteration.
Note that our initialization method in Algorithms 1, 2 is the same as the widely used He initialization (He et al., 2015). Our neural network parameterization is also consistent with the parameterization used in prior work on NTK (Jacot et al., 2018; Allen-Zhu et al., 2019b; Du et al., 2019a; Arora et al., 2019b; Cao and Gu, 2019).
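For concreteness, a minimal numpy sketch of this parameterization and initialization scheme is given below; it is illustrative only (an assumption of this note, not code from the paper), and returns the scalar network output $f_W(x)$ for a single input.

import numpy as np

def init_weights(d, m, L, rng):
    """He-style initialization matching the paper: entries of W_1, ..., W_{L-1} are
    N(0, 2/m) and entries of W_L are N(0, 1/m)."""
    Ws = [rng.normal(0.0, np.sqrt(2.0 / m), size=(m, d))]
    Ws += [rng.normal(0.0, np.sqrt(2.0 / m), size=(m, m)) for _ in range(L - 2)]
    Ws += [rng.normal(0.0, np.sqrt(1.0 / m), size=(1, m))]
    return Ws

def forward(Ws, x):
    """f_W(x) = m^{1/2} * W_L sigma(W_{L-1} ... sigma(W_1 x) ...)."""
    m = Ws[0].shape[0]
    h = x
    for Wl in Ws[:-1]:
        h = np.maximum(Wl @ h, 0.0)        # ReLU activation
    return float(np.sqrt(m) * (Ws[-1] @ h))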
Algorithm 2 Stochastic gradient descent (SGD) with random initialization
Input: Number of iterations $n$, step size $\eta$, initialization $W^{(0)}$
for $i = 1, 2, \dots, n$ do
Draw $(x_i, y_i)$ from $\mathcal{D}$ and compute the corresponding gradient $\nabla_W L_i(W^{(i-1)})$.
Update $W^{(i)} = W^{(i-1)} - \eta\cdot\nabla_W L_i(W^{(i-1)})$.
end for
Output: Randomly choose $\widehat{W}$ uniformly from $\{W^{(0)},\dots,W^{(n-1)}\}$.
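The sketch below mirrors Algorithm 2 in plain Python. It is only a sketch under stated assumptions: sample_stream yields fresh examples from $\mathcal{D}$ and per_example_loss_grad is a placeholder returning the gradient of $L_i(W) = \ell(y_i f_W(x_i))$.

import numpy as np

def sgd_train(W0, sample_stream, per_example_loss_grad, eta):
    """One pass of online SGD as in Algorithm 2: take a step on L_i(W) for each fresh
    example, then return a uniformly random iterate from {W^(0), ..., W^(n-1)}."""
    rng = np.random.default_rng(0)
    iterates = [[A.copy() for A in W0]]
    W = [A.copy() for A in W0]
    for x, y in sample_stream:                       # fresh example at every step
        grads = per_example_loss_grad(W, x, y)
        W = [A - eta * G for A, G in zip(W, grads)]
        iterates.append([A.copy() for A in W])
    return iterates[rng.integers(len(iterates) - 1)]  # uniform over W^(0),...,W^(n-1)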
3 MAIN THEORY
In this section, we present the optimization and generalization guarantees of GD and SGD for learning deep ReLU networks. We first make the following assumption on the training data points. Assumption 3.1. All training data points satisfy }xi}2 “ 1, i “ 1, . . . , n.
This assumption has been widely made in many previous works (Allen-Zhu et al., 2019b;c; Du et al., 2019b;a; Zou et al., 2019) in order to simplify the theoretical analysis. This assumption can be relaxed to be upper bounded and lower bounded by some constant.
In the following, we give the definition of Neural Tangent Random Feature (NTRF) (Cao and Gu, 2019), which characterizes the functions learnable by over-parameterized ReLU networks.
Definition 3.2 (Neural Tangent Random Feature, (Cao and Gu, 2019)). Let $W^{(0)}$ be the initialization weights, and $F_{W^{(0)},W}(x) = f_{W^{(0)}}(x) + \langle\nabla f_{W^{(0)}}(x), W - W^{(0)}\rangle$ be a function with respect to the input $x$. Then the NTRF function class is defined as
$$\mathcal{F}(W^{(0)}, R) = \big\{F_{W^{(0)},W}(\cdot) : W \in \mathcal{B}(W^{(0)}, R\cdot m^{-1/2})\big\}.$$
The function class FWp0q,Wpxq consists of linear models over random features defined based on the network gradients at the initialization. Therefore it captures the key “almost linear” property of wide neural networks in the NTK regime (Lee et al., 2019; Cao and Gu, 2019). In this paper, we use the NTRF function class as a reference class to measure the difficulty of a learning problem. In what follows, we deliver our main theoretical results regarding the optimization and generalization guarantees of learning deep ReLU networks. We study both GD and SGD with random initialization (presented in Algorithms 1 and 2).
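To make the definition concrete, a minimal sketch of evaluating an NTRF model and of checking membership in the class $\mathcal{F}(W^{(0)}, R)$ is given below. It assumes the initialization output $f_{W^{(0)}}(x)$ and its gradient have been precomputed (e.g. by an autodiff framework), which is an assumption of the sketch rather than part of the definition.

import numpy as np

def ntrf_value(f0_x, grad0_x, W, W0):
    """F_{W^(0),W}(x) = f_{W^(0)}(x) + <grad f_{W^(0)}(x), W - W^(0)>,
    where grad0_x is a list of per-layer gradient matrices at initialization."""
    shift = sum(np.sum(G * (A - A0)) for G, A, A0 in zip(grad0_x, W, W0))
    return f0_x + shift

def in_ntrf_ball(W, W0, R, m):
    """Membership test for F(W^(0), R): max_l ||W_l - W_l^(0)||_F <= R * m^(-1/2)."""
    return max(np.linalg.norm(A - A0) for A, A0 in zip(W, W0)) <= R / np.sqrt(m)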
3.1 GRADIENT DESCENT
The following theorem establishes the optimization guarantee of GD for training deep ReLU networks for binary classification. Theorem 3.3. For $\delta, R > 0$, let $\epsilon_{\mathrm{NTRF}} = \inf_{F\in\mathcal{F}(W^{(0)},R)} n^{-1}\sum_{i=1}^n \ell[y_iF(x_i)]$ be the minimum training loss achievable by functions in $\mathcal{F}(W^{(0)}, R)$. Then there exists
$$m^*(\delta, R, L) = \widetilde{O}\big(\mathrm{poly}(R, L)\cdot\log^{4/3}(n/\delta)\big),$$
such that if $m \ge m^*(\delta, R, L)$, with probability at least $1-\delta$ over the initialization, GD with step size $\eta = \Theta(L^{-1}m^{-1})$ can train a neural network to achieve at most $3\epsilon_{\mathrm{NTRF}}$ training loss within $T = O\big(L^2R^2\epsilon_{\mathrm{NTRF}}^{-1}\big)$ iterations.
Theorem 3.3 shows that the deep ReLU network trained by GD can compete with the best function in the NTRF function class FpWp0q, Rq if the network width has a polynomial dependency in R and L and a logarithmic dependency in n and 1{δ. Moreover, if the NTRF function class with R “ rOp1q can learn the training data well (i.e., NTRF is less than a small target error ), a polylogarithmic (in terms of n and ´1) network width suffices to guarantee the global convergence of GD, which directly improves over-paramterization condition in the most related work (Cao and Gu, 2019). Besides, we remark here that this assumption on the NTRF function class can be easily satisfied when the training data admits certain separability conditions, which we discuss in detail in Section 4.
Compared with the results in (Ji and Telgarsky, 2020) which give similar network width requirements for two-layer networks, our result works for deep networks. Moreover, while Ji and Telgarsky (2020)
essentially required all training data to be separable by a function in the NTRF function class with a constant margin, our result does not require such data separation assumptions, and allows the NTRF function class to misclassify a small proportion of the training data points∗.
We now characterize the generalization performance of neural networks trained by GD. We denote $L^{0\text{-}1}_{\mathcal{D}}(W) = \mathbb{E}_{(x,y)\sim\mathcal{D}}[\mathbf{1}\{f_W(x)\cdot y < 0\}]$ as the expected 0-1 loss (i.e., expected error) of $f_W(x)$. Theorem 3.4. Under the same assumptions as Theorem 3.3, with probability at least $1-\delta$, the iterate $W^{(t)}$ of Algorithm 1 satisfies
$$L^{0\text{-}1}_{\mathcal{D}}(W^{(t)}) \le 2L_S(W^{(t)}) + \widetilde{O}\Bigg(4^LL^2R\sqrt{\frac{m}{n}} \wedge \bigg(\frac{L^{3/2}R}{\sqrt{n}} + \frac{L^{11/3}R^{4/3}}{m^{1/6}}\bigg)\Bigg) + O\Bigg(\sqrt{\frac{\log(1/\delta)}{n}}\Bigg)$$
for all $t = 0,\dots,T$.
Theorem 3.4 shows that the test error of the trained neural network can be bounded by its training error plus statistical error terms. Note that the statistical error terms is in the form of a minimum between two terms 4LL2R a m{n and L3{2R{ ? n` L11{3R4{3{m1{6. Depending on the network width m, one of these two terms will be the dominating term and diminishes for large n: (1) if m “ opnq, the statistical error will be 4LL2R a
m{n, and diminishes as n increases; and (2) if m “ Ωpnq, the statistical error is L3{2R{ ? n` L11{3R4{3{m1{6, and again goes to zero as n increases. Moreover,
in this paper we have a specific focus on the setting m “ rOp1q, under which Theorem 3.4 gives a statistical error of order rOpn´1{2q. This distinguishes our result from previous generalization bounds for deep networks (Cao and Gu, 2020; 2019), which cannot be applied to the setting m “ rOp1q. We note that for two-layer ReLU networks (i.e., L “ 2) Ji and Telgarsky (2020) proves a tighter rOp1{n1{2q generalization error bound regardless of the neural networks width m, while our result (Theorem 3.4), in the two-layer case, can only give rOp1{n1{2q generalization error bound when m “ rOp1q or m “ rΩpn3q. However, different from our proof technique that basically uses the (approximated) linearity of the neural network function, their proof technique largely relies on the 1-homogeneous property of the neural network, which restricted their theory in two-layer cases. An interesting research direction is to explore whether a rOp1{n1{2q generalization error bound can be also established for deep networks (regardless of the network width), which we will leave it as a future work.
3.2 STOCHASTIC GRADIENT DESCENT
Here we study the performance of SGD for training deep ReLU networks. The following theorem establishes a generalization error bound for the output of SGD. Theorem 3.5. For $\delta, R > 0$, let $\epsilon_{\mathrm{NTRF}} = \inf_{F\in\mathcal{F}(W^{(0)},R)} n^{-1}\sum_{i=1}^n \ell[y_iF(x_i)]$ be the minimum training loss achievable by functions in $\mathcal{F}(W^{(0)}, R)$. Then there exists
$$m^*(\delta, R, L) = \widetilde{O}\big(\mathrm{poly}(R, L)\cdot\log^{4/3}(n/\delta)\big),$$
such that if $m \ge m^*(\delta, R, L)$, with probability at least $1-\delta$, SGD with step size $\eta = \Theta\big(m^{-1}\cdot(LR^2n^{-1}\epsilon_{\mathrm{NTRF}}^{-1}\wedge L^{-1})\big)$ achieves
$$\mathbb{E}\big[L^{0\text{-}1}_{\mathcal{D}}(\widehat{W})\big] \le \frac{8L^2R^2}{n} + \frac{8\log(2/\delta)}{n} + 24\epsilon_{\mathrm{NTRF}},$$
where the expectation is taken over the uniform draw of $\widehat{W}$ from $\{W^{(0)},\dots,W^{(n-1)}\}$.
For any $\epsilon > 0$, Theorem 3.5 gives a $\widetilde{O}(\epsilon^{-1})$ sample complexity for deep ReLU networks trained with SGD to achieve $O(\epsilon_{\mathrm{NTRF}} + \epsilon)$ test error. Our result extends the result for two-layer networks proved in (Ji and Telgarsky, 2020) to multi-layer networks. Theorem 3.5 also provides sharper results compared with Allen-Zhu et al. (2019a); Cao and Gu (2019) in two aspects: (1) the sample complexity is improved from $n = \widetilde{O}(\epsilon^{-2})$ to $n = \widetilde{O}(\epsilon^{-1})$; and (2) the over-parameterization condition is improved from $m \ge \mathrm{poly}(\epsilon^{-1})$ to $m = \widetilde{\Omega}(1)$. ∗A detailed discussion is given in Section 4.2.
4 DISCUSSION ON THE NTRF CLASS
Our theoretical results in Section 3 rely on the radius (i.e.,R) of the NTRF function classFpWp0q, Rq and the minimum training loss achievable by functions in FpWp0q, Rq, i.e., NTRF. Note that a larger R naturally implies a smaller NTRF, but also leads to worse conditions on m. In this section, for any (arbitrarily small) target error rate ą 0, we discuss various data assumptions studied in the literature under which our results can lead to Op q training/test errors, and specify the network width requirement.
4.1 DATA SEPARABILITY BY NEURAL TANGENT RANDOM FEATURE
In this subsection, we consider the setting where a large fraction of the training data can be linearly separated by the neural tangent random features. The assumption is stated as follows.
Assumption 4.1. There exists a collection of matrices U˚ “ tU˚1 , ¨ ¨ ¨ ,U˚Lu satisfying řL l“1 }U˚l }2F “ 1, such that for at least p1´ ρq fraction of training data we have
yix∇fWp0qpxiq,U˚y ě m1{2γ,
where γ is an absolute positive constant† and ρ P r0, 1q.
The following corollary provides an upper bound of NTRF under Assumption 4.1 for some R.
Proposition 4.2. Under Assumption 4.1, for any , δ ą 0, if R ě C “ log1{2pn{δq ` logp1{ q ‰
{γ for some absolute constant C, then with probability at least 1´ δ,
NTRF :“ inf FPFpWp0q,Rq
n´1 n ÿ
i“1 ` ` yiF pxiq ˘ ď ` ρ ¨OpRq.
Proposition 4.2 covers the setting where the NTRF function class is allowed to misclassify training data, while most existing work typically assumes that all training data can be perfectly separated with a constant margin (i.e., $\rho = 0$) (Ji and Telgarsky, 2020; Shamir, 2020). Our results show that for a sufficiently small misclassification ratio $\rho = O(\epsilon)$, we have $\epsilon_{\mathrm{NTRF}} = \widetilde{O}(\epsilon)$ by choosing the radius parameter $R$ logarithmic in $n$, $\delta^{-1}$, and $\epsilon^{-1}$. Substituting this result into Theorems 3.3, 3.4 and 3.5, it can be shown that a neural network with width $m = \mathrm{poly}\big(L, \log(n/\delta), \log(1/\epsilon)\big)$ suffices to guarantee good optimization and generalization performance for both GD and SGD. Consequently, the bounds on the test error for GD and SGD are $\widetilde{O}(n^{-1/2})$ and $\widetilde{O}(n^{-1})$ respectively.
4.2 DATA SEPARABILITY BY SHALLOW NEURAL TANGENT MODEL
In this subsection, we study the data separation assumption made in Ji and Telgarsky (2020) and show that our results cover this particular setting. We first restate the assumption as follows.
Assumption 4.3. There exists up¨q : Rd Ñ Rd and γ ě 0 such that }upzq}2 ď 1 for all z P Rd, and
yi
ż
Rd σ1pxz,xiyq ¨ xupzq,xiydµNpzq ě γ
for all i P rns, where µN p¨q denotes the standard normal distribution.
Assumption 4.3 is related to the linear separability of the gradients of the first layer parameters at random initialization, where the randomness is replaced with an integral by taking the infinite width limit. Note that similar assumptions have also been studied in (Cao and Gu, 2020; Nitanda and Suzuki, 2019; Frei et al., 2019). The assumption made in (Cao and Gu, 2020; Frei et al., 2019) uses gradients with respect to the second layer weights instead of the first layer ones. In the following, we mainly focus on Assumption 4.3, while our result can also be generalized to cover the setting in (Cao and Gu, 2020; Frei et al., 2019).
†The factor m1{2 is introduced here since }∇Wp0qfpxiq}F is typically of order Opm 1{2q.
In order to make a fair comparison, we reduce our results for multilayer networks to the two-layer setting. In this case, the neural network function takes form
$$f_W(x) = m^{1/2} W_2 \sigma(W_1 x).$$
Then we provide the following proposition, which states that Assumption 4.3 implies a certain choice of $R = \widetilde{O}(1)$ such that the minimum training loss achieved by functions in the NTRF function class $\mathcal{F}(W^{(0)}, R)$ satisfies $\epsilon_{\mathrm{NTRF}} = O(\epsilon)$, where $\epsilon$ is the target error.
Proposition 4.4. Suppose the training data satisfy Assumption 4.3. For any $\epsilon, \delta > 0$, let $R = C\big[\log(n/\delta) + \log(1/\epsilon)\big]/\gamma$ for some large enough absolute constant $C$. If the neural network width satisfies $m = \Omega\big(\log(n/\delta)/\gamma^2\big)$, then with probability at least $1-\delta$, there exists $F_{W^{(0)},W}(\cdot) \in \mathcal{F}(W^{(0)}, R)$ such that
$$\ell\big(y_i \cdot F_{W^{(0)},W}(x_i)\big) \le \epsilon, \quad \forall i \in [n].$$
Proposition 4.4 shows that under Assumption 4.3, there exists $F_{W^{(0)},W}(\cdot) \in \mathcal{F}(W^{(0)}, R)$ with $R = \widetilde{O}(1/\gamma)$ such that the cross-entropy loss of $F_{W^{(0)},W}(\cdot)$ at each training data point is bounded by $\epsilon$. This implies that $\epsilon_{\mathrm{NTRF}} \le \epsilon$. Moreover, by applying Theorem 3.3 with $L = 2$, the condition on the neural network width becomes $m = \widetilde{\Omega}(1/\gamma^8)$‡, which matches the results proved in Ji and Telgarsky (2020). Plugging these results on $m$ and $\epsilon_{\mathrm{NTRF}}$ into Theorems 3.4 and 3.5, we conclude that the bounds on the test error for GD and SGD are $\widetilde{O}(n^{-1/2})$ and $\widetilde{O}(n^{-1})$ respectively.
4.3 CLASS-DEPENDENT DATA NONDEGENERATION
In previous subsections, we have shown that under certain data separation conditions $\epsilon_{\mathrm{NTRF}}$ can be sufficiently small while the corresponding NTRF function class has $R$ of order $\widetilde{O}(1)$. Thus neural networks with polylogarithmic width enjoy nice optimization and generalization guarantees. In this part, we consider the following much milder data separability assumption made in Zou et al. (2019). Assumption 4.5. For all $i \ne i'$, if $y_i \ne y_{i'}$, then $\|x_i - x_{i'}\|_2 \ge \phi$ for some absolute constant $\phi$.
In contrast to the conventional data nondegeneration assumption (i.e., no duplicate data points) made in Allen-Zhu et al. (2019b); Du et al. (2019b;a); Zou and Gu (2019)§, Assumption 4.5 only requires that the data points from different classes are nondegenerate, thus we call it class-dependent data nondegeneration.
We have the following proposition, which shows that Assumption 4.5 also implies the existence of a function in the NTRF function class, with a certain choice of $R$, that achieves $\epsilon$ training error.
Proposition 4.6. Under Assumption 4.5, if
$R = \Omega\big(n^{3/2}\phi^{-1/2}\log(n\delta^{-1}\epsilon^{-1})\big), \qquad m = \tilde\Omega\big(L^{22} n^{12}\phi^{-4}\big),$
we have $\epsilon_{\rm NTRF} \le \epsilon$ with probability at least $1-\delta$.
Proposition 4.6 suggests that under Assumption 4.5, in order to guarantee $\epsilon_{\rm NTRF} \le \epsilon$, the size of the NTRF function class needs to be $\Omega(n^{3/2})$. Plugging this into Theorems 3.4 and 3.5 leads to vacuous bounds on the test error. This makes sense since Assumption 4.5 basically covers the "random label" setting, which is impossible to learn with small generalization error. Moreover, we would like to point out that our theoretical analysis leads to a sharper over-parameterization condition than that proved in Zou et al. (2019), i.e., $m = \tilde\Omega\big(n^{14}L^{16}\phi^{-4} + n^{12}L^{16}\phi^{-4}\epsilon^{-1}\big)$, if the network depth satisfies $L \le \tilde O(n^{1/3}\vee\epsilon^{-1/6})$.
5 PROOF SKETCH OF THE MAIN THEORY
In this section, we introduce a key technical lemma in Section 5.1, based on which we provide a proof sketch of Theorem 3.3. The full proof of all our results can be found in the appendix. ‡We have shown in the proof of Theorem 3.3 that $m = \tilde\Omega(R^8)$ (see (A.1) for more detail). §Specifically, Allen-Zhu et al. (2019b); Zou and Gu (2019) require that any two data points (rather than only data points from different classes) are separated by a positive distance. Zou and Gu (2019) show that this assumption is equivalent to those made in Du et al. (2019b;a), which require that the composite kernel matrix is strictly positive definite.
5.1 A KEY TECHNICAL LEMMA
Here we introduce a key technical lemma used in the proof of Theorem 3.3.
Our proof is based on the key observation that near initialization, the neural network function can be approximated by its first-order Taylor expansion. In the following, we first give the definition of the linear approximation error in a τ -neighborhood around initialization.
$\epsilon_{\rm app}(\tau) := \sup_{i=1,\dots,n}\ \sup_{W', W\in B(W^{(0)},\tau)} \big| f_{W'}(x_i) - f_W(x_i) - \langle\nabla f_W(x_i), W' - W\rangle\big|.$
If all the iterates of GD stay inside a neighborhood around initialization with small linear approximation error, then we may expect that the training of neural networks should be similar to the training of the corresponding linear model, where standard optimization techniques can be applied. Motivated by this, we also give the following definition on the gradient upper bound of neural networks around initialization, which is related to the Lipschitz constant of the optimization objective function.
$M(\tau) := \sup_{i=1,\dots,n}\ \sup_{l=1,\dots,L}\ \sup_{W\in B(W^{(0)},\tau)} \|\nabla_{W_l} f_W(x_i)\|_F.$
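For intuition, both quantities can be probed numerically by sampling weights inside the ball $B(W^{(0)},\tau)$. The sketch below is a crude illustration under stated assumptions: the network forward pass follows the definition in Section 2, the gradient inner product is approximated by a one-sided finite difference, and the trial counts are arbitrary; it gives only a lower estimate of the suprema.

```python
import numpy as np

def relu_net(Ws, x):
    """f_W(x) = sqrt(m) * W_L relu(W_{L-1} ... relu(W_1 x)), with Ws = [W_1, ..., W_L]."""
    h = x
    for Wl in Ws[:-1]:
        h = np.maximum(Wl @ h, 0.0)
    return float(np.sqrt(Ws[0].shape[0]) * (Ws[-1] @ h))

def estimate_eps_app(Ws0, X, tau, trials=50, fd_step=1e-5, seed=0):
    """Crude estimate of eps_app(tau): sample W, W' in B(W^(0), tau) and compare
    f_{W'}(x_i) with its first-order expansion around W; the inner product
    <grad f_W(x_i), W' - W> is approximated by a finite difference."""
    rng = np.random.default_rng(seed)

    def sample_in_ball():
        out = []
        for W in Ws0:
            P = rng.standard_normal(W.shape)
            out.append(W + tau * rng.uniform() * P / np.linalg.norm(P))
        return out

    worst = 0.0
    for _ in range(trials):
        Wa, Wb = sample_in_ball(), sample_in_ball()
        Wmid = [A + fd_step * (B - A) for A, B in zip(Wa, Wb)]
        for x in X:
            fa, fb = relu_net(Wa, x), relu_net(Wb, x)
            lin = (relu_net(Wmid, x) - fa) / fd_step   # ~ <grad f_Wa(x), Wb - Wa>
            worst = max(worst, abs(fb - fa - lin))
    return worst
```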
By definition, we can choose $W^* \in B(W^{(0)}, Rm^{-1/2})$ such that $n^{-1}\sum_{i=1}^n \ell(y_i F_{W^{(0)},W^*}(x_i)) = \epsilon_{\rm NTRF}$. Then we have the following lemma.
Lemma 5.1. Set $\eta = O(L^{-1}M(\tau)^{-2})$. Suppose that $W^* \in B(W^{(0)},\tau)$ and $W^{(t)} \in B(W^{(0)},\tau)$ for all $0 \le t \le t'-1$. Then it holds that
$\frac{1}{t'}\sum_{t=0}^{t'-1} L_S(W^{(t)}) \le \frac{\|W^{(0)} - W^*\|_F^2 - \|W^{(t')} - W^*\|_F^2 + 2t'\eta\,\epsilon_{\rm NTRF}}{t'\eta\,(3/2 - 4\epsilon_{\rm app}(\tau))}.$
Lemma 5.1 plays a central role in our proof. Specifically, if $W^{(t)} \in B(W^{(0)},\tau)$ for all $t \le t'$, then Lemma 5.1 implies that the average training loss is of the same order as $\epsilon_{\rm NTRF}$ as long as the linear approximation error $\epsilon_{\rm app}(\tau)$ is bounded by a positive constant. This is in contrast to the proof in Cao and Gu (2019), where $\epsilon_{\rm app}(\tau)$ appears as an additive term in the upper bound of the training loss, thus requiring $\epsilon_{\rm app}(\tau) = O(\epsilon_{\rm NTRF})$ to achieve the same error bound as in Lemma 5.1. Since we can show that $\epsilon_{\rm app} = \tilde O(m^{-1/6})$ (see Section A.1), this suggests that $m = \tilde\Omega(1)$ is sufficient to make the average training loss of the same order as $\epsilon_{\rm NTRF}$.
Compared with the recent results for two-layer networks by Ji and Telgarsky (2020), Lemma 5.1 is proved with different techniques. Specifically, the proof by Ji and Telgarsky (2020) relies on the 1-homogeneous property of the ReLU activation function, which limits their analysis to two-layer networks with fixed second layer weights. In comparison, our proof does not rely on homogeneity, and is purely based on the linear approximation property of neural networks and some specific properties of the loss function. Therefore, our proof technique can handle deep networks, and is potentially applicable to non-ReLU activation functions and other network architectures (e.g., convolutional neural networks and residual networks).
5.2 PROOF SKETCH OF THEOREM 3.3
Here we provide a proof sketch of Theorem 3.3. The proof consists of two steps: (i) showing that all T iterates stay close to initialization, and (ii) bounding the empirical loss achieved by gradient descent. Both of these steps are proved based on Lemma 5.1.
Proof sketch of Theorem 3.3. Recall that we choose $W^* \in B(W^{(0)}, Rm^{-1/2})$ such that $n^{-1}\sum_{i=1}^n \ell(y_i F_{W^{(0)},W^*}(x_i)) = \epsilon_{\rm NTRF}$. We set $\tau = \tilde O(L^{1/2}m^{-1/2}R)$, which is chosen slightly larger than $m^{-1/2}R$ since Lemma 5.1 requires the region $B(W^{(0)},\tau)$ to include both $W^*$ and $\{W^{(t)}\}_{t=0,\dots,t'}$. Then by Lemmas 4.1 and B.3 in Cao and Gu (2019) we know that $\epsilon_{\rm app}(\tau) = \tilde O(\tau^{4/3}m^{1/2}L^3) = \tilde O(R^{4/3}L^{11/3}m^{-1/6})$. Therefore, we can set $m = \tilde\Omega(R^8L^{22})$ to ensure that $\epsilon_{\rm app}(\tau) \le 1/8$.
Then we proceed to show that all iterates stay inside the region $B(W^{(0)},\tau)$. Since the L.H.S. of Lemma 5.1 is strictly positive and $\epsilon_{\rm app}(\tau) \le 1/8$, we have for all $t \le T$,
$\|W^{(0)} - W^*\|_F^2 - \|W^{(t)} - W^*\|_F^2 \ge -2t\eta\,\epsilon_{\rm NTRF},$
which gives an upper bound on $\|W^{(t)} - W^*\|_F$. Then by the choice of $\eta$, $T$, the triangle inequality, and a simple induction argument, we see that $\|W^{(t)} - W^{(0)}\|_F \le m^{-1/2}R + \sqrt{2T\eta\,\epsilon_{\rm NTRF}} = \tilde O(L^{1/2}m^{-1/2}R)$, which verifies that $W^{(t)} \in B(W^{(0)},\tau)$ for $t = 0,\dots,T-1$. The second step is to show that GD can find a neural network with at most $3\epsilon_{\rm NTRF}$ training loss within $T$ iterations. To show this, by the bound given in Lemma 5.1 with $\epsilon_{\rm app} \le 1/8$, we drop the term $\|W^{(t)} - W^*\|_F^2$ and rearrange the inequality to obtain
$\frac{1}{T}\sum_{t=0}^{T-1} L_S(W^{(t)}) \le \frac{1}{\eta T}\|W^{(0)} - W^*\|_F^2 + 2\epsilon_{\rm NTRF}.$
We see that $T$ is large enough to ensure that the first term in the bound above is smaller than $\epsilon_{\rm NTRF}$. This implies that the best iterate among $W^{(0)},\dots,W^{(T-1)}$ achieves an empirical loss of at most $3\epsilon_{\rm NTRF}$.
6 CONCLUSION
In this paper, we established the global convergence and generalization error bounds of GD and SGD for training deep ReLU networks for the binary classification problem. We show that a network width condition that is polylogarithmic in the sample size $n$ and the inverse of the target error $\epsilon^{-1}$ is sufficient to guarantee the learning of deep ReLU networks. Our results resolve an open question raised in Ji and Telgarsky (2020).
ACKNOWLEDGEMENT
We would like to thank the anonymous reviewers for their helpful comments. ZC, YC and QG are partially supported by the National Science Foundation CAREER Award 1906169, IIS-2008981 and Salesforce Deep Learning Research Award. DZ is supported by the Bloomberg Data Science Ph.D. Fellowship. The views and conclusions contained in this paper are those of the authors and should not be interpreted as representing any funding agencies.
A PROOF OF MAIN THEOREMS
In this section we provide the full proof of Theorems 3.3, 3.4 and 3.5.
A.1 PROOF OF THEOREM 3.3
We first provide the following lemma which is useful in the subsequent proof.
Lemma A.1 (Lemmas 4.1 and B.3 in Cao and Gu (2019)). There exists an absolute constant $\kappa$ such that, with probability at least $1 - O(nL^2)\exp[-\Omega(m\tau^{2/3}L)]$, for any $\tau \le \kappa L^{-6}[\log(m)]^{-3/2}$, it holds that
$\epsilon_{\rm app}(\tau) \le \tilde O\big(\tau^{4/3}L^3m^{1/2}\big), \qquad M(\tau) \le \tilde O(\sqrt{m}).$
Proof of Theorem 3.3. Recall that $W^*$ is chosen such that
$\frac{1}{n}\sum_{i=1}^n \ell\big(y_i F_{W^{(0)},W^*}(x_i)\big) = \epsilon_{\rm NTRF}$
and $W^* \in B(W^{(0)}, Rm^{-1/2})$. Note that to apply Lemma 5.1, we need the region $B(W^{(0)},\tau)$ to include both $W^*$ and $\{W^{(t)}\}_{t=0,\dots,t'}$. This motivates us to set $\tau = \tilde O(L^{1/2}m^{-1/2}R)$, which is slightly larger than $m^{-1/2}R$. With this choice of $\tau$, by Lemma A.1 we have $\epsilon_{\rm app}(\tau) = \tilde O(\tau^{4/3}m^{1/2}L^3) = \tilde O(R^{4/3}L^{11/3}m^{-1/6})$. Therefore, we can set
$m = \tilde\Omega(R^8L^{22})$   (A.1)
to ensure that $\epsilon_{\rm app}(\tau) \le 1/8$, where $\tilde\Omega(\cdot)$ hides polylogarithmic dependencies on the network depth $L$, the NTRF function class size $R$, and the failure probability parameter $\delta$. Then by Lemma 5.1, with probability at least $1-\delta$ we have
$\|W^{(0)} - W^*\|_F^2 - \|W^{(t')} - W^*\|_F^2 \ge \eta\sum_{t=0}^{t'-1} L_S(W^{(t)}) - 2t'\eta\,\epsilon_{\rm NTRF}$   (A.2)
as long as $W^{(0)},\dots,W^{(t'-1)} \in B(W^{(0)},\tau)$. In the following proof we choose $\eta = \Theta(L^{-1}m^{-1})$ and $T = \lceil LR^2m^{-1}\eta^{-1}\epsilon_{\rm NTRF}^{-1}\rceil$.
We prove the theorem in two steps: 1) we show that all iterates $\{W^{(0)},\dots,W^{(T)}\}$ stay inside the region $B(W^{(0)},\tau)$; and 2) we show that GD can find a neural network with at most $3\epsilon_{\rm NTRF}$ training loss within $T$ iterations.
All iterates stay inside $B(W^{(0)},\tau)$. We prove this part by induction. Specifically, given $t' \le T$, we assume the induction hypothesis $W^{(t)} \in B(W^{(0)},\tau)$ holds for all $t < t'$ and prove that $W^{(t')} \in B(W^{(0)},\tau)$. First, it is clear that $W^{(0)} \in B(W^{(0)},\tau)$. Then by (A.2) and the fact that $L_S(W) \ge 0$, we have
$\|W^{(t')} - W^*\|_F^2 \le \|W^{(0)} - W^*\|_F^2 + 2\eta t'\,\epsilon_{\rm NTRF}.$
Note that $T = \lceil LR^2m^{-1}\eta^{-1}\epsilon_{\rm NTRF}^{-1}\rceil$ and $W^* \in B(W^{(0)}, R\cdot m^{-1/2})$; we have
$\sum_{l=1}^L \|W^{(t')}_l - W^*_l\|_F^2 = \|W^{(t')} - W^*\|_F^2 \le CLR^2m^{-1},$
where $C \ge 4$ is an absolute constant. Therefore, by the triangle inequality, we further have the following for all $l \in [L]$:
$\|W^{(t')}_l - W^{(0)}_l\|_F \le \|W^{(t')}_l - W^*_l\|_F + \|W^{(0)}_l - W^*_l\|_F \le \sqrt{CL}\,Rm^{-1/2} + Rm^{-1/2} \le 2\sqrt{CL}\,Rm^{-1/2}.$   (A.3)
Therefore, it is clear that $\|W^{(t')}_l - W^{(0)}_l\|_F \le 2\sqrt{CL}\,Rm^{-1/2} \le \tau$ based on our choice of $\tau$ previously. This completes the proof of the first part.
Convergence of gradient descent. (A.2) implies
$\|W^{(0)} - W^*\|_F^2 - \|W^{(T)} - W^*\|_F^2 \ge \eta\Big(\sum_{t=0}^{T-1} L_S(W^{(t)}) - 2T\epsilon_{\rm NTRF}\Big).$
Dividing both sides by $\eta T$, we get
$\frac{1}{T}\sum_{t=0}^{T-1} L_S(W^{(t)}) \le \frac{\|W^{(0)} - W^*\|_F^2}{\eta T} + 2\epsilon_{\rm NTRF} \le \frac{LR^2m^{-1}}{\eta T} + 2\epsilon_{\rm NTRF} \le 3\epsilon_{\rm NTRF},$
where the second inequality is by the fact that $W^* \in B(W^{(0)}, R\cdot m^{-1/2})$ and the last inequality is by our choices of $T$ and $\eta$, which ensure that $T\eta \ge LR^2m^{-1}\epsilon_{\rm NTRF}^{-1}$. Notice that $T = \lceil LR^2m^{-1}\eta^{-1}\epsilon_{\rm NTRF}^{-1}\rceil = O(L^2R^2\epsilon_{\rm NTRF}^{-1})$. This completes the proof of the second part, and thus of the theorem.
A.2 PROOF OF THEOREM 3.4
Following Cao and Gu (2020), we first introduce the definition of surrogate loss of the network, which is defined by the derivative of the loss function.
Definition A.2. We define the empirical surrogate error ESpWq and population surrogate error EDpWq as follows:
$E_S(W) := -\frac{1}{n}\sum_{i=1}^n \ell'\big[y_i\cdot f_W(x_i)\big], \qquad E_D(W) := \mathbb{E}_{(x,y)\sim D}\big\{-\ell'\big[y\cdot f_W(x)\big]\big\}.$
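For the cross-entropy loss $\ell(z) = \log(1 + e^{-z})$, the derivative satisfies $-\ell'(z) = 1/(1+e^z)$, so the empirical surrogate error is just an average of sigmoid values of the negated margins. The one-liner below is a minimal sketch; it assumes the network margins $y_i f_W(x_i)$ have already been computed elsewhere.

```python
import numpy as np

def empirical_surrogate_error(margins):
    """E_S(W) = -(1/n) * sum_i l'(y_i f_W(x_i)) with l(z) = log(1 + exp(-z)),
    so -l'(z) = 1 / (1 + exp(z)).  `margins` holds the values y_i * f_W(x_i)."""
    z = np.asarray(margins, dtype=float)
    return float(np.mean(1.0 / (1.0 + np.exp(z))))
```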
The following lemma gives a uniform-convergence type result for $E_S(W)$, utilizing the fact that $-\ell'(\cdot)$ is bounded and Lipschitz continuous.
Lemma A.3. For any $\tilde R, \delta > 0$, suppose that $m = \tilde\Omega(L^{12}\tilde R^2)\cdot[\log(1/\delta)]^{3/2}$. Then with probability at least $1-\delta$, it holds that
$|E_D(W) - E_S(W)| \le \tilde O\bigg(\min\Big\{4^L L^{3/2}\tilde R\sqrt{\tfrac{m}{n}},\ \tfrac{L\tilde R}{\sqrt{n}} + \tfrac{L^3\tilde R^{4/3}}{m^{1/6}}\Big\}\bigg) + O\bigg(\sqrt{\tfrac{\log(1/\delta)}{n}}\bigg)$
for all $W \in B(W^{(0)}, \tilde R\cdot m^{-1/2})$.
We are now ready to prove Theorem 3.4, which combines the trajectory distance analysis in the proof of Theorem 3.3 with Lemma A.3.
Proof of Theorem 3.4. With exactly the same proof as Theorem 3.3, by (A.3) and induction we have $W^{(0)}, W^{(1)},\dots,W^{(T)} \in B(W^{(0)}, \tilde R m^{-1/2})$ with $\tilde R = O(\sqrt{L}R)$. Therefore, by Lemma A.3, we have
$|E_D(W^{(t)}) - E_S(W^{(t)})| \le \tilde O\bigg(\min\Big\{4^L L^2 R\sqrt{\tfrac{m}{n}},\ \tfrac{L^{3/2}R}{\sqrt{n}} + \tfrac{L^{11/3}R^{4/3}}{m^{1/6}}\Big\}\bigg) + O\bigg(\sqrt{\tfrac{\log(1/\delta)}{n}}\bigg)$
for all $t = 0,1,\dots,T$. Note that we have $\mathbf{1}\{z < 0\} \le -2\ell'(z)$. Therefore,
$L_D^{0-1}(W^{(t)}) \le 2E_D(W^{(t)}) \le 2L_S(W^{(t)}) + \tilde O\bigg(\min\Big\{4^L L^2 R\sqrt{\tfrac{m}{n}},\ \tfrac{L^{3/2}R}{\sqrt{n}} + \tfrac{L^{11/3}R^{4/3}}{m^{1/6}}\Big\}\bigg) + O\bigg(\sqrt{\tfrac{\log(1/\delta)}{n}}\bigg)$
for $t = 0,1,\dots,T$, where the last inequality is by $E_S(W) \le L_S(W)$, because $-\ell'(z) \le \ell(z)$ for all $z \in \mathbb{R}$. This finishes the proof.
A.3 PROOF OF THEOREM 3.5
In this section we provide the full proof of Theorem 3.5. We first give the following result, which is the counterpart of Lemma 5.1 for SGD. Again we pick $W^* \in B(W^{(0)}, Rm^{-1/2})$ such that the loss of the corresponding NTRF model $F_{W^{(0)},W^*}(x)$ achieves $\epsilon_{\rm NTRF}$.
Lemma A.4. Set $\eta = O(L^{-1}M(\tau)^{-2})$. Suppose that $W^* \in B(W^{(0)},\tau)$ and $W^{(n')} \in B(W^{(0)},\tau)$ for all $0 \le n' \le n-1$. Then it holds that
$\|W^{(0)} - W^*\|_F^2 - \|W^{(n')} - W^*\|_F^2 \ge \Big(\tfrac{3}{2} - 4\epsilon_{\rm app}(\tau)\Big)\eta\sum_{i=1}^{n'} L_i(W^{(i-1)}) - 2n\eta\,\epsilon_{\rm NTRF}.$
Our proof is based on the application of Lemma A.4 and an online-to-batch conversion argument (Cesa-Bianchi et al., 2004; Cao and Gu, 2019; Ji and Telgarsky, 2020). We introduce a surrogate loss $E_i(W) = -\ell'[y_i\cdot f_W(x_i)]$ and its population version $E_D(W) = \mathbb{E}_{(x,y)\sim D}[-\ell'(y\cdot f_W(x))]$, which have been used in (Ji and Telgarsky, 2018; Cao and Gu, 2019; Nitanda and Suzuki, 2019; Ji and Telgarsky, 2020).
Proof of Theorem 3.5. Recall that $W^*$ is chosen such that
$\frac{1}{n}\sum_{i=1}^n \ell\big(y_i F_{W^{(0)},W^*}(x_i)\big) = \epsilon_{\rm NTRF}$
and $W^* \in B(W^{(0)}, Rm^{-1/2})$. To apply Lemma A.4, we need the region $B(W^{(0)},\tau)$ to include both $W^*$ and the SGD iterates. This motivates us to set $\tau = \tilde O(L^{1/2}m^{-1/2}R)$, which is slightly larger than $m^{-1/2}R$. With this choice of $\tau$, by Lemma A.1 we have $\epsilon_{\rm app}(\tau) = \tilde O(\tau^{4/3}m^{1/2}L^3) = \tilde O(R^{4/3}L^{11/3}m^{-1/6})$. Therefore, we can set
$m = \tilde\Omega(R^8L^{22})$
to ensure that $\epsilon_{\rm app}(\tau) \le 1/8$, where $\tilde\Omega(\cdot)$ hides polylogarithmic dependencies on the network depth $L$, the NTRF function class size $R$, and the failure probability parameter $\delta$.
Then by Lemma A.4, with probability at least $1-\delta$ we have
$\|W^{(0)} - W^*\|_F^2 - \|W^{(n')} - W^*\|_F^2 \ge \eta\sum_{i=1}^{n'} L_i(W^{(i-1)}) - 2n\eta\,\epsilon_{\rm NTRF}$   (A.4)
as long as $W^{(0)},\dots,W^{(n'-1)} \in B(W^{(0)},\tau)$.
We then prove Theorem 3.5 in two steps: 1) all iterates stay inside BpWp0q, τq; and 2) convergence of online SGD.
All iterates stay inside $B(W^{(0)},\tau)$. Similar to the proof of Theorem 3.3, we prove this part by induction. Assuming $W^{(i)} \in B(W^{(0)},\tau)$ for all $i \le n'-1$, by (A.4) we have
$\|W^{(n')} - W^*\|_F^2 \le \|W^{(0)} - W^*\|_F^2 + 2n\eta\,\epsilon_{\rm NTRF} \le LR^2\cdot m^{-1} + 2n\eta\,\epsilon_{\rm NTRF},$
where the last inequality is by $W^* \in B(W^{(0)}, Rm^{-1/2})$. Then by the triangle inequality, we further get
$\|W^{(n')}_l - W^{(0)}_l\|_F \le \|W^{(n')}_l - W^*_l\|_F + \|W^*_l - W^{(0)}_l\|_F \le \|W^{(n')} - W^*\|_F + \|W^*_l - W^{(0)}_l\|_F \le O\big(\sqrt{L}Rm^{-1/2} + \sqrt{n\eta\,\epsilon_{\rm NTRF}}\big).$
Then by our choice of $\eta = \Theta\big(m^{-1}\cdot(LR^2n^{-1}\epsilon_{\rm NTRF}^{-1}\wedge L^{-1})\big)$, we have $\|W^{(n')} - W^{(0)}\|_F \le 2\sqrt{L}Rm^{-1/2} \le \tau$. This completes the proof of the first part.
Convergence of online SGD. By (A.4), we have
$\|W^{(0)} - W^*\|_F^2 - \|W^{(n)} - W^*\|_F^2 \ge \eta\Big(\sum_{i=1}^n L_i(W^{(i-1)}) - 2n\,\epsilon_{\rm NTRF}\Big).$
Dividing both sides by $\eta n$ and rearranging terms, we get
$\frac{1}{n}\sum_{i=1}^n L_i(W^{(i-1)}) \le \frac{\|W^{(0)} - W^*\|_F^2 - \|W^{(n)} - W^*\|_F^2}{\eta n} + 2\epsilon_{\rm NTRF} \le \frac{L^2R^2}{n} + 3\epsilon_{\rm NTRF},$
where the second inequality follows from the facts that $W^* \in B(W^{(0)}, R\cdot m^{-1/2})$ and $\eta = \Theta\big(m^{-1}\cdot(LR^2n^{-1}\epsilon_{\rm NTRF}^{-1}\wedge L^{-1})\big)$. By Lemma 4.3 in (Ji and Telgarsky, 2020) and the fact that $E_i(W^{(i-1)}) \le L_i(W^{(i-1)})$, we have
$\frac{1}{n}\sum_{i=1}^n L_D^{0-1}(W^{(i-1)}) \le \frac{2}{n}\sum_{i=1}^n E_D(W^{(i-1)}) \le \frac{8}{n}\sum_{i=1}^n E_i(W^{(i-1)}) + \frac{8\log(1/\delta)}{n} \le \frac{8L^2R^2}{n} + \frac{8\log(1/\delta)}{n} + 24\epsilon_{\rm NTRF}.$
This completes the proof of the second part.
B PROOF OF RESULTS IN SECTION 4
B.1 PROOF OF PROPOSITION 4.2
We first provide the following lemma, which gives an upper bound on the neural network output at initialization.
Lemma B.1 (Lemma 4.4 in Cao and Gu (2019)). Under Assumption 3.1, if $m \ge \bar C L\log(nL/\delta)$ for some absolute constant $\bar C$, then with probability at least $1-\delta$ we have
$|f_{W^{(0)}}(x_i)| \le C\sqrt{\log(n/\delta)}$
for some absolute constant $C$.
Proof of Proposition 4.2. Under Assumption 4.1, we can find a collection of matrices $U^* = \{U^*_1,\dots,U^*_L\}$ with $\sum_{l=1}^L\|U^*_l\|_F^2 = 1$ such that $y_i\langle\nabla f_{W^{(0)}}(x_i), U^*\rangle \ge m^{1/2}\gamma$ for at least a $1-\rho$ fraction of the training data. By Lemma B.1, for all $i \in [n]$ we have $|f_{W^{(0)}}(x_i)| \le C\sqrt{\log(n/\delta)}$ for some absolute constant $C$. Then for any positive constant $\lambda$, we have for at least a $1-\rho$ portion of the data,
$y_i\big(f_{W^{(0)}}(x_i) + \langle\nabla f_{W^{(0)}}, \lambda U^*\rangle\big) \ge m^{1/2}\lambda\gamma - C\sqrt{\log(n/\delta)}.$
For this fraction of the data, we can set
$\lambda = \frac{C'\big[\log^{1/2}(n/\delta) + \log(1/\epsilon)\big]}{m^{1/2}\gamma},$
where $C'$ is an absolute constant, and get
$m^{1/2}\lambda\gamma - C\sqrt{\log(n/\delta)} \ge \log(1/\epsilon).$
Now we let $W^* = W^{(0)} + \lambda U^*$. By the choice of $R$ in Proposition 4.2, we have $W^* \in B(W^{(0)}, R\cdot m^{-1/2})$. The above inequality implies that for at least a $1-\rho$ fraction of the data, we have $\ell\big(y_i F_{W^{(0)},W^*}(x_i)\big) \le \epsilon$. For the remaining data, we have
$y_i\big(f_{W^{(0)}}(x_i) + \langle\nabla f_{W^{(0)}}, \lambda U^*\rangle\big) \ge -C\sqrt{\log(n/\delta)} - \lambda\|\nabla f_{W^{(0)}}\|_2 \ge -C_1 R$
for some absolute positive constant $C_1$, where the last inequality follows from the fact that $\|\nabla f_{W^{(0)}}\|_2 = \tilde O(m^{1/2})$ (see Lemma A.1 for details). Then, since we use the cross-entropy loss, it follows that for this fraction of the training data we have $\ell\big(y_i F_{W^{(0)},W^*}(x_i)\big) \le C_2 R$ for some constant $C_2$. Combining the results for these two fractions of the training data, we can conclude that
$\epsilon_{\rm NTRF} \le n^{-1}\sum_{i=1}^n \ell\big(y_i F_{W^{(0)},W^*}(x_i)\big) \le (1-\rho)\epsilon + \rho\cdot O(R).$
This completes the proof.
B.2 PROOF OF PROPOSITION 4.4
Proof of Proposition 4.4. We are going to prove that Assumption 4.3 implies the existence of a good function in the NTRF function class.
By Definition 3.2 and the definition of the cross-entropy loss, our goal is to prove that there exists a collection of matrices $W = \{W_1, W_2\}$ satisfying $\max\{\|W_1 - W^{(0)}_1\|_F, \|W_2 - W^{(0)}_2\|_2\} \le R\cdot m^{-1/2}$ such that
$y_i\cdot\big[f_{W^{(0)}}(x_i) + \langle\nabla_{W_1} f_{W^{(0)}}, W_1 - W^{(0)}_1\rangle + \langle\nabla_{W_2} f_{W^{(0)}}, W_2 - W^{(0)}_2\rangle\big] \ge \log(2/\epsilon).$
We first consider $\nabla_{W_1} f_{W^{(0)}}(x_i)$, which has the form
$\big(\nabla_{W_1} f_{W^{(0)}}(x_i)\big)_j = m^{1/2}\cdot w^{(0)}_{2,j}\cdot\sigma'(\langle w^{(0)}_{1,j}, x_i\rangle)\cdot x_i.$
Note that $w^{(0)}_{2,j}$ and $w^{(0)}_{1,j}$ are independently generated from $N(0,1/m)$ and $N(0,2I/m)$ respectively, thus we have $P\big(|w^{(0)}_{2,j}| \ge 0.47m^{-1/2}\big) \ge 1/2$. By Hoeffding's inequality, with probability at least $1-\exp(-m/8)$ there are at least $m/4$ nodes, whose union is denoted by $S$, satisfying $|w^{(0)}_{2,j}| \ge 0.47m^{-1/2}$. Then we only focus on the nodes in the set $S$. Note that $W^{(0)}_1$ and $W^{(0)}_2$ are independently generated. Then by Assumption 4.3 and Hoeffding's inequality, there exists a function $u(\cdot): \mathbb{R}^d\to\mathbb{R}^d$ such that with probability at least $1-\delta'$,
$\frac{1}{|S|}\sum_{j\in S} y_i\cdot\langle u(w^{(0)}_{1,j}), x_i\rangle\cdot\sigma'(\langle w^{(0)}_{1,j}, x_i\rangle) \ge \gamma - \sqrt{\frac{2\log(1/\delta')}{|S|}}.$
Define $v_j = u(w^{(0)}_{1,j})/w_{2,j}$ if $|w_{2,j}| \ge 0.47m^{-1/2}$ and $v_j = 0$ otherwise. Then we have
$\sum_{j=1}^m y_i\cdot w^{(0)}_{2,j}\cdot\langle v_j, x_i\rangle\cdot\sigma'(\langle w^{(0)}_{1,j}, x_i\rangle) = \sum_{j\in S} y_i\cdot\langle u(w^{(0)}_{1,j}), x_i\rangle\cdot\sigma'(\langle w^{(0)}_{1,j}, x_i\rangle) \ge |S|\gamma - \sqrt{2|S|\log(1/\delta')}.$
Setting $\delta = 2n\delta'$ and applying a union bound, we have with probability at least $1-\delta/2$,
$\sum_{j=1}^m y_i\cdot w^{(0)}_{2,j}\cdot\langle v_j, x_i\rangle\cdot\sigma'(\langle w^{(0)}_{1,j}, x_i\rangle) \ge |S|\gamma - \sqrt{2|S|\log(2n/\delta)}.$
Moreover, with probability at least $1-\exp(-m/8)$, we have $|S| \ge m/4$. In addition, in Assumption 4.3, by $y_i\in\{\pm1\}$ and $|\sigma'(\cdot)|, \|u(\cdot)\|_2, \|x_i\|_2 \le 1$ for $i=1,\dots,n$, we see that $\gamma \le 1$. Then if $m \ge 32\log(n/\delta)/\gamma^2$, with probability at least $1-\delta/2-\exp\big(-4\log(n/\delta)/\gamma^2\big) \ge 1-\delta$,
$\sum_{j=1}^m y_i\cdot w^{(0)}_{2,j}\cdot\langle v_j, x_i\rangle\cdot\sigma'(\langle w^{(0)}_{1,j}, x_i\rangle) \ge |S|\gamma/2.$
Let $U = (v_1, v_2,\dots,v_m)^\top/\sqrt{m|S|}$; then we have
$y_i\langle\nabla_{W_1} f_{W^{(0)}}(x_i), U\rangle = \frac{1}{\sqrt{|S|}}\sum_{j=1}^m y_i\cdot w^{(0)}_{2,j}\cdot\langle v_j, x_i\rangle\cdot\sigma'(\langle w^{(0)}_{1,j}, x_i\rangle) \ge \frac{\sqrt{|S|}\,\gamma}{2} \ge \frac{m^{1/2}\gamma}{4},$
where the last inequality is by the fact that $|S| \ge m/4$. Besides, by concentration and a Gaussian tail bound, we have $|f_{W^{(0)}}(x_i)| \le C\log(n/\delta)$ for some absolute constant $C$. Therefore, letting $W_1 = W^{(0)}_1 + 4\big(\log(2/\epsilon) + C\log(n/\delta)\big)m^{-1/2}U/\gamma$ and $W_2 = W^{(0)}_2$, we have
$y_i\cdot\big[f_{W^{(0)}}(x_i) + \langle\nabla_{W_1} f_{W^{(0)}}, W_1 - W^{(0)}_1\rangle + \langle\nabla_{W_2} f_{W^{(0)}}, W_2 - W^{(0)}_2\rangle\big] \ge \log(2/\epsilon).$   (B.1)
Note that $\|u(\cdot)\|_2 \le 1$, so $\|U\|_F \le 1/0.47 \le 2.2$. Therefore, we further have $\|W_1 - W^{(0)}_1\|_F \le 8.8\gamma^{-1}\big(\log(2/\epsilon) + C\log(n/\delta)\big)\cdot m^{-1/2}$. This implies that $W \in B(W^{(0)}, R\cdot m^{-1/2})$ with $R = O\big(\log(n/(\delta\epsilon))/\gamma\big)$. Applying the inequality $\ell(\log(2/\epsilon)) \le \epsilon$ to (B.1) gives $\ell(y_i\cdot F_{W^{(0)},W}(x_i)) \le \epsilon$ for all $i = 1,\dots,n$. This completes the proof.
B.3 PROOF OF PROPOSITION 4.6
Based on our theoretical analysis, the major goal is to show that there exist certain choices of $R$ and $m$ such that the best NTRF model in the function class $F(W^{(0)}, R)$ can achieve $\epsilon$ training error. In this proof, we will prove a stronger result by showing that, given the quantities of $R$ and $m$ specified in Proposition 4.6, there exists an NTRF model with parameter $W^*$ that satisfies $n^{-1}\sum_{i=1}^n \ell(y_i F_{W^{(0)},W^*}(x_i)) \le \epsilon$. In order to do so, we consider training the NTRF model via a different surrogate loss function. Specifically, we consider the squared hinge loss $\tilde\ell(x) = (\max\{\lambda - x, 0\})^2$, where $\lambda$ denotes the target margin. In the later proof, we choose $\lambda = \log(1/\epsilon) + 1$ so that the condition $\tilde\ell(x) \le 1$ guarantees $x \ge \log(1/\epsilon)$. Moreover, we consider using gradient flow, i.e., gradient descent with infinitesimal step size, to train the NTRF model. Therefore, in the remaining part of the proof, we consider optimizing the NTRF parameter $W$ with the loss function
$\tilde L_S(W) = \frac{1}{n}\sum_{i=1}^n \tilde\ell\big(y_i F_{W^{(0)},W}(x_i)\big).$
Moreover, for simplicity, we only consider optimizing the parameters in the last hidden layer (i.e., $W_{L-1}$). Then the gradient flow can be formulated as
$\frac{dW_{L-1}(t)}{dt} = -\nabla_{W_{L-1}}\tilde L_S(W(t)), \qquad \frac{dW_l(t)}{dt} = 0 \text{ for any } l \ne L-1.$
Note that the NTRF model is a linear model; thus by Definition 3.2 we have
$\nabla_{W_{L-1}}\tilde L_S(W(t)) = \frac{1}{n}\sum_{i=1}^n y_i\,\tilde\ell'\big(y_i F_{W^{(0)},W(t)}(x_i)\big)\cdot\nabla_{W_{L-1}}F_{W^{(0)},W(t)}(x_i) = \frac{1}{n}\sum_{i=1}^n y_i\,\tilde\ell'\big(y_i F_{W^{(0)},W(t)}(x_i)\big)\cdot\nabla_{W^{(0)}_{L-1}} f_{W^{(0)}}(x_i).$   (B.2)
Then it is clear that $\nabla_{W_{L-1}}\tilde L_S(W(t))$ always lies in the fixed subspace spanned by $\{\nabla_{W^{(0)}_{L-1}} f_{W^{(0)}}(x_i)\}_{i\in[n]}$ throughout the optimization. In order to prove the convergence of gradient flow and characterize the quantity of $R$, recall that Lemma B.1 gives an upper bound on the model output at initialization.
We then provide the following lemma, which characterizes a lower bound on the Frobenius norm of the partial gradient $\nabla_{W_{L-1}}\tilde L_S(W)$.
Lemma B.2 (Lemma B.5 in Zou et al. (2019)). Under Assumptions 3.1 and 4.5, if $m = \tilde\Omega(n^2\phi^{-1})$, then for all $t \ge 0$, with probability at least $1-\exp\big(-O(m\phi/n)\big)$, there exists a positive constant $C$ such that
$\|\nabla_{W_{L-1}}\tilde L_S(W(t))\|_F^2 \ge \frac{Cm\phi}{n^5}\bigg[\sum_{i=1}^n \tilde\ell'\big(y_i F_{W^{(0)},W(t)}(x_i)\big)\bigg]^2.$
We slightly modified the original version of this lemma since we use a different model (we consider the NTRF model while Zou et al. (2019) consider the neural network model). However, by (B.2), the gradient $\nabla\tilde L_S(W)$ takes the same form as the neural network gradient evaluated at the initialization (i.e., $\nabla_{W_{L-1}} L_S(W^{(0)})$), so the lemma remains valid. Now we are ready to present the proof.
Proof of Proposition 4.6. Recall that we only consider training the last hidden layer weights, i.e., $W_{L-1}$, via gradient flow with the squared hinge loss, and our goal is to prove that gradient flow is able to find an NTRF model within the function class $F(W^{(0)}, R)$ around the initialization, i.e., achieving $n^{-1}\sum_{i=1}^n \ell(y_i F_{W^{(0)},W^*}(x_i)) \le \epsilon$. Let $W(t)$ be the weights at time $t$; gradient flow implies that
$\frac{d\tilde L_S(W(t))}{dt} = -\|\nabla_{W_{L-1}}\tilde L_S(W(t))\|_F^2 \le -\frac{Cm\phi}{n^5}\bigg(\sum_{i=1}^n \tilde\ell'\big(y_i F_{W^{(0)},W(t)}(x_i)\big)\bigg)^2 \le -\frac{4Cm\phi\,\tilde L_S(W(t))}{n^3},$
where the first equality is due to the fact that we only train the last hidden layer, the first inequality is by Lemma B.2, and the last inequality follows from the fact that $\tilde\ell'(\cdot) = -2\sqrt{\tilde\ell(\cdot)}$. Solving the above differential inequality gives
$\tilde L_S(W(t)) \le \tilde L_S(W(0))\cdot\exp\bigg(-\frac{4Cm\phi t}{n^3}\bigg).$   (B.3)
Then, setting $T = O\big(n^3m^{-1}\phi^{-1}\cdot\log(\tilde L_S(W(0))/\epsilon')\big)$ with $\epsilon' = 1/n$, we have $\tilde L_S(W(T)) \le \epsilon'$. It then follows that $\tilde\ell\big(y_i F_{W^{(0)},W(T)}(x_i)\big) \le 1$ for every $i$, which implies that $y_i F_{W^{(0)},W(T)}(x_i) \ge \log(1/\epsilon)$ and thus $n^{-1}\sum_{i=1}^n \ell\big(y_i F_{W^{(0)},W(T)}(x_i)\big) \le \epsilon$. Therefore, $W(T)$ is exactly the NTRF model we are looking for.
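The gradient flow argument above can be simulated with small-step gradient descent on the linear NTRF model. The sketch below is illustrative only: it treats the flattened features $\nabla_{W^{(0)}_{L-1}} f_{W^{(0)}}(x_i)$ and the initial outputs $f_{W^{(0)}}(x_i)$ as given inputs (an assumption for the example), and the step size and iteration count are arbitrary.

```python
import numpy as np

def train_ntrf_last_layer(feats, f0, y, eps, steps=10000, lr=1e-3):
    """Minimize the squared hinge loss (max{lambda - x, 0})^2 with lambda = log(1/eps) + 1
    over theta, where the NTRF output is F_i = f0_i + <feats_i, theta>.
    feats: (n, p) flattened gradient features; f0: (n,) outputs at init; y: (n,) labels in {+1,-1}."""
    lam = np.log(1.0 / eps) + 1.0
    theta = np.zeros(feats.shape[1])
    for _ in range(steps):
        margins = y * (f0 + feats @ theta)
        slack = np.maximum(lam - margins, 0.0)
        # gradient of (1/n) * sum_i slack_i^2 with respect to theta
        grad = -(2.0 / len(y)) * feats.T @ (slack * y)
        theta -= lr * grad
    return theta
```

Plotting the loss over iterations of this sketch reproduces the exponential decay predicted by (B.3) once the features are nondegenerate in the sense of Assumption 4.5.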
The next step is to characterize the distance between $W(T)$ and $W(0)$ in order to characterize the quantity of $R$. Note that $\|\nabla_{W_{L-1}}\tilde L_S(W(t))\|_F^2 \ge 4Cm\phi\,\tilde L_S(W(t))/n^3$; we have
$\frac{d\sqrt{\tilde L_S(W(t))}}{dt} = -\frac{\|\nabla_{W_{L-1}}\tilde L_S(W(t))\|_F^2}{2\sqrt{\tilde L_S(W(t))}} \le -\|\nabla_{W_{L-1}}\tilde L_S(W(t))\|_F\cdot\frac{C^{1/2}m^{1/2}\phi^{1/2}}{n^{3/2}}.$
Taking the integral on both sides and rearranging terms, we have
$\int_{t=0}^T \|\nabla_{W_{L-1}}\tilde L_S(W(t))\|_F\,dt \le \frac{n^{3/2}}{C^{1/2}m^{1/2}\phi^{1/2}}\cdot\bigg(\sqrt{\tilde L_S(W(0))} - \sqrt{\tilde L_S(W(T))}\bigg).$
Note that the L.H.S. of the above inequality is an upper bound on $\|W(t) - W(0)\|_F$; we have for any $t \ge 0$,
$\|W(t) - W(0)\|_F \le \frac{n^{3/2}}{C^{1/2}m^{1/2}\phi^{1/2}}\cdot\sqrt{\tilde L_S(W(0))} = O\bigg(\frac{n^{3/2}\log\big(n/(\delta\epsilon)\big)}{m^{1/2}\phi^{1/2}}\bigg),$
where the last step is by Lemma B.1 and our choice of $\lambda = \log(1/\epsilon) + 1$. This implies that there exists a point $W^*$ within the class $F(W^{(0)}, R)$ with
$R = O\bigg(\frac{n^{3/2}\log\big(n/(\delta\epsilon)\big)}{\phi^{1/2}}\bigg)$
such that
$\epsilon_{\rm NTRF} := n^{-1}\sum_{i=1}^n \ell\big(y_i F_{W^{(0)},W^*}(x_i)\big) \le \epsilon.$
Then by Theorem 3.3, and more specifically (A.1), we can compute the minimal required neural network width as
$m = \tilde\Omega(R^8L^{22}) = \tilde\Omega\bigg(\frac{L^{22}n^{12}}{\phi^4}\bigg).$
This completes the proof.
C PROOF OF TECHNICAL LEMMAS
Here we provide the proof of Lemmas 5.1, A.3 and A.4.
C.1 PROOF OF LEMMA 5.1
The detailed proof of Lemma 5.1 is given as follows.
Proof of Lemma 5.1. Based on the update rule of gradient descent, i.e., $W^{(t+1)} = W^{(t)} - \eta\nabla_W L_S(W^{(t)})$, we have the following calculation:
$\|W^{(t)} - W^*\|_F^2 - \|W^{(t+1)} - W^*\|_F^2 = \underbrace{\frac{2\eta}{n}\sum_{i=1}^n \langle W^{(t)} - W^*, \nabla_W L_i(W^{(t)})\rangle}_{I_1} - \underbrace{\eta^2\sum_{l=1}^L \|\nabla_{W_l} L_S(W^{(t)})\|_F^2}_{I_2},$   (C.1)
where the equation follows from the fact that $L_S(W^{(t)}) = n^{-1}\sum_{i=1}^n L_i(W^{(t)})$. In what follows, we first bound the term $I_1$ on the R.H.S. of (C.1) by approximating the neural network functions with linear models. By assumption, for $t = 0,\dots,t'-1$, $W^{(t)}, W^* \in B(W^{(0)},\tau)$. Therefore, by the definition of $\epsilon_{\rm app}(\tau)$,
$y_i\cdot\langle\nabla f_{W^{(t)}}(x_i), W^{(t)} - W^*\rangle \le y_i\cdot\big(f_{W^{(t)}}(x_i) - f_{W^*}(x_i)\big) + \epsilon_{\rm app}(\tau).$   (C.2)
Moreover, we also have
$0 \le y_i\cdot\big(f_{W^*}(x_i) - f_{W^{(0)}}(x_i) - \langle\nabla f_{W^{(0)}}(x_i), W^* - W^{(0)}\rangle\big) + \epsilon_{\rm app}(\tau) = y_i\cdot\big(f_{W^*}(x_i) - F_{W^{(0)},W^*}(x_i)\big) + \epsilon_{\rm app}(\tau),$   (C.3)
where the equation follows by the definition of $F_{W^{(0)},W^*}(x)$. Adding (C.3) to (C.2) and canceling the terms $y_i\cdot f_{W^*}(x_i)$, we obtain that
$y_i\cdot\langle\nabla f_{W^{(t)}}(x_i), W^{(t)} - W^*\rangle \le y_i\cdot\big(f_{W^{(t)}}(x_i) - F_{W^{(0)},W^*}(x_i)\big) + 2\epsilon_{\rm app}(\tau).$   (C.4)
We can now give a lower bound on the first term on the R.H.S. of (C.1). For $i = 1,\dots,n$, applying the chain rule to the loss function gradients and utilizing (C.4), we have
$\langle W^{(t)} - W^*, \nabla_W L_i(W^{(t)})\rangle = \ell'\big(y_i f_{W^{(t)}}(x_i)\big)\cdot y_i\cdot\langle W^{(t)} - W^*, \nabla_W f_{W^{(t)}}(x_i)\rangle \ge \ell'\big(y_i f_{W^{(t)}}(x_i)\big)\cdot\big(y_i f_{W^{(t)}}(x_i) - y_i F_{W^{(0)},W^*}(x_i) + 2\epsilon_{\rm app}(\tau)\big) \ge (1 - 2\epsilon_{\rm app}(\tau))\,\ell\big(y_i f_{W^{(t)}}(x_i)\big) - \ell\big(y_i F_{W^{(0)},W^*}(x_i)\big),$   (C.5)
where the first inequality is by the fact that $\ell'\big(y_i f_{W^{(t)}}(x_i)\big) < 0$, and the second inequality is by the convexity of $\ell(\cdot)$ and the fact that $-\ell'\big(y_i f_{W^{(t)}}(x_i)\big) \le \ell\big(y_i f_{W^{(t)}}(x_i)\big)$.
We now proceed to bound the term $I_2$ on the R.H.S. of (C.1). Note that we have $\ell'(\cdot) < 0$, and therefore the Frobenius norm of the gradient $\nabla_{W_l} L_S(W^{(t)})$ can be upper bounded as follows:
$\|\nabla_{W_l} L_S(W^{(t)})\|_F = \bigg\|\frac{1}{n}\sum_{i=1}^n \ell'\big(y_i f_{W^{(t)}}(x_i)\big)\nabla_{W_l} f_{W^{(t)}}(x_i)\bigg\|_F \le \frac{1}{n}\sum_{i=1}^n -\ell'\big(y_i f_{W^{(t)}}(x_i)\big)\cdot\|\nabla_{W_l} f_{W^{(t)}}(x_i)\|_F,$
where the inequality follows by the triangle inequality. We now utilize the fact that the cross-entropy loss satisfies the inequalities $-\ell'(\cdot) \le \ell(\cdot)$ and $-\ell'(\cdot) \le 1$. Therefore, by the definition of $M(\tau)$, we have
$\sum_{l=1}^L \|\nabla_{W_l} L_S(W^{(t)})\|_F^2 \le O\big(LM(\tau)^2\big)\cdot\bigg(\frac{1}{n}\sum_{i=1}^n -\ell'\big(y_i f_{W^{(t)}}(x_i)\big)\bigg)^2 \le O\big(LM(\tau)^2\big)\cdot L_S(W^{(t)}).$   (C.6)
Then we can plug (C.5) and (C.6) into (C.1) and obtain
$\|W^{(t)} - W^*\|_F^2 - \|W^{(t+1)} - W^*\|_F^2 \ge \frac{2\eta}{n}\sum_{i=1}^n \Big[(1 - 2\epsilon_{\rm app}(\tau))\,\ell\big(y_i f_{W^{(t)}}(x_i)\big) - \ell\big(y_i F_{W^{(0)},W^*}(x_i)\big)\Big] - O\big(\eta^2 LM(\tau)^2\big)\cdot L_S(W^{(t)}) \ge \Big[\frac{3}{2} - 4\epsilon_{\rm app}(\tau)\Big]\eta L_S(W^{(t)}) - \frac{2\eta}{n}\sum_{i=1}^n \ell\big(y_i F_{W^{(0)},W^*}(x_i)\big),$
where the last inequality is by $\eta = O(L^{-1}M(\tau)^{-2})$ and merging the third term on the second line into the first term. Taking a telescoping sum from $t = 0$ to $t = t'-1$ and plugging in the definition $\frac{1}{n}\sum_{i=1}^n \ell\big(y_i F_{W^{(0)},W^*}(x_i)\big) = \epsilon_{\rm NTRF}$ completes the proof.
C.2 PROOF OF LEMMA A.3
Proof of Lemma A.3. We first denote $\mathcal{W} = B(W^{(0)}, \tilde R\cdot m^{-1/2})$, and define the corresponding neural network function class and surrogate loss function class as $\mathcal{F} = \{f_W(x) : W\in\mathcal{W}\}$ and $\mathcal{G} = \{-\ell'[y\cdot f_W(x)] : W\in\mathcal{W}\}$ respectively. By standard uniform convergence results in terms of empirical Rademacher complexity (Bartlett and Mendelson, 2002; Mohri et al., 2018; Shalev-Shwartz and Ben-David, 2014), with probability at least $1-\delta$ we have
$\sup_{W\in\mathcal{W}} |E_S(W) - E_D(W)| = \sup_{W\in\mathcal{W}}\bigg|-\frac{1}{n}\sum_{i=1}^n \ell'\big[y_i\cdot f_W(x_i)\big] + \mathbb{E}_{(x,y)\sim D}\,\ell'\big[y\cdot f_W(x)\big]\bigg| \le 2\hat{\mathcal{R}}_n(\mathcal{G}) + C_1\sqrt{\frac{\log(1/\delta)}{n}},$
where $C_1$ is an absolute constant, and
$\hat{\mathcal{R}}_n(\mathcal{G}) = \mathbb{E}_{\xi_i\sim{\rm Unif}(\{\pm1\})}\bigg\{\sup_{W\in\mathcal{W}}\frac{1}{n}\sum_{i=1}^n \xi_i\,\ell'\big[y_i\cdot f_W(x_i)\big]\bigg\}$
is the empirical Rademacher complexity of the function class G. We now provide two bounds on pRnpGq, whose combination gives the final result of Lemma A.3. First, by Corollary 5.35 in (Vershynin, 2010), with probability at least 1 ´ L ¨ expp´Ωpmqq, }Wp0ql }2 ď 3 for all l P rLs. Therefore for all W PW , we have }Wl}2 ď 4. Moreover, standard concentration inequalities on the norm of the first row of Wp0ql also implies that }Wl}2 ě 0.5 for all W PW and l P rLs. Therefore, an adaptation of the bound in (Bartlett et al., 2017)¶ gives
$\hat{\mathcal{R}}_n(\mathcal{F}) \le \tilde O\bigg(\sup_{W\in\mathcal{W}}\bigg\{\frac{m^{1/2}}{\sqrt{n}}\cdot\bigg[\prod_{l=1}^L\|W_l\|_2\bigg]\cdot\bigg[\sum_{l=1}^L\frac{\|W_l^\top - W_l^{(0)\top}\|_{2,1}^{2/3}}{\|W_l\|_2^{2/3}}\bigg]^{3/2}\bigg\}\bigg) \le \tilde O\bigg(\sup_{W\in\mathcal{W}}\bigg\{\frac{4^L m^{1/2}}{\sqrt{n}}\cdot\bigg[\sum_{l=1}^L\big(\sqrt{m}\cdot\|W_l^\top - W_l^{(0)\top}\|_F\big)^{2/3}\bigg]^{3/2}\bigg\}\bigg) \le \tilde O\bigg(4^L L^{3/2}\tilde R\cdot\sqrt{\frac{m}{n}}\bigg).$   (C.7)
We now derive the second bound on $\hat{\mathcal{R}}_n(\mathcal{G})$, which is inspired by the proof provided in (Cao and Gu, 2020). Since $y\in\{+1,-1\}$, $|\ell'(z)| \le 1$ and $\ell'(z)$ is 1-Lipschitz continuous, by standard empirical Rademacher complexity bounds (Bartlett and Mendelson, 2002; Mohri et al., 2018; Shalev-Shwartz and Ben-David, 2014), we have
$\hat{\mathcal{R}}_n(\mathcal{G}) \le \hat{\mathcal{R}}_n(\mathcal{F}) = \mathbb{E}_{\xi_i\sim{\rm Unif}(\{\pm1\})}\bigg[\sup_{W\in\mathcal{W}}\frac{1}{n}\sum_{i=1}^n \xi_i f_W(x_i)\bigg],$
¶Bartlett et al. (2017) only proved the Rademacher complexity bound for the composition of the ramp loss and the neural network function. In our setting essentially the ramp loss is replaced with the ´`1p¨q function, which is bounded and 1-Lipschitz continuous. The proof in our setting is therefore exactly the same as the proof given in (Bartlett et al., 2017), and we can apply Theorem 3.3 and Lemma A.5 in (Bartlett et al., 2017) to obtain the desired bound we present here.
where $\hat{\mathcal{R}}_n(\mathcal{F})$ is the empirical Rademacher complexity of the function class $\mathcal{F}$. We have
$\hat{\mathcal{R}}_n[\mathcal{F}] \le \underbrace{\mathbb{E}_\xi\bigg\{\sup_{W\in\mathcal{W}}\frac{1}{n}\sum_{i=1}^n \xi_i\big[f_W(x_i) - F_{W^{(0)},W}(x_i)\big]\bigg\}}_{I_1} + \underbrace{\mathbb{E}_\xi\bigg\{\sup_{W\in\mathcal{W}}\frac{1}{n}\sum_{i=1}^n \xi_i F_{W^{(0)},W}(x_i)\bigg\}}_{I_2}$
1. What is the focus of the paper regarding neural networks and their training?
looo | 1. What is the focus of the paper regarding neural networks and their training?
2. What are the strengths of the proposed approach, particularly in terms of theoretical analysis and guarantees?
3. Do you have any concerns or questions about the applicability of the theory and its limitations?
4. How does the reviewer assess the significance of the extension from shallow to deep networks?
5. Are there any questions regarding the generalization bounds and their implications for separable data? | Review | Review
The paper extends an existing proof for the sufficiency of polylogarithmic width for sharp learning guarantees of ReLU networks trained by (stochastic) gradient descent from shallow networks to deep networks. The theoretical analysis links the convergence of GD and SGD to the width of the network. The paper shows that polylogarithmic width is enough to give reasonable guarantees also for deep neural networks. It furthermore provides a generalisation bound in terms of network width.
The paper states a clearly formulated contribution and provides rigorous proofs for the claims stated. The analysis highly relies on the radius R of the NTRF function class and the authors provide a discussion showing the connection of the requirements on data separability to the values that R can take. The main lemma behind the proof and the sketch of the proof is also presented and explained.
The paper lacks a discussion of the applicability of the theory (e.g., how limiting are the assumptions of equal-width layers, the normalization by the square root of m, and the input norms being bounded by 1?); a discussion of how the results could be generalized would also be required. Also, since the paper extends a result on shallow networks to deep networks, it should be discussed how depth affects the results. The generalization guarantees provided in this paper are vacuous without strict assumptions on data separability, as the authors state. However, it remains unclear whether the guarantee is non-vacuous for separable data. Could the authors argue whether the bound is non-vacuous for separable data?
But overall the contribution seems interesting still, so I suggest accepting the paper.
ICLR | Title
How Much Over-parameterization Is Sufficient to Learn Deep ReLU Networks?
Abstract
A recent line of research on deep learning focuses on the extremely over-parameterized setting, and shows that when the network width is larger than a high degree polynomial of the training sample size n and the inverse of the target error ε^{-1}, deep neural networks learned by (stochastic) gradient descent enjoy nice optimization and generalization guarantees. Very recently, it was shown that under certain margin assumptions on the training data, a polylogarithmic width condition suffices for two-layer ReLU networks to converge and generalize (Ji and Telgarsky, 2020). However, whether deep neural networks can be learned with such a mild over-parameterization is still an open question. In this work, we answer this question affirmatively and establish sharper learning guarantees for deep ReLU networks trained by (stochastic) gradient descent. Specifically, under certain assumptions made in previous work, our optimization and generalization guarantees hold with network width polylogarithmic in n and ε^{-1}. Our results push the study of over-parameterized deep neural networks towards more practical settings.
1 INTRODUCTION
Deep neural networks have become one of the most important and prevalent machine learning models due to their remarkable power in many real-world applications. However, the success of deep learning has not been well-explained in theory. It remains mysterious why standard optimization algorithms tend to find a globally optimal solution, despite the highly non-convex landscape of the training loss function. Moreover, despite the extremely large amount of parameters, deep neural networks rarely over-fit, and can often generalize well to unseen data and achieve good test accuracy. Understanding these mysterious phenomena on the optimization and generalization of deep neural networks is one of the most fundamental problems in deep learning theory.
Recent breakthroughs have shed light on the optimization and generalization of deep neural networks (DNNs) under the over-parameterized setting, where the hidden layer width is extremely large (much larger than the number of training examples). It has been shown that with the standard random initialization, the training of over-parameterized deep neural networks can be characterized by a kernel function called neural tangent kernel (NTK) (Jacot et al., 2018; Arora et al., 2019b). In the neural tangent kernel regime (or lazy training regime (Chizat et al., 2019)), the neural network function behaves similarly as its first-order Taylor expansion at initialization (Jacot et al., 2018; Lee et al., 2019; Arora et al., 2019b; Cao and Gu, 2019), which enables feasible optimization and generalization analysis. In terms of optimization, a line of work (Du et al., 2019b; Allen-Zhu et al., 2019b; Zou et al., 2019; Zou and Gu, 2019) proved that for sufficiently wide neural networks, (stochastic) gradient descent (GD/SGD) can successfully find a global optimum of the training loss function. For generalization, Allen-Zhu et al. (2019a); Arora et al. (2019a); Cao and Gu (2019) established generalization bounds of neural networks trained with (stochastic) gradient descent, and showed that the neural networks can learn target functions in certain reproducing kernel Hilbert space (RKHS) or the corresponding random feature function class.
Although existing results in the neural tangent kernel regime have provided important insights into the learning of deep neural networks, they require the neural network to be extremely wide. ∗Equal contribution.
The typical requirement on the network width is a high degree polynomial of the training sample size n and the inverse of the target error ´1. As there still remains a huge gap between such network width requirement and the practice, many attempts have been made to improve the overparameterization condition under various conditions on the training data and model initialization (Oymak and Soltanolkotabi, 2019; Zou and Gu, 2019; Kawaguchi and Huang, 2019; Bai and Lee, 2019). For two-layer ReLU networks, a recent work (Ji and Telgarsky, 2020) showed that when the training data are well separated, polylogarithmic width is sufficient to guarantee good optimization and generalization performances. However, their results cannot be extended to deep ReLU networks since their proof technique largely relies on the fact that the network model is 1-homogeneous, which cannot be satisfied by DNNs. Therefore, whether deep neural networks can be learned with such a mild over-parameterization is still an open problem.
In this paper, we resolve this open problem by showing that polylogarithmic network width is sufficient to learn DNNs. In particular, unlike the existing works that require the DNNs to behave very close to a linear model (up to some small approximation error), we show that a constant linear approximation error is sufficient to establish nice optimization and generalization guarantees for DNNs. Thanks to the relaxed requirement on the linear approximation error, a milder condition on the network width and tighter bounds on the convergence rate and generalization error can be proved. We summarize our contributions as follows:
• We establish the global convergence guarantee of GD for training deep ReLU networks based on the so-called NTRF function class (Cao and Gu, 2019), a set of linear functions over random features. Specifically, we prove that GD can learn deep ReLU networks with width m “ polypRq to compete with the best function in NTRF function class, where R is the radius of the NTRF function class.
• We also establish the generalization guarantees for both GD and SGD in the same setting. Specifically, we prove a diminishing statistical error for a wide range of network width $m \in [\tilde\Omega(1), \infty)$, while most of the previous generalization bounds in the NTK regime only work in the setting where the network width $m$ is much greater than the sample size $n$. Moreover, we establish $\tilde O(\epsilon^{-2})$ and $\tilde O(\epsilon^{-1})$ sample complexities for GD and SGD respectively, which are tighter than existing bounds for learning deep ReLU networks (Cao and Gu, 2019), and match the best results when reduced to the two-layer case (Arora et al., 2019b; Ji and Telgarsky, 2020).
• We further generalize our theoretical analysis to the scenarios with different data separability assumptions in the literature. We show if a large fraction of the training data are well separated, the best function in the NTRF function class with radius R “ rOp1q can learn the training data with error up to . This together with our optimization and generalization guarantees immediately suggests that deep ReLU networks can be learned with network width m “ rΩp1q, which has a logarithmic dependence on the target error and sample size n. Compared with existing results (Cao and Gu, 2020; Ji and Telgarsky, 2020) which require all training data points to be separated in the NTK regime, our result is stronger since it allows the NTRF function class to misclassify a small proportion of the training data.
For the ease of comparison, we summarize our results along with the most related previous results in Table 1, in terms of data assumption, the over-parameterization condition and sample complexity. It can be seen that under data separation assumptions (see Sections 4.1, 4.2), our result improves existing results for learning deep neural networks by only requiring a polylog(n, ε^{-1}) network width.
Notation. For two scalars $a$ and $b$, we denote $a\wedge b = \min\{a,b\}$. For a vector $x\in\mathbb{R}^d$ we use $\|x\|_2$ to denote its Euclidean norm. For a matrix $X$, we use $\|X\|_2$ and $\|X\|_F$ to denote its spectral norm and Frobenius norm respectively, and denote by $X_{ij}$ the entry of $X$ at the $i$-th row and $j$-th column. Given two matrices $X$ and $Y$ with the same dimension, we denote $\langle X, Y\rangle = \sum_{i,j} X_{ij}Y_{ij}$. Given a collection of matrices $W = \{W_1,\dots,W_L\}\in\otimes_{l=1}^L\mathbb{R}^{m_l\times m'_l}$ and a function $f(W)$ over $\otimes_{l=1}^L\mathbb{R}^{m_l\times m'_l}$, we define by $\nabla_{W_l} f(W)$ the partial gradient of $f(W)$ with respect to $W_l$ and denote $\nabla_W f(W) = \{\nabla_{W_l} f(W)\}_{l=1}^L$. We also denote $B(W,\tau) = \{W' : \max_{l\in[L]}\|W'_l - W_l\|_F \le \tau\}$ for $\tau \ge 0$. For two collections of matrices $A = \{A_1,\dots,A_n\}$ and $B = \{B_1,\dots,B_n\}$, we denote $\langle A, B\rangle = \sum_{i=1}^n\langle A_i, B_i\rangle$ and $\|A\|_F^2 = \sum_{i=1}^n\|A_i\|_F^2$.
Algorithm 1 Gradient descent (GD) with random initialization
Input: Number of iterations $T$, step size $\eta$, training set $S = \{(x_i, y_i)\}_{i=1}^n$, initialization $W^{(0)}$
for $t = 1, 2, \dots, T$ do
  Update $W^{(t)} = W^{(t-1)} - \eta\cdot\nabla_W L_S(W^{(t-1)})$.
end for
Output: $W^{(0)},\dots,W^{(T)}$.
Given two sequences txnu and tynu, we denote xn “ Opynq if |xn| ď C1|yn| for some absolute positive constant C1, xn “ Ωpynq if |xn| ě C2|yn| for some absolute positive constant C2, and xn “ Θpynq if C3|yn| ď |xn| ď C4|yn| for some absolute constants C3, C4 ą 0. We also use rOp¨q, rΩp¨q to hide logarithmic factors inOp¨q and Ωp¨q respectively. Additionally, we denote xn “ polypynq if xn “ OpyDn q for some positive constant D, and xn “ polylogpynq if xn “ polyplogpynqq.
2 PRELIMINARIES ON LEARNING NEURAL NETWORKS
In this section, we introduce the problem setting in this paper, including definitions of the neural network and loss functions, and the training algorithms, i.e., GD and SGD with random initialization.
Neural network function. Given an input x P Rd, the output of deep fully-connected ReLU network is defined as follows,
$f_W(x) = m^{1/2} W_L\sigma(W_{L-1}\cdots\sigma(W_1 x)\cdots)$, where $W_1\in\mathbb{R}^{m\times d}$, $W_2,\dots,W_{L-1}\in\mathbb{R}^{m\times m}$, $W_L\in\mathbb{R}^{1\times m}$, and $\sigma(x) = \max\{0, x\}$ is the ReLU activation function. Here, without loss of generality, we assume the width of each layer is equal to $m$; our theoretical results can be easily generalized to the setting with unequal layer widths, as long as the smallest width satisfies our over-parameterization condition. We denote the collection of all weight matrices as $W = \{W_1,\dots,W_L\}$.
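A minimal numpy sketch of this forward pass follows; the list of weight matrices is assumed to be given in the shapes stated above.

```python
import numpy as np

def f_W(Ws, x):
    """f_W(x) = m^{1/2} * W_L sigma(W_{L-1} ... sigma(W_1 x) ...), sigma = ReLU.
    Ws = [W_1, ..., W_L] with W_1: (m, d), W_2..W_{L-1}: (m, m), W_L: (1, m)."""
    h = x
    for Wl in Ws[:-1]:
        h = np.maximum(Wl @ h, 0.0)   # sigma(W_l h)
    m = Ws[0].shape[0]
    return float(np.sqrt(m) * (Ws[-1] @ h))
```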
LSpWq “ 1
n
n ÿ i“1 LipWq,
where LipWq “ ` ` yifWpxiq ˘ “ log ` 1` expp´yifWpxiqq ˘ is defined as the cross-entropy loss.
Algorithms. We consider both GD and SGD with Gaussian random initialization. These two algorithms are displayed in Algorithms 1 and 2 respectively. Specifically, the entries in Wp0q1 , ¨ ¨ ¨ ,W p0q L´1 are generated independently from univariate Gaussian distributionNp0, 2{mq and the entries in Wp0qL are generated independently from Np0, 1{mq. For GD, we consider using the full gradient to update the model parameters. For SGD, we use a new training data point in each iteration.
Note that our initialization method in Algorithms 1, 2 is the same as the widely used He initialization (He et al., 2015). Our neural network parameterization is also consistent with the parameterization used in prior work on NTK (Jacot et al., 2018; Allen-Zhu et al., 2019b; Du et al., 2019a; Arora et al., 2019b; Cao and Gu, 2019).
Algorithm 2 Stochastic gradient descent (SGD) with random initialization
Input: Number of iterations $n$, step size $\eta$, initialization $W^{(0)}$
for $i = 1, 2, \dots, n$ do
  Draw $(x_i, y_i)$ from $D$ and compute the corresponding gradient $\nabla_W L_i(W^{(i-1)})$.
  Update $W^{(i)} = W^{(i-1)} - \eta\cdot\nabla_W L_i(W^{(i-1)})$.
end for
Output: Randomly choose $\hat W$ uniformly from $\{W^{(0)},\dots,W^{(n-1)}\}$.
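For concreteness, the two algorithms can be written out as a short training script. The sketch below is illustrative only: the gradients are computed by manual backpropagation through the scaled ReLU network defined above, and all hyperparameters are placeholders rather than the theoretically prescribed values.

```python
import numpy as np

def init_weights(d, m, L, seed=0):
    """Random init as in Algorithms 1-2: W_1..W_{L-1} ~ N(0, 2/m), W_L ~ N(0, 1/m)."""
    rng = np.random.default_rng(seed)
    Ws = [rng.normal(0.0, np.sqrt(2.0 / m), (m, d))]
    Ws += [rng.normal(0.0, np.sqrt(2.0 / m), (m, m)) for _ in range(L - 2)]
    Ws.append(rng.normal(0.0, np.sqrt(1.0 / m), (1, m)))
    return Ws

def loss_and_grads(Ws, x, y):
    """Cross-entropy loss l(y f_W(x)) and its gradient w.r.t. each W_l (manual backprop)."""
    m = Ws[0].shape[0]
    hs, pre, h = [x], [], x
    for Wl in Ws[:-1]:
        z = Wl @ h; pre.append(z); h = np.maximum(z, 0.0); hs.append(h)
    f = float(np.sqrt(m) * (Ws[-1] @ h))
    loss = np.log1p(np.exp(-y * f))
    dldf = -y / (1.0 + np.exp(y * f))                      # d loss / d f
    grads = [None] * len(Ws)
    grads[-1] = dldf * np.sqrt(m) * hs[-1][None, :]        # d loss / d W_L
    delta = dldf * np.sqrt(m) * Ws[-1].ravel() * (pre[-1] > 0)
    for l in range(len(Ws) - 2, -1, -1):
        grads[l] = np.outer(delta, hs[l])
        if l > 0:
            delta = (Ws[l].T @ delta) * (pre[l - 1] > 0)
    return loss, grads

def gd(Ws, X, Y, T, eta):
    """Algorithm 1: full-batch gradient descent on L_S."""
    for _ in range(T):
        acc = [np.zeros_like(W) for W in Ws]
        for x, y in zip(X, Y):
            _, g = loss_and_grads(Ws, x, y)
            for a, gl in zip(acc, g):
                a += gl
        Ws = [W - eta * a / len(X) for W, a in zip(Ws, acc)]
    return Ws

def sgd(Ws, stream, eta):
    """Algorithm 2: one online pass over freshly drawn examples (x_i, y_i)."""
    iterates = [Ws]
    for x, y in stream:
        _, g = loss_and_grads(Ws, x, y)
        Ws = [W - eta * gl for W, gl in zip(Ws, g)]
        iterates.append(Ws)
    return iterates
```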
3 MAIN THEORY
In this section, we present the optimization and generalization guarantees of GD and SGD for learning deep ReLU networks. We first make the following assumption on the training data points. Assumption 3.1. All training data points satisfy }xi}2 “ 1, i “ 1, . . . , n.
This assumption has been widely made in many previous works (Allen-Zhu et al., 2019b;c; Du et al., 2019b;a; Zou et al., 2019) in order to simplify the theoretical analysis. This assumption can be relaxed to be upper bounded and lower bounded by some constant.
In the following, we give the definition of Neural Tangent Random Feature (NTRF) (Cao and Gu, 2019), which characterizes the functions learnable by over-parameterized ReLU networks.
Definition 3.2 (Neural Tangent Random Feature, (Cao and Gu, 2019)). Let $W^{(0)}$ be the initialization weights, and $F_{W^{(0)},W}(x) = f_{W^{(0)}}(x) + \langle\nabla f_{W^{(0)}}(x), W - W^{(0)}\rangle$ be a function with respect to the input $x$. Then the NTRF function class is defined as
$F(W^{(0)}, R) = \big\{F_{W^{(0)},W}(\cdot) : W\in B(W^{(0)}, R\cdot m^{-1/2})\big\}.$
The function class FWp0q,Wpxq consists of linear models over random features defined based on the network gradients at the initialization. Therefore it captures the key “almost linear” property of wide neural networks in the NTK regime (Lee et al., 2019; Cao and Gu, 2019). In this paper, we use the NTRF function class as a reference class to measure the difficulty of a learning problem. In what follows, we deliver our main theoretical results regarding the optimization and generalization guarantees of learning deep ReLU networks. We study both GD and SGD with random initialization (presented in Algorithms 1 and 2).
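Since $F_{W^{(0)},W}(x)$ is the first-order Taylor expansion of the network around its initialization, it can be evaluated without forming the full gradient explicitly. The sketch below is a minimal illustration: the inner product with $\nabla f_{W^{(0)}}(x)$ is approximated by a one-sided finite difference, and f_W refers to a forward pass such as the one sketched in Section 2.

```python
def ntrf_output(f_W, Ws0, Ws, x, fd_step=1e-5):
    """F_{W0,W}(x) = f_{W0}(x) + <grad f_{W0}(x), W - W0>, with the inner product
    approximated by (f_{W0 + s(W - W0)}(x) - f_{W0}(x)) / s for a small step s."""
    f0 = f_W(Ws0, x)
    Ws_mid = [W0 + fd_step * (W - W0) for W0, W in zip(Ws0, Ws)]
    return f0 + (f_W(Ws_mid, x) - f0) / fd_step
```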
3.1 GRADIENT DESCENT
The following theorem establishes the optimization guarantee of GD for training deep ReLU networks for binary classification.
Theorem 3.3. For $\delta, R > 0$, let $\epsilon_{\rm NTRF} = \inf_{F\in F(W^{(0)},R)} n^{-1}\sum_{i=1}^n \ell[y_i F(x_i)]$ be the minimum training loss achievable by functions in $F(W^{(0)}, R)$. Then there exists
$m^*(\delta, R, L) = \tilde O\big({\rm poly}(R, L)\cdot\log^{4/3}(n/\delta)\big),$
such that if $m \ge m^*(\delta, R, L)$, with probability at least $1-\delta$ over the initialization, GD with step size $\eta = \Theta(L^{-1}m^{-1})$ can train a neural network to achieve at most $3\epsilon_{\rm NTRF}$ training loss within $T = O(L^2R^2\epsilon_{\rm NTRF}^{-1})$ iterations.
Theorem 3.3 shows that the deep ReLU network trained by GD can compete with the best function in the NTRF function class FpWp0q, Rq if the network width has a polynomial dependency in R and L and a logarithmic dependency in n and 1{δ. Moreover, if the NTRF function class with R “ rOp1q can learn the training data well (i.e., NTRF is less than a small target error ), a polylogarithmic (in terms of n and ´1) network width suffices to guarantee the global convergence of GD, which directly improves over-paramterization condition in the most related work (Cao and Gu, 2019). Besides, we remark here that this assumption on the NTRF function class can be easily satisfied when the training data admits certain separability conditions, which we discuss in detail in Section 4.
Compared with the results in (Ji and Telgarsky, 2020) which give similar network width requirements for two-layer networks, our result works for deep networks. Moreover, while Ji and Telgarsky (2020)
essentially required all training data to be separable by a function in the NTRF function class with a constant margin, our result does not require such data separation assumptions, and allows the NTRF function class to misclassify a small proportion of the training data points∗.
We now characterize the generalization performance of neural networks trained by GD. We denote $L_D^{0-1}(W) = \mathbb{E}_{(x,y)\sim D}[\mathbf{1}\{f_W(x)\cdot y < 0\}]$ as the expected 0-1 loss (i.e., expected error) of $f_W(x)$.
Theorem 3.4. Under the same assumptions as Theorem 3.3, with probability at least $1-\delta$, the iterates $W^{(t)}$ of Algorithm 1 satisfy
$L_D^{0-1}(W^{(t)}) \le 2L_S(W^{(t)}) + \tilde O\bigg(4^L L^2 R\sqrt{\frac{m}{n}}\wedge\Big(\frac{L^{3/2}R}{\sqrt{n}} + \frac{L^{11/3}R^{4/3}}{m^{1/6}}\Big)\bigg) + O\bigg(\sqrt{\frac{\log(1/\delta)}{n}}\bigg)$
for all $t = 0,\dots,T$.
Theorem 3.4 shows that the test error of the trained neural network can be bounded by its training error plus statistical error terms. Note that the statistical error terms is in the form of a minimum between two terms 4LL2R a m{n and L3{2R{ ? n` L11{3R4{3{m1{6. Depending on the network width m, one of these two terms will be the dominating term and diminishes for large n: (1) if m “ opnq, the statistical error will be 4LL2R a
m{n, and diminishes as n increases; and (2) if m “ Ωpnq, the statistical error is L3{2R{ ? n` L11{3R4{3{m1{6, and again goes to zero as n increases. Moreover,
in this paper we have a specific focus on the setting m “ rOp1q, under which Theorem 3.4 gives a statistical error of order rOpn´1{2q. This distinguishes our result from previous generalization bounds for deep networks (Cao and Gu, 2020; 2019), which cannot be applied to the setting m “ rOp1q. We note that for two-layer ReLU networks (i.e., L “ 2) Ji and Telgarsky (2020) proves a tighter rOp1{n1{2q generalization error bound regardless of the neural networks width m, while our result (Theorem 3.4), in the two-layer case, can only give rOp1{n1{2q generalization error bound when m “ rOp1q or m “ rΩpn3q. However, different from our proof technique that basically uses the (approximated) linearity of the neural network function, their proof technique largely relies on the 1-homogeneous property of the neural network, which restricted their theory in two-layer cases. An interesting research direction is to explore whether a rOp1{n1{2q generalization error bound can be also established for deep networks (regardless of the network width), which we will leave it as a future work.
3.2 STOCHASTIC GRADIENT DESCENT
Here we study the performance of SGD for training deep ReLU networks. The following theorem establishes a generalization error bound for the output of SGD.
Theorem 3.5. For $\delta, R > 0$, let $\epsilon_{\rm NTRF} = \inf_{F\in F(W^{(0)},R)} n^{-1}\sum_{i=1}^n \ell[y_i F(x_i)]$ be the minimum training loss achievable by functions in $F(W^{(0)}, R)$. Then there exists
$m^*(\delta, R, L) = \tilde O\big({\rm poly}(R, L)\cdot\log^{4/3}(n/\delta)\big),$
such that if $m \ge m^*(\delta, R, L)$, with probability at least $1-\delta$, SGD with step size $\eta = \Theta\big(m^{-1}\cdot(LR^2n^{-1}\epsilon_{\rm NTRF}^{-1}\wedge L^{-1})\big)$ achieves
$\mathbb{E}[L_D^{0-1}(\hat W)] \le \frac{8L^2R^2}{n} + \frac{8\log(2/\delta)}{n} + 24\epsilon_{\rm NTRF},$
where the expectation is taken over the uniform draw of $\hat W$ from $\{W^{(0)},\dots,W^{(n-1)}\}$.
For any $\epsilon > 0$, Theorem 3.5 gives an $\tilde O(\epsilon^{-1})$ sample complexity for deep ReLU networks trained with SGD to achieve $O(\epsilon_{\rm NTRF} + \epsilon)$ test error. Our result extends the result for two-layer networks proved in (Ji and Telgarsky, 2020) to multi-layer networks. Theorem 3.5 also provides sharper results compared with Allen-Zhu et al. (2019a); Cao and Gu (2019) in two aspects: (1) the sample complexity is improved from $n = \tilde O(\epsilon^{-2})$ to $n = \tilde O(\epsilon^{-1})$; and (2) the over-parameterization condition is improved from $m \ge {\rm poly}(\epsilon^{-1})$ to $m = \tilde\Omega(1)$. ∗A detailed discussion is given in Section 4.2.
4 DISCUSSION ON THE NTRF CLASS
Our theoretical results in Section 3 rely on the radius (i.e.,R) of the NTRF function classFpWp0q, Rq and the minimum training loss achievable by functions in FpWp0q, Rq, i.e., NTRF. Note that a larger R naturally implies a smaller NTRF, but also leads to worse conditions on m. In this section, for any (arbitrarily small) target error rate ą 0, we discuss various data assumptions studied in the literature under which our results can lead to Op q training/test errors, and specify the network width requirement.
4.1 DATA SEPARABILITY BY NEURAL TANGENT RANDOM FEATURE
In this subsection, we consider the setting where a large fraction of the training data can be linearly separated by the neural tangent random features. The assumption is stated as follows.
Assumption 4.1. There exists a collection of matrices $U^* = \{U^*_1,\dots,U^*_L\}$ satisfying $\sum_{l=1}^L\|U^*_l\|_F^2 = 1$, such that for at least a $(1-\rho)$ fraction of the training data we have
$y_i\langle\nabla f_{W^{(0)}}(x_i), U^*\rangle \ge m^{1/2}\gamma,$
where $\gamma$ is an absolute positive constant† and $\rho\in[0,1)$.
The following proposition provides an upper bound on $\epsilon_{\rm NTRF}$ under Assumption 4.1 for a certain choice of $R$.
Proposition 4.2. Under Assumption 4.1, for any $\epsilon, \delta > 0$, if $R \ge C\big[\log^{1/2}(n/\delta) + \log(1/\epsilon)\big]/\gamma$ for some absolute constant $C$, then with probability at least $1-\delta$,
$\epsilon_{\rm NTRF} := \inf_{F\in F(W^{(0)},R)} n^{-1}\sum_{i=1}^n \ell\big(y_i F(x_i)\big) \le \epsilon + \rho\cdot O(R).$
Proposition 4.2 covers the setting where the NTRF function class is allowed to misclassify training data, while most of existing work typically assumes that all training data can be perfectly separated with constant margin (i.e., ρ “ 0) (Ji and Telgarsky, 2020; Shamir, 2020). Our results show that for sufficiently small misclassification ratio ρ “ Op q, we have NTRF “ rOp q by choosing the radius parameter R logarithimic in n, δ´1, and ´1. Substituting this result into Theorems 3.3, 3.4 and 3.5, it can be shown that a neural network with width m “ polypL, logpn{δq, logp1{ qq ˘ suffices to guarantee good optimization and generalization performances for both GD and SGD. Consequently, we can obtain that the bounds on the test error for GD and SGD are rOpn´1{2q and rOpn´1q respectively.
4.2 DATA SEPARABILITY BY SHALLOW NEURAL TANGENT MODEL
In this subsection, we study the data separation assumption made in Ji and Telgarsky (2020) and show that our results cover this particular setting. We first restate the assumption as follows.
Assumption 4.3. There exists up¨q : Rd Ñ Rd and γ ě 0 such that }upzq}2 ď 1 for all z P Rd, and
yi
ż
Rd σ1pxz,xiyq ¨ xupzq,xiydµNpzq ě γ
for all i P rns, where µN p¨q denotes the standard normal distribution.
Assumption 4.3 is related to the linear separability of the gradients of the first layer parameters at random initialization, where the randomness is replaced with an integral by taking the infinite width limit. Note that similar assumptions have also been studied in (Cao and Gu, 2020; Nitanda and Suzuki, 2019; Frei et al., 2019). The assumption made in (Cao and Gu, 2020; Frei et al., 2019) uses gradients with respect to the second layer weights instead of the first layer ones. In the following, we mainly focus on Assumption 4.3, while our result can also be generalized to cover the setting in (Cao and Gu, 2020; Frei et al., 2019).
†The factor m1{2 is introduced here since }∇Wp0qfpxiq}F is typically of order Opm 1{2q.
In order to make a fair comparison, we reduce our results for multilayer networks to the two-layer setting. In this case, the neural network function takes form
fWpxq “ m1{2W2σpW1xq. Then we provide the following proposition, which states that Assumption 4.3 implies a certain choice of R “ rOp1q such the the minimum training loss achieved by the function in the NTRF function class FpWp0q, Rq satisfies NTRF “ Op q, where is the target error. Proposition 4.4. Suppose the training data satisfies Assumption 4.3. For any , δ ą 0, let R “ C “ logpn{δq ` logp1{ q ‰
{γ for some large enough absolute constant C. If the neural network width satisfies m “ Ω ` logpn{δq{γ2 ˘
, then with probability at least 1 ´ δ, there exist FWp0q,Wpxiq P FpWp0q, Rq such that ` `
yi ¨ FWp0q,Wpxiq ˘ ď ,@i P rns.
Proposition 4.4 shows that under Assumption 4.3, there exists FWp0q,Wp¨q P FpWp0q, Rq with R “ rOp1{γq such that the cross-entropy loss of FWp0q,Wp¨q at each training data point is bounded by . This implies that NTRF ď . Moreover, by applying Theorem 3.3 with L “ 2, the condition on the neural network width becomes m “ rΩp1{γ8q‡, which matches the results proved in Ji and Telgarsky (2020). Moreover, plugging these results on m and NTRF into Theorems 3.4 and 3.5, we can conclude that the bounds on the test error for GD and SGD are rOpn´1{2q and rOpn´1q respectively.
4.3 CLASS-DEPENDENT DATA NONDEGENERATION
In previous subsections, we have shown that under certain data separation conditions NTRF can be sufficiently small while the corresponding NTRF function class has R of order rOp1q. Thus neural networks with polylogarithmic width enjoy nice optimization and generalization guarantees. In this part, we consider the following much milder data separability assumption made in Zou et al. (2019). Assumption 4.5. For all i ‰ i1 if yi ‰ yi1 , then }xi ´ xj}2 ě φ for some absolute constant φ.
In contrast to the conventional data nondegeneration assumption (i.e., no duplicate data points) made in Allen-Zhu et al. (2019b); Du et al. (2019b;a); Zou and Gu (2019)§, Assumption 4.5 only requires that the data points from different classes are nondegenerate, thus we call it class-dependent data nondegeneration.
We have the following proposition which shows that Assumption 4.5 also implies the existence of a good function that achieves training error, in the NTRF function class with a certain choice of R. Proposition 4.6. Under Assumption 4.5, if
R “ Ω ` n3{2φ´1{2 logpnδ´1 ´1q ˘ , m “ rΩ ` L22n12φ´4 ˘ ,
we have NTRF ď with probability at least 1´ δ.
Proposition 4.6 suggests that under Assumption 4.5, in order to guarantee NTRF ď , the size of NTRF function class needs to be Ωpn3{2q. Plugging this into Theorems 3.4 and 3.5 leads to vacuous bounds on the test error. This makes sense since Assumption 4.5 basically covers the “random label” setting, which is impossible to be learned with small generalization error. Moreover, we would like to point out our theoretical analysis leads to a sharper over-parameterization condition than that proved in Zou et al. (2019), i.e., m “ rΩ ` n14L16φ´4`n12L16φ´4 ´1 ˘
, if the network depth satisfies L ď rOpn1{3 _ ´1{6q.
5 PROOF SKETCH OF THE MAIN THEORY
In this section, we introduce a key technical lemma in Section 5.1, based on which we provide a proof sketch of Theorems 3.3. The full proof of all our results can be found in the appendix. ‡We have shown in the proof of Theorem 3.3 that m “ rΩpR8q (see (A.1) for more detail). §Specifically, Allen-Zhu et al. (2019b); Zou and Gu (2019) require that any two data points (rather than data points from different classes) are separated by a positive distance. Zou and Gu (2019) shows that this assumption is equivalent to those made in Du et al. (2019b;a), which require that the composite kernel matrix is strictly positive definite.
5.1 A KEY TECHNICAL LEMMA
Here we introduce a key technical lemma used in the proof of Theorem 3.3.
Our proof is based on the key observation that near initialization, the neural network function can be approximated by its first-order Taylor expansion. In the following, we first give the definition of the linear approximation error in a τ-neighborhood around initialization:
ε_app(τ) := sup_{i=1,...,n} sup_{W′, W ∈ B(W^(0), τ)} | f_{W′}(x_i) − f_W(x_i) − ⟨∇f_W(x_i), W′ − W⟩ |.
If all the iterates of GD stay inside a neighborhood around initialization with small linear approximation error, then we may expect that the training of neural networks should be similar to the training of the corresponding linear model, where standard optimization techniques can be applied. Motivated by this, we also give the following definition on the gradient upper bound of neural networks around initialization, which is related to the Lipschitz constant of the optimization objective function.
M(τ) := sup_{i=1,...,n} sup_{l=1,...,L} sup_{W ∈ B(W^(0), τ)} ‖∇_{W_l} f_W(x_i)‖_F.
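Both quantities can be estimated empirically by sampling weights from the ball B(W^(0), τ). The sketch below does this for a two-layer ReLU network of the form used in Section 4; it is only a Monte Carlo illustration (the width, τ, and number of samples are arbitrary choices, and the sampled maximum is a lower estimate of the supremum).

```python
import numpy as np

rng = np.random.default_rng(1)
d, m, tau, trials = 10, 256, 0.05, 200
x = rng.standard_normal(d); x /= np.linalg.norm(x)
W1_0 = rng.normal(0.0, np.sqrt(2.0 / m), size=(m, d))
W2_0 = rng.normal(0.0, np.sqrt(1.0 / m), size=m)

def f(W1, W2):                              # f_W(x) = m^{1/2} * W2 @ relu(W1 @ x)
    return np.sqrt(m) * float(W2 @ np.maximum(W1 @ x, 0.0))

def grads(W1, W2):                          # gradients of f_W(x) w.r.t. W1 and W2
    pre = W1 @ x
    gW1 = np.sqrt(m) * (W2 * (pre > 0))[:, None] * x[None, :]
    gW2 = np.sqrt(m) * np.maximum(pre, 0.0)
    return gW1, gW2

def random_point():                         # a random point in B(W^(0), tau)
    D1, D2 = rng.standard_normal((m, d)), rng.standard_normal(m)
    scale = tau / np.sqrt(np.sum(D1**2) + np.sum(D2**2))
    return W1_0 + scale * D1, W2_0 + scale * D2

eps_app, M = 0.0, 0.0
for _ in range(trials):
    (W1a, W2a), (W1b, W2b) = random_point(), random_point()
    gW1, gW2 = grads(W1a, W2a)
    lin = f(W1a, W2a) + np.sum(gW1 * (W1b - W1a)) + gW2 @ (W2b - W2a)
    eps_app = max(eps_app, abs(f(W1b, W2b) - lin))          # linearization error
    M = max(M, np.linalg.norm(gW1), np.linalg.norm(gW2))    # layer-wise gradient norm
print(f"estimated eps_app(tau) ~ {eps_app:.3e}, M(tau) ~ {M:.3e}")
```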
By definition, we can choose W* ∈ B(W^(0), Rm^{−1/2}) such that n^{−1}∑_{i=1}^n ℓ(y_i F_{W^(0),W*}(x_i)) = ε_NTRF. Then we have the following lemma.
Lemma 5.1. Set η = O(L^{−1}M(τ)^{−2}). Suppose that W* ∈ B(W^(0), τ) and W^(t) ∈ B(W^(0), τ) for all 0 ≤ t ≤ t′ − 1. Then it holds that
(1/t′) ∑_{t=0}^{t′−1} L_S(W^(t)) ≤ ( ‖W^(0) − W*‖_F² − ‖W^(t′) − W*‖_F² + 2t′η·ε_NTRF ) / ( t′η·(3/2 − 4ε_app(τ)) ).
Lemma 5.1 plays a central role in our proof. Specifically, if W^(t) ∈ B(W^(0), τ) for all t ≤ t′, then Lemma 5.1 implies that the average training loss is of the same order as ε_NTRF as long as the linear approximation error ε_app(τ) is bounded by a positive constant. This is in contrast to the proof in Cao and Gu (2019), where ε_app(τ) appears as an additive term in the upper bound of the training loss, thus requiring ε_app(τ) = O(ε_NTRF) to achieve the same error bound as in Lemma 5.1. Since we can show that ε_app = Õ(m^{−1/6}) (see Section A.1), this suggests that m = Ω̃(1) is sufficient to make the average training loss of the same order as ε_NTRF.
Compared with the recent results for two-layer networks by Ji and Telgarsky (2020), Lemma 5.1 is proved with different techniques. In specific, the proof by Ji and Telgarsky (2020) relies on the 1-homogeneous property of the ReLU activation function, which limits their analysis to two-layer networks with fixed second layer weights. In comparison, our proof does not rely on homogeneity, and is purely based on the linear approximation property of neural networks and some specific properties of the loss function. Therefore, our proof technique can handle deep networks, and is potentially applicable to non-ReLU activation functions and other network architectures (e.g, Convolutional neural networks and Residual networks).
5.2 PROOF SKETCH OF THEOREM 3.3
Here we provide a proof sketch of Theorem 3.3. The proof consists of two steps: (i) showing that all T iterates stay close to initialization, and (ii) bounding the empirical loss achieved by gradient descent. Both of these steps are proved based on Lemma 5.1.
Proof sketch of Theorem 3.3. Recall that we choose W* ∈ B(W^(0), Rm^{−1/2}) such that n^{−1}∑_{i=1}^n ℓ(y_i F_{W^(0),W*}(x_i)) = ε_NTRF. We set τ = Õ(L^{1/2}m^{−1/2}R), which is chosen slightly larger than m^{−1/2}R since Lemma 5.1 requires the region B(W^(0), τ) to include both W* and {W^(t)}_{t=0,...,t′}. Then by Lemmas 4.1 and B.3 in Cao and Gu (2019) we know that ε_app(τ) = Õ(τ^{4/3}m^{1/2}L³) = Õ(R^{4/3}L^{11/3}m^{−1/6}). Therefore, we can set m = Ω̃(R⁸L²²) to ensure that ε_app(τ) ≤ 1/8.
Then we proceed to show that all iterates stay inside the region B(W^(0), τ). Since the L.H.S. of Lemma 5.1 is nonnegative and ε_app(τ) ≤ 1/8, we have for all t ≤ T,
‖W^(0) − W*‖_F² − ‖W^(t) − W*‖_F² ≥ −2tη·ε_NTRF,
which gives an upper bound on ‖W^(t) − W*‖_F. Then by the choice of η, T, the triangle inequality, and a simple induction argument, we see that ‖W^(t) − W^(0)‖_F ≤ m^{−1/2}R + √(2Tη·ε_NTRF) = Õ(L^{1/2}m^{−1/2}R), which verifies that W^(t) ∈ B(W^(0), τ) for t = 0, ..., T − 1. The second step is to show that GD can find a neural network with at most 3ε_NTRF training loss within T iterations. To show this, by the bound given in Lemma 5.1 with ε_app ≤ 1/8, we drop the term ‖W^(T) − W*‖_F² and rearrange the inequality to obtain
(1/T) ∑_{t=0}^{T−1} L_S(W^(t)) ≤ (1/(ηT))·‖W^(0) − W*‖_F² + 2ε_NTRF.
We see that T is large enough to ensure that the first term in the bound above is smaller than ε_NTRF. This implies that the best iterate among W^(0), ..., W^(T−1) achieves an empirical loss of at most 3ε_NTRF.
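The two-step argument above translates directly into a toy gradient-descent loop. The sketch below trains a two-layer ReLU network on synthetic data with a step size of order 1/(Lm) and tracks the best iterate; the data, width, and number of iterations are hand-picked illustrative values, not the exact η, T, m prescribed by Theorem 3.3.

```python
import numpy as np

rng = np.random.default_rng(2)
n, d, m, L = 64, 10, 512, 2                       # samples, dim, width, depth (toy values)
X = rng.standard_normal((n, d)); X /= np.linalg.norm(X, axis=1, keepdims=True)
y = np.sign(X[:, 0] + 1e-12)                      # a simple separable labeling
W1 = rng.normal(0.0, np.sqrt(2.0 / m), size=(m, d))
W2 = rng.normal(0.0, np.sqrt(1.0 / m), size=m)

eta = 1.0 / (L * m)                               # step size of order 1/(L m)
T, best = 2000, np.inf
for t in range(T):
    pre = X @ W1.T                                # (n, m) pre-activations
    H = np.maximum(pre, 0.0)
    z = y * (np.sqrt(m) * (H @ W2))               # margins y_i * f_W(x_i)
    loss = np.mean(np.logaddexp(0.0, -z))         # cross-entropy training loss L_S(W^(t))
    best = min(best, loss)                        # keep track of the best iterate
    lp = -np.exp(-np.logaddexp(0.0, z))           # l'(z) = -1 / (1 + e^z) < 0
    coef = (lp * y) / n
    gW2 = np.sqrt(m) * (H.T @ coef)
    gW1 = np.sqrt(m) * ((coef[:, None] * (pre > 0) * W2[None, :]).T @ X)
    W1 -= eta * gW1
    W2 -= eta * gW2
print(f"best average training loss over {T} GD steps: {best:.4f}")
```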
6 CONCLUSION
In this paper, we established the global convergence and generalization error bounds of GD and SGD for training deep ReLU networks for the binary classification problem. We show that a network width condition that is polylogarithmic in the sample size n and the inverse of the target error ε^{−1} is sufficient to guarantee the learning of deep ReLU networks. Our results resolve an open question raised in Ji and Telgarsky (2020).
ACKNOWLEDGEMENT
We would like to thank the anonymous reviewers for their helpful comments. ZC, YC and QG are partially supported by the National Science Foundation CAREER Award 1906169, IIS-2008981 and Salesforce Deep Learning Research Award. DZ is supported by the Bloomberg Data Science Ph.D. Fellowship. The views and conclusions contained in this paper are those of the authors and should not be interpreted as representing any funding agencies.
A PROOF OF MAIN THEOREMS
In this section we provide the full proof of Theorems 3.3, 3.4 and 3.5.
A.1 PROOF OF THEOREM 3.3
We first provide the following lemma which is useful in the subsequent proof.
Lemma A.1 (Lemmas 4.1 and B.3 in Cao and Gu (2019)). There exists an absolute constant κ such that, with probability at least 1 − O(nL²)·exp[−Ω(mτ^{2/3}L)], for any τ ≤ κL^{−6}[log(m)]^{−3/2}, it holds that
ε_app(τ) ≤ Õ(τ^{4/3}L³m^{1/2}), M(τ) ≤ Õ(√m).
Proof of Theorem 3.3. Recall that W* is chosen such that
(1/n) ∑_{i=1}^n ℓ(y_i F_{W^(0),W*}(x_i)) = ε_NTRF
and W* ∈ B(W^(0), Rm^{−1/2}). Note that to apply Lemma 5.1, we need the region B(W^(0), τ) to include both W* and {W^(t)}_{t=0,...,t′}. This motivates us to set τ = Õ(L^{1/2}m^{−1/2}R), which is slightly larger than m^{−1/2}R. With this choice of τ, by Lemma A.1 we have ε_app(τ) = Õ(τ^{4/3}m^{1/2}L³) = Õ(R^{4/3}L^{11/3}m^{−1/6}). Therefore, we can set
m = Ω̃(R⁸L²²) (A.1)
to ensure that ε_app(τ) ≤ 1/8, where Ω̃(·) hides polylogarithmic dependencies on the network depth L, the NTRF function class size R, and the failure probability parameter δ. Then by Lemma 5.1, with probability at least 1 − δ we have
‖W^(0) − W*‖_F² − ‖W^(t′) − W*‖_F² ≥ η ∑_{t=0}^{t′−1} L_S(W^(t)) − 2t′η·ε_NTRF (A.2)
as long as W^(0), ..., W^(t′−1) ∈ B(W^(0), τ). In the following proof we choose η = Θ(L^{−1}m^{−1}) and T = ⌈LR²m^{−1}η^{−1}ε_NTRF^{−1}⌉.
We prove the theorem by two steps: 1) we show that all iterates tWp0q, ¨ ¨ ¨ ,WpT qu will stay inside the region BpWp0q, τq; and 2) we show that GD can find a neural network with at most 3 NTRF training loss within T iterations.
All iterates stay inside B(W^(0), τ). We prove this part by induction. Specifically, given t′ ≤ T, we assume the hypothesis W^(t) ∈ B(W^(0), τ) holds for all t < t′ and prove that W^(t′) ∈ B(W^(0), τ). First, it is clear that W^(0) ∈ B(W^(0), τ). Then by (A.2) and the fact that L_S(W) ≥ 0, we have
‖W^(t′) − W*‖_F² ≤ ‖W^(0) − W*‖_F² + 2ηt′·ε_NTRF.
Note that T = ⌈LR²m^{−1}η^{−1}ε_NTRF^{−1}⌉ and W* ∈ B(W^(0), R·m^{−1/2}); we have
∑_{l=1}^L ‖W_l^(t′) − W_l*‖_F² = ‖W^(t′) − W*‖_F² ≤ CLR²m^{−1},
where C ≥ 4 is an absolute constant. Therefore, by the triangle inequality, we further have the following for all l ∈ [L]:
‖W_l^(t′) − W_l^(0)‖_F ≤ ‖W_l^(t′) − W_l*‖_F + ‖W_l^(0) − W_l*‖_F ≤ √(CL)·Rm^{−1/2} + Rm^{−1/2} ≤ 2√(CL)·Rm^{−1/2}. (A.3)
Therefore, it is clear that ‖W_l^(t′) − W_l^(0)‖_F ≤ 2√(CL)·Rm^{−1/2} ≤ τ based on our choice of τ previously. This completes the proof of the first part.
Convergence of gradient descent. (A.2) implies
‖W^(0) − W*‖_F² − ‖W^(T) − W*‖_F² ≥ η ( ∑_{t=0}^{T−1} L_S(W^(t)) − 2T·ε_NTRF ).
Dividing both sides by ηT, we get
(1/T) ∑_{t=0}^{T−1} L_S(W^(t)) ≤ ‖W^(0) − W*‖_F²/(ηT) + 2ε_NTRF ≤ LR²m^{−1}/(ηT) + 2ε_NTRF ≤ 3ε_NTRF,
where the second inequality is by the fact that W* ∈ B(W^(0), R·m^{−1/2}) and the last inequality is by our choices of T and η, which ensure that Tη ≥ LR²m^{−1}ε_NTRF^{−1}. Notice that T = ⌈LR²m^{−1}η^{−1}ε_NTRF^{−1}⌉ = O(L²R²ε_NTRF^{−1}). This completes the proof of the second part, and we are able to complete the proof.
A.2 PROOF OF THEOREM 3.4
Following Cao and Gu (2020), we first introduce the definition of surrogate loss of the network, which is defined by the derivative of the loss function.
Definition A.2. We define the empirical surrogate error E_S(W) and the population surrogate error E_D(W) as follows:
E_S(W) := −(1/n) ∑_{i=1}^n ℓ′[y_i · f_W(x_i)],  E_D(W) := E_{(x,y)∼D}[ −ℓ′(y · f_W(x)) ].
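As a small illustration of Definition A.2, the empirical surrogate error only depends on the margins y_i·f_W(x_i); the snippet below is a minimal sketch in which the margins are hypothetical placeholder values.

```python
import numpy as np

def empirical_surrogate_error(margins):
    """E_S(W) = -(1/n) * sum_i l'(y_i f_W(x_i)) for the cross-entropy loss,
    for which -l'(z) = 1 / (1 + e^z)."""
    return float(np.mean(1.0 / (1.0 + np.exp(margins))))

z = np.array([2.3, -0.4, 1.1, 0.7])          # hypothetical margins y_i * f_W(x_i)
E_S = empirical_surrogate_error(z)
zero_one = float(np.mean(z < 0))             # empirical 0-1 error
print(E_S, zero_one, zero_one <= 2 * E_S)    # uses 1{z < 0} <= -2 l'(z)
```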
The following lemma gives uniform-convergence type of results for ESpWq utilizing the fact that ´`1p¨q is bounded and Lipschitz continuous.
Lemma A.3. For any R̃, δ > 0, suppose that m = Ω̃(L¹²R̃²)·[log(1/δ)]^{3/2}. Then with probability at least 1 − δ, it holds that
|E_D(W) − E_S(W)| ≤ Õ( min{ 4^L L^{3/2} R̃ √(m/n), L R̃/√n + L³ R̃^{4/3}/m^{1/6} } ) + O( √(log(1/δ)/n) )
We are now ready to prove Theorem 3.4, which combines the trajectory distance analysis in the proof of Theorem 3.3 with Lemma A.3.
Proof of Theorem 3.4. With exactly the same proof as Theorem 3.3, by (A.3) and induction we have W^(0), W^(1), ..., W^(T) ∈ B(W^(0), R̃m^{−1/2}) with R̃ = O(√L·R). Therefore by Lemma A.3, we have
|E_D(W^(t)) − E_S(W^(t))| ≤ Õ( min{ 4^L L² R √(m/n), L^{3/2}R/√n + L^{11/3}R^{4/3}/m^{1/6} } ) + O( √(log(1/δ)/n) )
for all t = 0, 1, ..., T. Note that we have 1{z < 0} ≤ −2ℓ′(z). Therefore,
L^{0−1}_D(W^(t)) ≤ 2E_D(W^(t)) ≤ 2L_S(W^(t)) + Õ( min{ 4^L L² R √(m/n), L^{3/2}R/√n + L^{11/3}R^{4/3}/m^{1/6} } ) + O( √(log(1/δ)/n) )
for t = 0, 1, ..., T, where the last inequality is by E_S(W) ≤ L_S(W), because −ℓ′(z) ≤ ℓ(z) for all z ∈ R. This finishes the proof.
A.3 PROOF OF THEOREM 3.5
In this section we provide the full proof of Theorem 3.5. We first give the following result, which is the counterpart of Lemma 5.1 for SGD. Again we pick W* ∈ B(W^(0), Rm^{−1/2}) such that the loss of the corresponding NTRF model F_{W^(0),W*}(x) achieves ε_NTRF.
Lemma A.4. Set η = O(L^{−1}M(τ)^{−2}). Suppose that W* ∈ B(W^(0), τ) and W^(n′) ∈ B(W^(0), τ) for all 0 ≤ n′ ≤ n − 1. Then it holds that
‖W^(0) − W*‖_F² − ‖W^(n′) − W*‖_F² ≥ (3/2 − 4ε_app(τ))·η·∑_{i=1}^{n′} L_i(W^(i−1)) − 2nη·ε_NTRF.
Our proof is based on the application of Lemma A.4 and an online-to-batch conversion argument (Cesa-Bianchi et al., 2004; Cao and Gu, 2019; Ji and Telgarsky, 2020). We introduce a surrogate loss E_i(W) = −ℓ′[y_i · f_W(x_i)] and its population version E_D(W) = E_{(x,y)∼D}[ −ℓ′(y · f_W(x)) ], which have been used in (Ji and Telgarsky, 2018; Cao and Gu, 2019; Nitanda and Suzuki, 2019; Ji and Telgarsky, 2020).
Proof of Theorem 3.5. Recall that W* is chosen such that
(1/n) ∑_{i=1}^n ℓ(y_i F_{W^(0),W*}(x_i)) = ε_NTRF
and W* ∈ B(W^(0), Rm^{−1/2}). To apply Lemma A.4, we need the region B(W^(0), τ) to include both W* and all the SGD iterates {W^(n′)}_{n′=0,...,n−1}. This motivates us to set τ = Õ(L^{1/2}m^{−1/2}R), which is slightly larger than m^{−1/2}R. With this choice of τ, by Lemma A.1 we have ε_app(τ) = Õ(τ^{4/3}m^{1/2}L³) = Õ(R^{4/3}L^{11/3}m^{−1/6}). Therefore, we can set
m = Ω̃(R⁸L²²)
to ensure that ε_app(τ) ≤ 1/8, where Ω̃(·) hides polylogarithmic dependencies on the network depth L, the NTRF function class size R, and the failure probability parameter δ.
Then by Lemma A.4, we have with probability at least 1 − δ,
‖W^(0) − W*‖_F² − ‖W^(n′) − W*‖_F² ≥ η ∑_{i=1}^{n′} L_i(W^(i−1)) − 2nη·ε_NTRF (A.4)
as long as W^(0), ..., W^(n′−1) ∈ B(W^(0), τ).
We then prove Theorem 3.5 in two steps: 1) all iterates stay inside BpWp0q, τq; and 2) convergence of online SGD.
All iterates stay inside B(W^(0), τ). Similar to the proof of Theorem 3.3, we prove this part by induction. Assuming W^(i) ∈ B(W^(0), τ) for all i ≤ n′ − 1, by (A.4) we have
‖W^(n′) − W*‖_F² ≤ ‖W^(0) − W*‖_F² + 2nη·ε_NTRF ≤ LR²·m^{−1} + 2nη·ε_NTRF,
where the last inequality is by W* ∈ B(W^(0), Rm^{−1/2}). Then by the triangle inequality, we further get
‖W_l^(n′) − W_l^(0)‖_F ≤ ‖W_l^(n′) − W_l*‖_F + ‖W_l* − W_l^(0)‖_F ≤ ‖W^(n′) − W*‖_F + ‖W_l* − W_l^(0)‖_F ≤ O( √L·Rm^{−1/2} + √(nη·ε_NTRF) ).
Then by our choice of η = Θ( m^{−1}·(LR²n^{−1}ε_NTRF^{−1} ∧ L^{−1}) ), we have ‖W^(n′) − W^(0)‖_F ≤ 2√L·Rm^{−1/2} ≤ τ. This completes the proof of the first part.
Convergence of online SGD. By (A.4), we have
‖W^(0) − W*‖_F² − ‖W^(n) − W*‖_F² ≥ η ( ∑_{i=1}^n L_i(W^(i−1)) − 2n·ε_NTRF ).
Dividing both sides by ηn and rearranging terms, we get
(1/n) ∑_{i=1}^n L_i(W^(i−1)) ≤ ( ‖W^(0) − W*‖_F² − ‖W^(n) − W*‖_F² )/(ηn) + 2ε_NTRF ≤ L²R²/n + 3ε_NTRF,
where the second inequality follows from the facts that W* ∈ B(W^(0), R·m^{−1/2}) and η = Θ( m^{−1}·(LR²n^{−1}ε_NTRF^{−1} ∧ L^{−1}) ). By Lemma 4.3 in (Ji and Telgarsky, 2020) and the fact that E_i(W^(i−1)) ≤ L_i(W^(i−1)), we have
(1/n) ∑_{i=1}^n L^{0−1}_D(W^(i−1)) ≤ (2/n) ∑_{i=1}^n E_D(W^(i−1)) ≤ (8/n) ∑_{i=1}^n E_i(W^(i−1)) + 8log(1/δ)/n ≤ 8L²R²/n + 8log(1/δ)/n + 24ε_NTRF.
This completes the proof of the second part.
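The online SGD procedure analyzed above visits each example exactly once. The sketch below mirrors that single pass on a toy two-layer ReLU network and records the per-round surrogate errors used in the online-to-batch conversion; all constants are illustrative and are not the step size prescribed by Theorem 3.5.

```python
import numpy as np

rng = np.random.default_rng(3)
n, d, m = 256, 10, 512
X = rng.standard_normal((n, d)); X /= np.linalg.norm(X, axis=1, keepdims=True)
y = np.sign(X[:, 0] + 1e-12)
W1 = rng.normal(0.0, np.sqrt(2.0 / m), size=(m, d))
W2 = rng.normal(0.0, np.sqrt(1.0 / m), size=m)
eta = 1.0 / (2.0 * m)                          # illustrative step size of order 1/(L m)

surrogate = []                                 # E_i(W^(i-1)) collected online
for i in range(n):                             # one pass: example i updates W^(i-1) -> W^(i)
    x, yi = X[i], y[i]
    pre = W1 @ x
    h = np.maximum(pre, 0.0)
    z = yi * np.sqrt(m) * float(W2 @ h)        # margin of the current iterate on (x_i, y_i)
    s = float(np.exp(-np.logaddexp(0.0, z)))   # -l'(z) = 1 / (1 + e^z)
    surrogate.append(s)
    coef = -s * yi                             # l'(z) * y_i
    gW2 = np.sqrt(m) * coef * h
    gW1 = np.sqrt(m) * coef * (W2 * (pre > 0))[:, None] * x[None, :]
    W1 -= eta * gW1
    W2 -= eta * gW2
print("average online surrogate error:", float(np.mean(surrogate)))
```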
B PROOF OF RESULTS IN SECTION 4
B.1 PROOF OF PROPOSITION 4.2
We first provide the following lemma which gives an upper bound on the neural network output at initialization.
Lemma B.1 (Lemma 4.4 in Cao and Gu (2019)). Under Assumption 3.1, if m ≥ C̄L·log(nL/δ) for some absolute constant C̄, then with probability at least 1 − δ we have
|f_{W^(0)}(x_i)| ≤ C√(log(n/δ))
for some absolute constant C.
Proof of Proposition 4.2. Under Assumption 4.1, we can find a collection of matrices U* = {U*_1, ..., U*_L} with ∑_{l=1}^L ‖U*_l‖_F² = 1 such that y_i⟨∇f_{W^(0)}(x_i), U*⟩ ≥ m^{1/2}γ for at least a 1 − ρ fraction of the training data. By Lemma B.1, for all i ∈ [n] we have |f_{W^(0)}(x_i)| ≤ C√(log(n/δ)) for some absolute constant C. Then for any positive constant λ, we have, for at least a 1 − ρ fraction of the data,
y_i ( f_{W^(0)}(x_i) + ⟨∇f_{W^(0)}, λU*⟩ ) ≥ m^{1/2}λγ − C√(log(n/δ)).
For this fraction of the data, we can set
λ = C′[ log^{1/2}(n/δ) + log(1/ε) ] / (m^{1/2}γ),
where C′ is an absolute constant, and get
m^{1/2}λγ − C√(log(n/δ)) ≥ log(1/ε).
Now we let W* = W^(0) + λU*. By the choice of R in Proposition 4.2, we have W* ∈ B(W^(0), R·m^{−1/2}). The above inequality implies that for at least a 1 − ρ fraction of the data, we have ℓ(y_i F_{W^(0),W*}(x_i)) ≤ ε. For the remaining data, we have
y_i ( f_{W^(0)}(x_i) + ⟨∇f_{W^(0)}, λU*⟩ ) ≥ −C√(log(n/δ)) − λ‖∇f_{W^(0)}‖_2² ≥ −C_1 R
for some absolute positive constant C_1, where the last inequality follows from the fact that ‖∇f_{W^(0)}‖_2 = Õ(m^{1/2}) (see Lemma A.1 for details). Then, since we use the cross-entropy loss, it follows that for this fraction of the training data we have ℓ(y_i F_{W^(0),W*}(x_i)) ≤ C_2 R for some constant C_2. Combining the results for these two fractions of the training data, we can conclude that
ε_NTRF ≤ n^{−1} ∑_{i=1}^n ℓ(y_i F_{W^(0),W*}(x_i)) ≤ (1 − ρ)ε + ρ·O(R).
This completes the proof.
B.2 PROOF OF PROPOSITION 4.4
Proof of Proposition 4.4. We are going to prove that Assumption 4.3 implies the existence of a good function in the NTRF function class.
By Definition 3.2 and the definition of the cross-entropy loss, our goal is to prove that there exists a collection of matrices W = {W_1, W_2} satisfying max{ ‖W_1 − W_1^(0)‖_F, ‖W_2 − W_2^(0)‖_2 } ≤ R·m^{−1/2} such that
y_i·[ f_{W^(0)}(x_i) + ⟨∇_{W_1}f_{W^(0)}, W_1 − W_1^(0)⟩ + ⟨∇_{W_2}f_{W^(0)}, W_2 − W_2^(0)⟩ ] ≥ log(2/ε).
We first consider ∇_{W_1}f_{W^(0)}(x_i), which has the form
( ∇_{W_1}f_{W^(0)}(x_i) )_j = m^{1/2} · w_{2,j}^(0) · σ′(⟨w_{1,j}^(0), x_i⟩) · x_i.
Note that w_{2,j}^(0) and w_{1,j}^(0) are independently generated from N(0, 1/m) and N(0, 2I/m) respectively; thus we have P(|w_{2,j}^(0)| ≥ 0.47m^{−1/2}) ≥ 1/2. By Hoeffding's inequality, we know that with probability at least 1 − exp(−m/8) there are at least m/4 nodes, whose union is denoted by S, satisfying |w_{2,j}^(0)| ≥ 0.47m^{−1/2}. Then we only focus on the nodes in the set S. Note that W_1^(0) and W_2^(0) are independently generated. Then by Assumption 4.3 and Hoeffding's inequality, there exists a function u(·): R^d → R^d such that with probability at least 1 − δ′,
(1/|S|) ∑_{j∈S} y_i · ⟨u(w_{1,j}^(0)), x_i⟩ · σ′(⟨w_{1,j}^(0), x_i⟩) ≥ γ − √( 2log(1/δ′)/|S| ).
Define v_j = u(w_{1,j}^(0))/w_{2,j} if |w_{2,j}| ≥ 0.47m^{−1/2} and v_j = 0 otherwise. Then we have
∑_{j=1}^m y_i · w_{2,j}^(0) · ⟨v_j, x_i⟩ · σ′(⟨w_{1,j}^(0), x_i⟩) = ∑_{j∈S} y_i · ⟨u(w_{1,j}^(0)), x_i⟩ · σ′(⟨w_{1,j}^(0), x_i⟩) ≥ |S|γ − √( 2|S|·log(1/δ′) ).
Setting δ = 2nδ′ and applying a union bound, we have with probability at least 1 − δ/2,
∑_{j=1}^m y_i · w_{2,j}^(0) · ⟨v_j, x_i⟩ · σ′(⟨w_{1,j}^(0), x_i⟩) ≥ |S|γ − √( 2|S|·log(2n/δ) ).
Moreover, with probability at least 1 − exp(−m/8) we have |S| ≥ m/4. In addition, in Assumption 4.3, by y_i ∈ {±1} and |σ′(·)|, ‖u(·)‖_2, ‖x_i‖_2 ≤ 1 for i = 1, ..., n, we see that γ ≤ 1. Then if m ≥ 32log(n/δ)/γ², with probability at least 1 − δ/2 − exp( −4log(n/δ)/γ² ) ≥ 1 − δ,
∑_{j=1}^m y_i · w_{2,j}^(0) · ⟨v_j, x_i⟩ · σ′(⟨w_{1,j}^(0), x_i⟩) ≥ |S|γ/2.
Let U = (v_1, v_2, ..., v_m)^⊤ / √(m|S|); we have
y_i ⟨∇_{W_1}f_{W^(0)}(x_i), U⟩ = (1/√|S|) ∑_{j=1}^m y_i · w_{2,j}^(0) · ⟨v_j, x_i⟩ · σ′(⟨w_{1,j}^(0), x_i⟩) ≥ √|S|·γ/2 ≥ m^{1/2}γ/4,
where the last inequality is by the fact that |S| ≥ m/4. Besides, note that by concentration and a Gaussian tail bound, we have |f_{W^(0)}(x_i)| ≤ C√(log(n/δ)) for some absolute constant C. Therefore, letting W_1 = W_1^(0) + 4( log(2/ε) + C√(log(n/δ)) )·m^{−1/2}U/γ and W_2 = W_2^(0), we have
y_i·[ f_{W^(0)}(x_i) + ⟨∇_{W_1}f_{W^(0)}, W_1 − W_1^(0)⟩ + ⟨∇_{W_2}f_{W^(0)}, W_2 − W_2^(0)⟩ ] ≥ log(2/ε). (B.1)
Since ‖u(·)‖_2 ≤ 1, we have ‖U‖_F ≤ 1/0.47 ≤ 2.2. Therefore, we further have ‖W_1 − W_1^(0)‖_F ≤ 8.8γ^{−1}( log(2/ε) + C√(log(n/δ)) )·m^{−1/2}. This implies that W ∈ B(W^(0), R) with R = O( log(n/(δε))/γ ). Applying the inequality ℓ(log(2/ε)) ≤ ε to (B.1) gives ℓ(y_i·F_{W^(0),W}(x_i)) ≤ ε
B.3 PROOF OF PROPOSITION 4.6
Based on our theoretical analysis, the major goal is to show that there exist certain choices of R and m such that the best NTRF model in the function class F(W^(0), R) can achieve ε training error. In this proof, we will prove a stronger result by showing that, given the quantities of R and m specified in Proposition 4.6, there exists an NTRF model with parameter W* that satisfies n^{−1}∑_{i=1}^n ℓ(y_i F_{W^(0),W*}(x_i)) ≤ ε. In order to do so, we consider training the NTRF model via a different surrogate loss function. Specifically, we consider the squared hinge loss ℓ̃(x) = ( max{λ − x, 0} )², where λ denotes the target margin. In the later proof, we choose λ = log(1/ε) + 1 so that the condition ℓ̃(x) ≤ 1 guarantees that x ≥ log(1/ε). Moreover, we consider using gradient flow, i.e., gradient descent with infinitesimal step size, to train the NTRF model. Therefore, in the remaining part of the proof, we consider optimizing the NTRF parameter W with the loss function
L̃_S(W) = (1/n) ∑_{i=1}^n ℓ̃( y_i F_{W^(0),W}(x_i) ).
Moreover, for simplicity, we only consider optimizing the parameters in the last hidden layer (i.e., W_{L−1}). Then the gradient flow can be formulated as
dW_{L−1}(t)/dt = −∇_{W_{L−1}} L̃_S(W(t)),  dW_l(t)/dt = 0 for any l ≠ L − 1.
Note that the NTRF model is a linear model; thus by Definition 3.2, we have
∇_{W_{L−1}} L̃_S(W(t)) = (1/n) ∑_{i=1}^n y_i ℓ̃′( y_i F_{W^(0),W(t)}(x_i) ) · ∇_{W_{L−1}} F_{W^(0),W(t)}(x_i) = (1/n) ∑_{i=1}^n y_i ℓ̃′( y_i F_{W^(0),W(t)}(x_i) ) · ∇_{W_{L−1}^(0)} f_{W^(0)}(x_i). (B.2)
Then it is clear that each summand of ∇_{W_{L−1}} L̃_S(W(t)) has a fixed direction throughout the optimization. In order to prove the convergence of gradient flow and characterize the quantity of R, we rely on Lemma B.1, which gives an upper bound on the network output at initialization, and on the following lemma, which characterizes a lower bound on the Frobenius norm of the partial gradient ∇_{W_{L−1}} L̃_S(W).
Lemma B.2 (Lemma B.5 in Zou et al. (2019)). Under Assumptions 3.1 and 4.5, if m = Ω̃(n²φ^{−1}), then for all t ≥ 0, with probability at least 1 − exp( −O(mφ/n) ), there exists a positive constant C such that
‖∇_{W_{L−1}} L̃_S(W(t))‖_F² ≥ (Cmφ/n⁵) · [ ∑_{i=1}^n ℓ̃′( y_i F_{W^(0),W(t)}(x_i) ) ]².
We slightly modified the original version of this lemma since we use a different model (we consider the NTRF model, while Zou et al. (2019) consider the neural network model). However, by (B.2), the gradient ∇_{W_{L−1}} L̃_S(W) can be regarded as a gradient of the same type as that of the neural network model at initialization (i.e., ∇_{W_{L−1}} L_S(W^(0))), so the lemma remains valid. Now we are ready to present the proof.
Proof of Proposition 4.6. Recall that we only consider training the last hidden layer weights, i.e., W_{L−1}, via gradient flow with the squared hinge loss, and our goal is to prove that gradient flow is able to find an NTRF model within the function class F(W^(0), R) around the initialization, i.e., achieving n^{−1}∑_{i=1}^n ℓ(y_i F_{W^(0),W*}(x_i)) ≤ ε. Let W(t) be the weights at time t; gradient flow implies that
dL̃_S(W(t))/dt = −‖∇_{W_{L−1}} L̃_S(W(t))‖_F² ≤ −(Cmφ/n⁵) · ( ∑_{i=1}^n ℓ̃′( y_i F_{W^(0),W(t)}(x_i) ) )² ≤ −(4Cmφ/n³) · L̃_S(W(t)),
where the first equality is due to the fact that we only train the last hidden layer, the first inequality is by Lemma B.2, and the last step follows from the fact that ℓ̃′(·) = −2√(ℓ̃(·)). Solving the above inequality gives
L̃_S(W(t)) ≤ L̃_S(W(0)) · exp( −4Cmφt/n³ ). (B.3)
Then, setting T = O( n³m^{−1}φ^{−1}·log( L̃_S(W(0))/ε′ ) ) and ε′ = 1/n, we have L̃_S(W(T)) ≤ ε′. It then follows that ℓ̃( y_i F_{W^(0),W(T)}(x_i) ) ≤ 1, which implies that y_i F_{W^(0),W(T)}(x_i) ≥ log(1/ε) and thus n^{−1}∑_{i=1}^n ℓ( y_i F_{W^(0),W(T)}(x_i) ) ≤ ε. Therefore, W(T) is exactly the NTRF model we are looking for.
The next step is to characterize the distance between W(T) and W(0) in order to characterize the quantity of R. Since ‖∇_{W_{L−1}} L̃_S(W(t))‖_F² ≥ 4Cmφ·L̃_S(W(t))/n³, we have
d√(L̃_S(W(t)))/dt = −‖∇_{W_{L−1}} L̃_S(W(t))‖_F² / ( 2√(L̃_S(W(t))) ) ≤ −‖∇_{W_{L−1}} L̃_S(W(t))‖_F · C^{1/2}m^{1/2}φ^{1/2}/n^{3/2}.
Taking the integral on both sides and rearranging terms, we have
∫_0^T ‖∇_{W_{L−1}} L̃_S(W(t))‖_F dt ≤ ( n^{3/2}/(C^{1/2}m^{1/2}φ^{1/2}) ) · ( √(L̃_S(W(0))) − √(L̃_S(W(t))) ).
Note that the L.H.S. of the above inequality is an upper bound of ‖W(t) − W(0)‖_F; we have for any t ≥ 0,
‖W(t) − W(0)‖_F ≤ ( n^{3/2}/(C^{1/2}m^{1/2}φ^{1/2}) ) · √(L̃_S(W(0))) = O( n^{3/2}·log(n/(δε)) / (m^{1/2}φ^{1/2}) ),
where the second inequality is by Lemma B.1 and our choice of λ = log(1/ε) + 1. This implies that there exists a point W* within the class F(W^(0), R) with
R = O( n^{3/2}·log(n/(δε)) / φ^{1/2} )
such that
ε_NTRF := n^{−1} ∑_{i=1}^n ℓ( y_i F_{W^(0),W*}(x_i) ) ≤ ε.
Then by Theorem 3.3 and, more specifically, (A.1), we can compute the minimal required neural network width as follows:
m = Ω̃(R⁸L²²) = Ω̃( L²²n¹²/φ⁴ ).
This completes the proof.
C PROOF OF TECHNICAL LEMMAS
Here we provide the proof of Lemmas 5.1, A.3 and A.4.
C.1 PROOF OF LEMMA 5.1
The detailed proof of Lemma 5.1 is given as follows.
Proof of Lemma 5.1. Based on the update rule of gradient descent, i.e., W^(t+1) = W^(t) − η∇_W L_S(W^(t)), we have the following calculation:
‖W^(t) − W*‖_F² − ‖W^(t+1) − W*‖_F² = (2η/n) ∑_{i=1}^n ⟨W^(t) − W*, ∇_W L_i(W^(t))⟩ − η² ∑_{l=1}^L ‖∇_{W_l} L_S(W^(t))‖_F², (C.1)
where we denote the first term on the right-hand side by I_1 and the second by I_2, and the equation follows from the fact that L_S(W^(t)) = n^{−1}∑_{i=1}^n L_i(W^(t)). In what follows, we first bound the term I_1 on the R.H.S. of (C.1) by approximating the neural network functions with linear models. By assumption, for t = 0, ..., t′ − 1, W^(t), W* ∈ B(W^(0), τ). Therefore by the definition of ε_app(τ),
y_i · ⟨∇f_{W^(t)}(x_i), W^(t) − W*⟩ ≤ y_i · ( f_{W^(t)}(x_i) − f_{W*}(x_i) ) + ε_app(τ). (C.2)
Moreover, we also have
0 ≤ y_i · ( f_{W*}(x_i) − f_{W^(0)}(x_i) − ⟨∇f_{W^(0)}(x_i), W* − W^(0)⟩ ) + ε_app(τ) = y_i · ( f_{W*}(x_i) − F_{W^(0),W*}(x_i) ) + ε_app(τ), (C.3)
where the equation follows by the definition of F_{W^(0),W*}(x). Adding (C.3) to (C.2) and canceling the terms y_i · f_{W*}(x_i), we obtain
y_i · ⟨∇f_{W^(t)}(x_i), W^(t) − W*⟩ ≤ y_i · ( f_{W^(t)}(x_i) − F_{W^(0),W*}(x_i) ) + 2ε_app(τ). (C.4)
We can now give a lower bound on the first term on the R.H.S. of (C.1). For i = 1, ..., n, applying the chain rule to the loss function gradients and utilizing (C.4), we have
⟨W^(t) − W*, ∇_W L_i(W^(t))⟩ = ℓ′( y_i f_{W^(t)}(x_i) ) · y_i · ⟨W^(t) − W*, ∇_W f_{W^(t)}(x_i)⟩
 ≥ ℓ′( y_i f_{W^(t)}(x_i) ) · ( y_i f_{W^(t)}(x_i) − y_i F_{W^(0),W*}(x_i) + 2ε_app(τ) )
 ≥ (1 − 2ε_app(τ)) ℓ( y_i f_{W^(t)}(x_i) ) − ℓ( y_i F_{W^(0),W*}(x_i) ), (C.5)
where the first inequality is by the fact that ℓ′( y_i f_{W^(t)}(x_i) ) < 0 together with (C.4), and the second inequality is by convexity of ℓ(·) and the fact that −ℓ′( y_i f_{W^(t)}(x_i) ) ≤ ℓ( y_i f_{W^(t)}(x_i) ).
We now proceed to bound the term I_2 on the R.H.S. of (C.1). Note that ℓ′(·) < 0, and therefore the Frobenius norm of the gradient ∇_{W_l} L_S(W^(t)) can be upper bounded as follows:
‖∇_{W_l} L_S(W^(t))‖_F = ‖ (1/n) ∑_{i=1}^n ℓ′( y_i f_{W^(t)}(x_i) ) ∇_{W_l} f_{W^(t)}(x_i) ‖_F ≤ (1/n) ∑_{i=1}^n −ℓ′( y_i f_{W^(t)}(x_i) ) · ‖∇_{W_l} f_{W^(t)}(x_i)‖_F,
where the inequality follows by the triangle inequality. We now utilize the fact that the cross-entropy loss satisfies the inequalities −ℓ′(·) ≤ ℓ(·) and −ℓ′(·) ≤ 1. Therefore, by the definition of M(τ), we have
∑_{l=1}^L ‖∇_{W_l} L_S(W^(t))‖_F² ≤ O( L·M(τ)² ) · ( (1/n) ∑_{i=1}^n −ℓ′( y_i f_{W^(t)}(x_i) ) )² ≤ O( L·M(τ)² ) · L_S(W^(t)). (C.6)
Then we can plug (C.5) and (C.6) into (C.1) and obtain
‖W^(t) − W*‖_F² − ‖W^(t+1) − W*‖_F² ≥ (2η/n) ∑_{i=1}^n [ (1 − 2ε_app(τ)) ℓ( y_i f_{W^(t)}(x_i) ) − ℓ( y_i F_{W^(0),W*}(x_i) ) ] − O( η²L·M(τ)² ) · L_S(W^(t))
 ≥ [ 3/2 − 4ε_app(τ) ] η L_S(W^(t)) − (2η/n) ∑_{i=1}^n ℓ( y_i F_{W^(0),W*}(x_i) ),
where the last inequality is by η = O(L^{−1}M(τ)^{−2}) and merging the third term on the second line into the first term. Taking the telescope sum from t = 0 to t = t′ − 1 and plugging in the definition (1/n) ∑_{i=1}^n ℓ( y_i F_{W^(0),W*}(x_i) ) = ε_NTRF completes the proof.
C.2 PROOF OF LEMMA A.3
Proof of Lemma A.3. We first denote W = B(W^(0), R̃·m^{−1/2}), and define the corresponding neural network function class and surrogate loss function class as F = { f_W(x) : W ∈ W } and G = { −ℓ′[y·f_W(x)] : W ∈ W }, respectively. By standard uniform convergence results in terms of the empirical Rademacher complexity (Bartlett and Mendelson, 2002; Mohri et al., 2018; Shalev-Shwartz and Ben-David, 2014), with probability at least 1 − δ we have
sup_{W∈W} |E_S(W) − E_D(W)| = sup_{W∈W} | −(1/n) ∑_{i=1}^n ℓ′[ y_i·f_W(x_i) ] + E_{(x,y)∼D} ℓ′[ y·f_W(x) ] | ≤ 2R̂_n(G) + C_1 √( log(1/δ)/n ),
where C_1 is an absolute constant, and
R̂_n(G) = E_{ξ_i∼Unif({±1})} [ sup_{W∈W} (1/n) ∑_{i=1}^n ξ_i ℓ′[ y_i·f_W(x_i) ] ]
is the empirical Rademacher complexity of the function class G. We now provide two bounds on R̂_n(G), whose combination gives the final result of Lemma A.3. First, by Corollary 5.35 in (Vershynin, 2010), with probability at least 1 − L·exp(−Ω(m)), ‖W_l^(0)‖_2 ≤ 3 for all l ∈ [L]. Therefore for all W ∈ W, we have ‖W_l‖_2 ≤ 4. Moreover, standard concentration inequalities on the norm of the first row of W_l^(0) also imply that ‖W_l‖_2 ≥ 0.5 for all W ∈ W and l ∈ [L]. Therefore, an adaptation of the bound in (Bartlett et al., 2017)¶ gives
R̂_n(F) ≤ Õ( sup_{W∈W} { (m^{1/2}/√n) · [ ∏_{l=1}^L ‖W_l‖_2 ] · [ ∑_{l=1}^L ‖W_l^⊤ − W_l^{(0)⊤}‖_{2,1}^{2/3} / ‖W_l‖_2^{2/3} ]^{3/2} } )
 ≤ Õ( sup_{W∈W} { (4^L m^{1/2}/√n) · [ ∑_{l=1}^L ( √m·‖W_l^⊤ − W_l^{(0)⊤}‖_F )^{2/3} ]^{3/2} } )
 ≤ Õ( 4^L L^{3/2} R̃ · √(m/n) ). (C.7)
We now derive the second bound on R̂_n(G), which is inspired by the proof provided in (Cao and Gu, 2020). Since y ∈ {+1, −1}, |ℓ′(z)| ≤ 1 and ℓ′(z) is 1-Lipschitz continuous, by standard empirical Rademacher complexity bounds (Bartlett and Mendelson, 2002; Mohri et al., 2018; Shalev-Shwartz and Ben-David, 2014), we have
R̂_n(G) ≤ R̂_n(F) = E_{ξ_i∼Unif({±1})} [ sup_{W∈W} (1/n) ∑_{i=1}^n ξ_i f_W(x_i) ],
¶Bartlett et al. (2017) only proved the Rademacher complexity bound for the composition of the ramp loss and the neural network function. In our setting essentially the ramp loss is replaced with the ´`1p¨q function, which is bounded and 1-Lipschitz continuous. The proof in our setting is therefore exactly the same as the proof given in (Bartlett et al., 2017), and we can apply Theorem 3.3 and Lemma A.5 in (Bartlett et al., 2017) to obtain the desired bound we present here.
where R̂_n(F) is the empirical Rademacher complexity of the function class F. We have
R̂_n(F) ≤ E_ξ [ sup_{W∈W} (1/n) ∑_{i=1}^n ξ_i ( f_W(x_i) − F_{W^(0),W}(x_i) ) ] + E_ξ [ sup_{W∈W} (1/n) ∑_{i=1}^n ξ_i F_{W^(0),W}(x_i) ],
where we denote the first term on the right-hand side by I_1 and the second by I_2.
looo | 1. What are the key contributions and novel aspects introduced by the paper regarding deep neural networks' generalization properties?
2. How does the reviewer assess the clarity, quality, and organization of the paper's content?
3. Are there any questions or suggestions regarding the presentation of the paper, such as highlighting advantages or providing additional explanations?
4. What are the strengths and weaknesses of the paper compared to prior works, particularly Ji and Telgarsky (2020)?
5. Are there any concerns about specific assumptions made in the paper, such as Assumption 4.1, and their potential impact on practical applications? | Review | Review
The paper analyses the generalization properties of deep neural networks for classification tasks. The authors focus on the Neural Tangent Random Features (NTRF) class of functions to investigate these generalization properties. In particular, the authors study the convergence of gradient descent (and its stochastic version). They establish that the training loss decreases until a certain level, of the order of the best loss achievable in the NTRF class. Then, the authors establish interesting generalization results with respect to the 0-1 loss. This problem has been tackled recently in many papers. However, to establish results similar to those proved in this paper, previous papers required the number of neurons per hidden layer to be polynomial in the number of samples n. The authors show here that a polylogarithmic width is actually sufficient when working with the NTRF class. Such results were obtained recently in Ji and Telgarsky (2020), but only for shallow networks. In this paper, the results are generalized to deep neural networks.
The paper is clearly written and the comparison with the literature is complete and well-organized. The authors have worked hard to present, in a concise way and clearly their results. I appreciate it. The paper is fluid and nice to read.
Here are some remarks that might improve the presentation of the paper:
It would be nice to recall the function σ in Section 2.
In my opinion the remark: " Moreover, while Ji and Telgarsky (2020) essentially required all training data to be separable by a function in the NTRF function class with a constant margin, our result does not require such data separation assumptions, and allows the NTRF function class to misclassify a small proportion of the training data points", is a big advantage of your analysis. Probably, you should spotlight it more clearly.
Page 12 in the proof of Theorem 3.4. In the last inequality of the page, could you add that it holds because −ℓ′(x) ≥ ℓ(x) for all x ∈ R^d.
Proposition 4.2: you use two notations for the proportion of data not satisfying the margin assumption. You should replace all the σ by ρ.
Page 18: "where the equation follows from the fact that" -->
W
0
should be
W
t
Before Equation (C.7). Could you add parenthesis around the product with respect to
l
. Otherwise it is not very clear.
Could you explain Assumption 4.1 a bit more. Is it a strong assumption? Do you have any insight into whether it is verified in practice?
To summarize, I enjoyed reviewing this paper. The results are clear and interesting. The paper deserves to be accepted for publication.
ICLR | Title
Co-Attentive Equivariant Neural Networks: Focusing Equivariance On Transformations Co-Occurring in Data
Abstract
Equivariance is a nice property to have as it produces much more parameter efficient neural architectures and preserves the structure of the input through the feature mapping. Even though some combinations of transformations might never appear (e.g., an upright face with a horizontal nose), current equivariant architectures consider the set of all possible transformations in a transformation group when learning feature representations. Contrarily, the human visual system is able to attend to the set of relevant transformations occurring in the environment and utilizes this information to assist and improve object recognition. Based on this observation, we modify conventional equivariant feature mappings such that they are able to attend to the set of co-occurring transformations in data and generalize this notion to act on groups consisting of multiple symmetries. We show that our proposed co-attentive equivariant neural networks consistently outperform conventional rotation equivariant and rotation & reflection equivariant neural networks on rotated MNIST and CIFAR-10.
1 INTRODUCTION
Thorough experimentation in the fields of psychology and neuroscience has provided support to the intuition that our visual perception and cognition systems are able to identify familiar objects despite modifications in size, location, background, viewpoint and lighting (Bruce & Humphreys, 1994). Interestingly, we are not just able to recognize such modified objects, but are able to characterize which modifications have been applied to them as well. As an example, when we see a picture of a cat, we are not just able to tell that there is a cat in it, but also its position, its size, facts about the lighting conditions of the picture, and so forth. Such observations suggest that the human visual system is equivariant to a large transformation group containing translation, rotation, scaling, among others. In other words, the mental representation obtained by seeing a transformed version of an object, is equivalent to that of seeing the original object and transforming it mentally next.
These fascinating abilities exhibited by biological visual systems have inspired a large field of research towards the development of neural architectures able to replicate them. Among these, the most popular and successful approach is the Convolutional Neural Network (CNN) (LeCun et al., 1989), which incorporates equivariance to translation via convolution. Unfortunately, in counterpart to the human visual system, CNNs do not exhibit equivariance to other transformations encountered in visual data (e.g., rotations). Interestingly, however, if an ordinary CNN happens to learn rotated copies of the same filter, the stack of feature maps becomes equivariant to rotations even though individual feature maps are not (Cohen & Welling, 2016). Since ordinary CNNs must learn such rotated copies independently, they effectively utilize an important number of network parameters suboptimally to this end (see Fig. 3 in Krizhevsky et al. (2012)). Based on the idea that equivariance in CNNs can be extended to larger transformation groups by stacking convolutional feature maps, several approaches have emerged to extend equivariance to, e.g., planar rotations (Dieleman et al., 2016; Marcos et al., 2017; Weiler et al., 2018; Li et al., 2018), spherical rotations (Cohen et al., 2018; Worrall & Brostow, 2018; Cohen et al., 2019), scaling (Marcos et al., 2018; Worrall & Welling, 2019) and general transformation groups (Cohen & Welling, 2016), such that transformed copies of a single entity are not required to be learned independently.
Although incorporating equivariance to arbitrary transformation groups is conceptually and theoretically similar1, the evidence from real-world experience motivating their integration can differ strongly. Several studies in neuroscience and psychology have shown that our visual system does not react equally to all transformations we encounter in visual data. Take, for instance, translation and rotation. Although we easily recognize objects independently of their position of appearance, a large corpus of experimental research has shown that this is not always the case for in-plane rotations. Yin (1969) showed that mono-oriented objects, i.e., complex objects such as faces which are customarily seen in one orientation, are much more difficult to recognize accurately when presented upside-down. This behaviour has been reproduced, among others, for magazine covers (Dallett et al., 1968), symbols (Henle, 1942) and even familiar faces (e.g., from classmates) (Brooks & Goldstein, 1963). Intriguingly, Schwarzer (2000) found that this effect is exacerbated with age (adults suffer from this effect much more than children), but adults are much faster and more accurate in detecting mono-oriented objects in usual orientations. Based on these studies, we draw the following conclusions:
• The human visual system does not perform (fully) equivariant feature transformations to visual data. Consequently, it does not react equally to all possible input transformations encountered in visual data, even if they belong to the same transformation group (e.g., in-plane rotations).
• The human visual system does not just encode familiarity to objects but seems to learn through experience the poses in which these objects customarily appear in the environment to assist and improve object recognition (Freire et al., 2000; Riesenhuber et al., 2004; Sinha et al., 2006).
Complementary studies (Tarr & Pinker, 1989; Oliva & Torralba, 2007) suggest that our visual system encodes orientation atypicality relative to the context rather than on an absolute manner (Fig. 1). Motivated by the aforementioned observations we state the co-occurrence envelope hypothesis:
The Co-occurrence Envelope Hypothesis. By allowing equivariant feature mappings to detect transformations that co-occur in the data and focus learning on the set formed by these co-occurrent transformations (i.e., the co-occurrence envelope of the data), one is able to induce learning of more representative feature representations of the data, and, resultantly, enhance the descriptive power of neural networks utilizing them. We refer to one such feature mapping as co-attentive equivariant.
Identifying the co-occurrence envelope. Consider a rotation equivariant network receiving two copies of the same face (Fig. 2a). A conventional rotation equivariant network is required to perform inference and learning on the set of all possible orientations of the visual patterns constituting a face regardless of the input orientation (Fig. 2b). However, by virtue of its rotation equivariance, it is able to recognize rotated faces even if it is trained on upright faces only. A possible strategy to simplify the task at hand could be to restrict the network to react exclusively to upright faces (Fig. 2c). In this case, the set of relevant visual pattern orientations becomes much smaller, at the expense of disrupting equivariance to the rotation group. Resultantly, the network would risk becoming unable to detect faces in any other orientation than those it is trained on. A better strategy results from restricting the set of relevant pattern orientations by defining them relative to one another
1It is achieved by developing feature mappings that utilize the transformation group in the feature mapping itself (e.g., translating a filter in the course of a feature transformation is used to obtain translation equivariance).
(e.g., mouth orientation relative to the eyes) as opposed to absolutely (e.g., upright mouth) (Fig. 2d). In such a way, we are able to exploit information about orientation co-occurrences in the data without disrupting equivariance. The set of co-occurrent orientations in Fig. 2d corresponds to the co-occurrence envelope of the samples in Fig. 2a for the transformation group defined by rotations.
In this work, we introduce co-attentive equivariant feature mappings and apply them on existing equivariant neural architectures. To this end, we leverage the concept of attention (Bahdanau et al., 2014) to modify existing mathematical frameworks for equivariance, such that co-occurrent transformations can be detected. It is critical not to disrupt equivariance in the attention procedure as to preserve it across the entire network. To this end, we introduce cyclic equivariant self-attention, a novel attention mechanism able to preserve equivariance to cyclic groups.
Experiments and results. We explore the effects of co-attentive equivariant feature mappings for single and multiple symmetry groups. Specifically, we replace conventional rotation equivariant mappings in p4-CNNs (Cohen & Welling, 2016) and DRENs (Li et al., 2018) with co-attentive ones. We show that co-attentive rotation equivariant neural networks consistently outperform their conventional counterparts in fully (rotated MNIST) and partially (CIFAR-10) rotational settings. Subsequently, we generalize cyclic equivariant self-attention to multiple similarity groups and apply it on p4m-CNNs (Cohen & Welling, 2016) (equivariant to rotation and mirror reflections). Our results are in line with those obtained for single symmetry groups and support our stated hypothesis.
Contributions.
• We propose the co-occurrence envelope hypothesis and demonstrate that conventional equivariant mappings are consistently outperformed by our proposed co-attentive equivariant ones.
• We generalize co-attentive equivariant mappings to multiple symmetry groups and provide, to the best of our knowledge, the first attention mechanism acting generally on symmetry groups.
2 PRELIMINARIES
Equivariance. We say that a feature mapping f : X → Y is equivariant to a (transformation) group G (or G-equivariant) if it commutes with the actions of the group G acting on its domain and codomain:
f(T^X_g(x)) = T^Y_g(f(x)) ∀g ∈ G, x ∈ X (1)
where T^{(·)}_g denotes a group action in the corresponding space. In other words, the ordering in which we apply a group action T_g and the feature mapping f is inconsequential. There are multiple reasons why equivariant feature representations are advantageous for learning systems. Since group actions T^X_g produce predictable and interpretable transformations T^Y_g in the feature space, the hypothesis space of the model is reduced (Weiler et al., 2018) and the learning process simplified (Worrall et al., 2017). Moreover, equivariance allows the construction of L-layered networks by
stacking several equivariant feature mappings {f (1), ..., f (l), ..., f (L)} such that the input structure as regarded by the group G is preserved (e.g., CNNs and input translations). As a result, an arbitrary intermediate network representation (f (l) ◦ ... ◦ f (1))(x), l ∈ L, is able to take advantage of the structure of x as well. Invariance is an special case of equivariance in which TYg = IdY , the identity, and thus all group actions in the input space are mapped to the same feature representation.
Equivariant neural networks. In neural networks, the integration of equivariance to arbitrary groups G has been achieved by developing feature mappings f that utilize the actions of the group G in the feature mapping itself. Interestingly, equivariant feature mappings encode equivariance as parameter sharing with respect to G, i.e., the same weights are reused for all g ∈ G. This makes the inclusion of larger groups extremely appealing in the context of parameter efficient networks.
Conventionally, the l-th layer of a neural network receives a signal x^(l)(u, λ) (where u ∈ Z² is the spatial position and λ ∈ Λ_l is the unstructured channel index, e.g., RGB channels in a color image), and applies a feature mapping f^(l) : Z² × Λ_l → Z² × Λ_{l+1} to generate the feature representation x^(l+1)(u, λ). In CNNs, the feature mapping f^(l) := f^(l)_T is defined by a convolution² (⋆_{R²}) between the input signal x^(l) and a learnable convolutional filter W^(l)_{λ′,λ}, λ′ ∈ Λ_l, λ ∈ Λ_{l+1}:
x^(l+1)(u, λ) = [x^(l) ⋆_{R²} W^(l)_{λ′,λ}](u, λ) = ∑_{λ′,u′} x^(l)(u + u′, λ′) W^(l)_{λ′,λ}(u′) (2)
By sliding W^(l)_{λ′,λ} across u, CNNs are able to preserve the spatial structure of the input x through the feature mapping f^(l)_T and successfully provide equivariance to the translation group T = (Z², +).
The underlying idea for the extension of equivariance to larger groups in CNNs is conceptually equivalent to the strategy utilized by LeCun et al. (1989) for translation equivariance. Consider, for instance, the inclusion of equivariance to the set of rotations by θ_r degrees³: Θ = {θ_r = r·(2π/r_max)}_{r=1}^{r_max}. To this end, we modify the feature mapping f^(l) := f^(l)_R : Z² × Θ × Λ_l → Z² × Θ × Λ_{l+1} to include the rotations defined by Θ. Let x^(l)(u, r, λ) and W^(l)_{λ′,λ}(u, r) be the input and the convolutional filter with an affixed index r for rotation. The roto-translational convolution (⋆_{R²⋊Θ}) f^(l)_R is defined as:
x^(l+1)(u, r, λ) = [x^(l) ⋆_{R²⋊Θ} W^(l)_{λ′,λ}](u, r, λ) = ∑_{λ′,r′,u′} x^(l)(u + u′, r′, λ′) W^(l)_{λ′,λ}(θ_r u′, r′ − r) (3)
Learning equivariant neural networks. Consider the change of variables g = u, G = Z2, g ∈ G and g = (u, r), G = Z2oΘ, g ∈ G in Eq. 2 and Eq. 3, respectively. In general, neural networks are learned via backpropagation (LeCun et al., 1989) by iteratively applying the chain rule of derivation to update the network parameters. Intuitively, the networks outlined in Eq. 2 and Eq. 3 obtain feedback from all g ∈ G and, resultantly, are inclined to learn feature representations that perform optimally on the entire group G. However, as outlined in Fig. 2 and Section 1, several of those feature combinations are not likely to simultaneously appear. Resultantly, the hypothesis space of the model as defined by Weiler et al. (2018) might be further reduced.
Note that this reasoning is tightly related to existing explanations for the large success of spatial (Xu et al., 2015; Woo et al., 2018; Zhang et al., 2018) and temporal (Luong et al., 2015; Vaswani et al., 2017; Mishra et al., 2017; Zhang et al., 2018) attention in deep learning architectures.
3 CO-ATTENTIVE EQUIVARIANT NEURAL NETWORKS
In this section we define co-attentive feature mappings and apply them in the context of equivariant neural networks (Figure 3). To this end, we introduce cyclic equivariant self-attention and utilize it to construct co-attentive rotation equivariant neural networks. Subsequently, we show that cyclic equivariant self-attention is extendable to larger symmetry groups and make use of this fact to construct co-attentive neural networks equivariant to rotations and mirror reflections.
2Formally it is as a correlation. However, we hold on to the standard deep learning terminology. 3The reader may easily verify that Θ (and hence Z2 o Θ, with (o) the semi-direct product) forms a group.
3.1 CO-ATTENTIVE ROTATION EQUIVARIANT NEURAL NETWORKS
To allow rotation equivariant networks to utilize and learn co-attentive equivariant representations, we introduce an attention operator A^(l) on top of the roto-translational convolution f^(l)_R with which discernment along the rotation axis r of the generated feature responses x^(l)(u, r, λ) is possible. Formally, our co-attentive rotation equivariant feature mapping f̆^(l)_R is defined as follows:
x^(l+1) = f̆^(l)_R(x^(l)) = A^(l)(f^(l)_R(x^(l))) = A^(l)( [x^(l) ⋆_{R²⋊Θ} W^(l)_{λ′,λ}] ) (4)
Theoretically, A^(l) could be defined globally over f^(l)_R(x^(l)) (i.e., simultaneously along u, r, λ) as depicted in Eq. 4. However, we apply attention locally to: (1) grant the algorithm enough flexibility to attend locally to the co-occurrence envelope of feature representations and, (2) utilize attention exclusively along the rotation axis r, such that our contributions are clearly separated from those possibly emerging from spatial attention. To this end, we apply attention pixel-wise on top of f^(l)_R(x^(l)) (Eq. 5). Furthermore, we assign a single attention instance A^(l)_λ to each learned feature representation and utilize it across the spatial dimension of the output feature maps⁴:
x^(l+1)(u, r, λ) = A^(l)_λ( {x^(l+1)(u, r̂, λ)}_{r̂=1}^{r_max} )(r) (5)
Attention and self-attention. Consider a source vector x = (x1, ..., xn) and a target vector y = (y1, ..., ym). In general, an attention operator A leverages information from the source vector x (or multiple feature mappings thereof) to estimate an attention matrix A ∈ [0, 1]n×m, such that: (1) the elementAi,j provides an importance assessment of the source element xi with reference to the target element yj and (2) the sum of importance over all xi is equal to one: ∑ iAi,j = 1. Subsequently, the matrix A is utilized to modulate the original source vector x as to attend to a subset of relevant source positions with regard to yj : x̃j = (A:,j)T x (where is the Hadamard product). A special case of attention is that of self-attention (Cheng et al., 2016), in which the target and the source vectors are equal (y := x). In other words, the attention mechanism estimates the influence of the sequence x on the element xj for its weighting.
4For a more meticulous discussion on how Eq. 5 attains co-occurrent attention, see Appendix A.
In general, the attention matrix5 A ∈ [0, 1]n×m is constructed via nonlinear space transformations fà : Rn → Rn×m of the source vector x, on top of which the softmax function is applied: A:,j = softmax(fÃ(x):,j). This ensures that the properties previously mentioned hold. Typically, the mappings fà found in literature take feature transformation pairs of x as input (e.g., {s,H} in RNNs (Luong et al., 2015), {Q,K} in self-attention networks (Vaswani et al., 2017)), and perform (non)-linear mappings on top of it, ranging from multiple feed-forward layers (Bahdanau et al., 2014) to several operations between the transformed pairs (Luong et al., 2015; Vaswani et al., 2017; Mishra et al., 2017; Zhang et al., 2018). Due to the computational complexity of these approaches and the fact that we do extensive pixel-wise usage of fà on every network layer, their direct integration in our framework is computationally prohibitive. To circumvent this problem, we modify the usual self-attention formulation as to enhance its descriptive power in a much more compact setting.
Compact local self-attention. Initially, we relax the range of values of A from [0, 1]n×n to Rn×n. This allows us to encode much richer relationships between element pairs (xi, xj) at the cost of less interpretability. Subsequently, we define A = xT Ã, where à ∈ Rn×n is a matrix of learnable parameters. Furthermore, instead of directly applying softmax on the columns of A, we first sum over the contributions of each element xi to obtain a vector a = { ∑ iAi,j}nj=1, which is then passed to the softmax function. Following Vaswani et al. (2017), we prevent the softmax function from reaching regions of low gradient by scaling its argument by ( √ dim(A))−1 = (1/n): ã = softmax((1 / n) a). Lastly, we counteract the contractive behaviour of the softmax function by normalizing ã before weighting x as to preserve the magnitude range of its argument. This allows us to use A in deep architectures. Our compact self-attention mechanism is summarized as follows:
a = {∑_i A_{i,j}}_{j=1}^n = ∑_i (x^⊤Ã)_{i,j} = xÃ (6)
ã = softmax((1/n)·a) (7)
x̂ = A(x) = (ã / max(ã)) ⊙ x (8)
The cyclic equivariant self-attention operator A^C. Consider {x(u, r, λ)}_{r=1}^{r_max}, the vector of responses generated by a roto-translational convolution f_R stacked along the rotation axis r. By applying self-attention along r, we are able to generate an importance matrix A ∈ R^{r_max×r_max} relating all pairs of (θ_i, θ_j)-rotated responses in the rotational group Θ at a certain position. We refer to this attention mechanism as full self-attention (A^F). Although A^F is able to encode arbitrary linear source-target relationships for each target position, it is not restricted to conserve equivariance to Θ. Resultantly, we risk incurring the behavior outlined in Fig. 2c. Before we further elaborate on this issue, we introduce the cyclic permutation operator P_i, which induces a cyclic shift of i positions on its argument: σ_{P_i}(x_j) = x_{(j+i) mod dim(x)}, ∀x_j ∈ x.
Consider a full self-attention operator AF acting on top of a roto-translational convolution fR. Let p be an input pattern to which fR only produces a strong activation in the feature map x(r̂) = fR(p)(r̂), r̂ ∈ {r}rmaxr=1. Intuitively, during learning, only the corresponding attention coefficients Ã;,r̂ in AF would be significantly increased. Now, consider the presence of the input pattern θip, a θi-rotated variant of p. By virtue of the rotational equivariance property of the feature mapping fR, we obtain (locally) an exactly equal response to that of p up to a cyclic permutation of i positions on r, and thus, we obtain a strong activation in the feature map Pi(x(r̂)) = x(σPi(r̂)). We encounter two problems in this setting: AF is not be able to detect that p and θip correspond to the exact same input pattern and, as each but the attention coefficients Ã:,j is small, the network might considerably damp the response generated by θip. As a result, the network might (1) squander important feedback information during learning and (2) induce learning of repeated versions of the same pattern for different orientations. In other words, AF does not behave equivariantly as a function of θi. Interestingly, we are able to introduce prior-knowledge into the attention model by restricting the structure of Ã. By leveraging the idea of equivariance to the cyclic group Cn, we are able to solve the problems exhibited byAF and simultaneously reduce the number of additional parameters required by the self-attention mechanism (from r2max to rmax). Consider again the input patterns p and θip. We incorporate the intuition that p and θip are one and the same entity, and thus, fR (locally) generates the same output feature map up to a cyclic permutation Pi: fR(θip) = Pi(fR(p)). Consequently, the attention mechanism should produce the exact same output for both p and θip up to the same cyclic permutation Pi. In other words,A (and thus Ã) should be equivariant to cyclic permutations.
5Technically, each column of A is restricted to a simplex and hence A lives in a subspace of [0, 1]n×m.
A well-known fact in mathematics is that a matrix A is equivariant with respect to cyclic permutations of the domain if and only if it is circulant (Alkarni, 2001; Åhlander & Munthe-Kaas, 2005). We make use of this certitude and leverage the concept of circulant matrices to impose cyclic equivariance to the structure of Ã. Formally, a circulant matrix C ∈ Rn×n is composed of n cyclic permutations of its defining vector c = {ci}ni=1, such that its j-th column is a cyclic permutation of j − 1 positions of c: C:,j = Pj−1(c)T . We construct our cyclic equivariant self-attention operator AC by defining à as a circulant matrix specified by a learnable attention vector aC = {aCi } rmax i=1:
à = {Pj−1(aC)T }nj=1 (9)
and subsequently applying Eqs. 6 - 8. Resultantly, AC is able to assign the responses generated by fR for rotated versions of an input pattern p to a unique entity: fR(θip) = Pi(fR(p)), and dynamically adjust its output to the angle of appearance θi, such that the attention operation does not disrupt its propagation downstream the network: AC(fR(θip)) = Pi(AC(fR(p))). Consequently, the attention weights aC are updated equally regardless of specific values of θi. Due to these properties, AC does not incur in any of the problems outlined earlier in this section. Conclusively, our co-attentive rotation equivariant feature mapping f (l)R is defined as follows:
x^(l+1)(u, r, λ) = f̆^(l)_R(x^(l))(u, r, λ) = A^{C(l)}_λ( [x^(l) ⋆_{R²⋊Θ} W^(l)_{λ′,λ}] )(u, r, λ) (10)
Note that a co-attentive equivariant feature mapping f̆_R is approximately equal (up to a normalized softmax operation (Eq. 8)) to a conventional equivariant one f_R, if Ã = αI for any α ∈ R.
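To make the construction concrete, the following sketch implements the compact cyclic self-attention of Eqs. 6–9 on a single r_max-dimensional response vector and numerically checks that it commutes with cyclic shifts. It is an illustration only, with random (untrained) attention parameters.

```python
import numpy as np

def cyclic_self_attention(x, a_c):
    """A^C applied to a vector x of responses over r_max orientations.
    A_tilde is the circulant matrix whose j-th column is a cyclic shift of a_c (Eq. 9)."""
    n = x.shape[0]
    A_tilde = np.stack([np.roll(a_c, j) for j in range(n)], axis=1)  # circulant
    a = x @ A_tilde                                                  # Eq. 6
    e = np.exp(a / n - np.max(a / n))
    a_soft = e / e.sum()                                             # Eq. 7
    return (a_soft / a_soft.max()) * x                               # Eq. 8

rng = np.random.default_rng(6)
r_max = 8
x = rng.standard_normal(r_max)          # responses at one pixel, stacked along r
a_c = rng.standard_normal(r_max)        # learnable attention vector a^C
out = cyclic_self_attention(x, a_c)

# Equivariance check: cyclically shifting the responses (a rotated input pattern)
# cyclically shifts the attended output by the same amount.
i = 3
err = np.max(np.abs(cyclic_self_attention(np.roll(x, i), a_c) - np.roll(out, i)))
print("cyclic equivariance error:", float(err))
```

The error printed at the end should be at floating-point precision, reflecting the property A^C(P_i x) = P_i(A^C(x)).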
3.2 EXTENDING AC TO MULTIPLE SYMMETRY GROUPS
The self-attention mechanisms outlined in the previous section are easily extendable to larger groups consisting of multiple symmetries. Consider, for instance, the group θrm of rotations by θr degrees and mirror reflections m defined analogously to the group p4m in Cohen & Welling (2016). Let p(u, r,m, λ) be an input signal with an affixed index m ∈ {m0,m1} for mirror reflections (m1 indicates mirrored) and fθrm be a group convolution (Cohen & Welling, 2016) on the θrm group. The group convolution fθrm produces two times as many output channels (2rmax : m0rmax+m1rmax) as those generated by the roto-translational convolution fR (Eq. 3, Fig. 3).
Full self-attention AF can be integrated directly by modulating the output of fθrm as depicted in Sec. 3.1 with à ∈ R^{2r_max×2r_max}. Here, AF relates each of the group convolution responses with one another. However, just as for fR, AF disrupts the equivariance of fθrm to the θrm group. Similarly, the cyclic equivariant self-attention operator AC can be extended to multiple symmetry groups as well. Before we continue, we introduce the cyclic permutation operator P_{i,t}, which induces a cyclic shift of i positions on its argument along the transformation axis t. Consider the input patterns p and θip outlined in the previous section and mp, a mirrored instance of p. Let x(u, r, m, λ) = fθrm(p)(u, r, m, λ) be the response of the group convolution fθrm for the input pattern p. By virtue of the rotation equivariance of fθrm, the response generated for θip is equivalent to that of p up to a cyclic permutation of i positions along the rotation axis r: fθrm(θip)(u, r, m, λ) = P_{i,r}(fθrm(p))(u, r, m, λ) = x(u, σ_{P_i}(r), m, λ). Similarly, by virtue of the mirror equivariance of fθrm, the response generated by mp is equivalent to that of p up to a cyclic permutation of one position along the mirroring axis m: fθrm(mp)(u, r, m, λ) = P_{1,m}(fθrm(p))(u, r, m, λ) = x(u, r, σ_{P_1}(m), λ). Note that if we take two elements g, h of a group, their composition gh is also an element of the group. As a result, fθrm((mθi)p)(u, r, m, λ) = (P_{1,m} ◦ P_{i,r})(fθrm(p))(u, r, m, λ) = P_{1,m}(P_{i,r}(x))(u, r, m, λ) = P_{1,m}(x)(u, σ_{P_i}(r), m, λ) = x(u, σ_{P_i}(r), σ_{P_1}(m), λ). In other words, in order to extend AC to the θrm group, it is necessary to restrict the structure of à such that it respects the permutation laws imposed by the equivariant mapping fθrm. Let us rewrite x(u, r, m, λ) as x(u, g, λ), g = (m, r) ∈ {m0, m1} × {r̂}_{r̂=1}^{r_max}. In this case, we must impose a circulant block matrix structure on à such that: (1) the composing blocks permute internally as defined by P_{i,r} and (2) the blocks themselves permute with one another as defined by P_{1,m}. Formally, à is defined as:
à = [ Ã1 Ã2 Ã2 Ã1 ] (11)
where Ãi ∈ R^{r_max×r_max}, i ∈ {1, 2}, are circulant matrices (Eq. 9). Importantly, the ordering of the permutation laws in à is interchangeable if the input vector is modified accordingly, i.e., g = (r, m).
In conclusion, cyclic equivariant self-attention AC is directly extendable to act on any G-equivariant feature mapping fG, and for any symmetry group G, provided that the group actions T^Y_g produce cyclic permutations on the codomain of fG. To this end, one must restrict the structure of à to that of a circulant block matrix, such that all permutation laws of T^Y_g hold: T^Y_g(AC(fG)) = AC(T^Y_g(fG)), ∀g ∈ G.
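As an illustration of the multi-symmetry case (again our own sketch, with assumed names and sizes), the block-circulant à of Eq. 11 can be assembled from two circulant blocks, and one can verify that the resulting attention commutes with the combined permutation induced by a rotation and a mirror reflection:

```python
import numpy as np

def circulant(a):
    return np.stack([np.roll(a, j) for j in range(len(a))], axis=1)

def attention(x, A_tilde):
    """Compact self-attention (Eqs. 6-8) with a fixed attention matrix A~."""
    a = np.exp((x @ A_tilde) / len(x))
    a /= a.sum()
    return (a / a.max()) * x

def theta_rm_perm(x, i, mirror, rmax):
    """Group action on the response vector: shift every mirror block by i along r,
    and swap the two mirror blocks if the input was reflected (P_{1,m} o P_{i,r})."""
    blocks = [np.roll(x[:rmax], i), np.roll(x[rmax:], i)]
    if mirror:
        blocks = blocks[::-1]
    return np.concatenate(blocks)

rng = np.random.default_rng(1)
rmax = 4
A1, A2 = circulant(rng.normal(size=rmax)), circulant(rng.normal(size=rmax))
A_tilde = np.block([[A1, A2], [A2, A1]])          # Eq. 11
x = rng.normal(size=2 * rmax)                     # responses indexed by g = (m, r)

for mirror in (False, True):
    for i in range(rmax):
        assert np.allclose(attention(theta_rm_perm(x, i, mirror, rmax), A_tilde),
                           theta_rm_perm(attention(x, A_tilde), i, mirror, rmax))
```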
4 EXPERIMENTS
Experimental Setup. We validate our approach by exploring the effects of co-attentive equivariant feature mappings for single and multiple symmetry groups on existing equivariant architectures. Specifically, we replace conventional rotation equivariant mappings in p4-CNNs (Cohen & Welling, 2016) and DRENs (Li et al., 2018) with co-attentive equivariant ones and evaluate their effects in fully (rotated MNIST) and partially (CIFAR-10) rotational settings. Similarly, we evaluate co-attentive equivariant maps acting on multiple symmetry groups by replacing equivariant mappings in p4m-CNNs (Cohen & Welling, 2016) (equivariant to rotation and mirror reflections). Unless otherwise specified, we replicate as closely as possible the data processing, initialization strategies, hyperparameter values and evaluation strategies utilized by the baselines in our experiments. Note that the goal of this paper is to study and evaluate the relative effects obtained by co-attentive equivariant networks relative to their conventional counterparts. Accordingly, we do not perform any additional tuning relative to the baselines. We believe that improvements over our reported results are feasible by performing further parameter tuning (e.g., on the network structure or the used hyperparameters) on the proposed co-attentive equivariant networks.
The additional learnable parameters, i.e., those associated with the cyclic self-attention operator (Ã), are initialized identically to the rest of the layer. Subsequently, we replace the values of à along the diagonal by 1 (i.e., diag(Ãinit) = 1), such that Ãinit approximately resembles the identity I and, hence, co-attentive equivariant layers are initially approximately equal to equivariant ones.
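A minimal sketch of this initialization (ours; the noise scale and variable names are assumptions, not values from the paper):

```python
import numpy as np

def init_attention_vector(rmax, std=0.05, rng=None):
    """Initialize a^C like an ordinary weight, then set the zero-shift entry to 1.
    Since circulant(a^C)[i, i] = a^C[0], this makes diag(A~_init) = 1, so the
    circulant A~ starts close to the identity and the co-attentive layer initially
    behaves (approximately) like a plain equivariant one."""
    rng = rng or np.random.default_rng()
    a_c = rng.normal(scale=std, size=rmax)   # std is an assumed placeholder value
    a_c[0] = 1.0
    return a_c
```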
Rotated MNIST. The rotated MNIST dataset (Larochelle et al., 2007) contains 62000 gray-scale 28x28 handwritten digits uniformly rotated over the entire circle [0, 2π). The dataset is split into training, validation and test sets of 10000, 2000 and 50000 samples, respectively. We replace rotation equivariant layers in p4-CNN (Cohen & Welling, 2016), DREN and DRENMaxPooling (Li et al., 2018) with co-attentive ones. Our results show that co-attentive equivariant networks consistently outperform conventional ones (see Table 1).
CIFAR-10. The CIFAR-10 dataset (Krizhevsky et al., 2009) consists of 60000 real-world 32x32 RGB images uniformly drawn from 10 classes. In contrast to the rotated MNIST dataset, this dataset does not exhibit rotation symmetry. The dataset is split into training, validation and test sets of 40000, 10000 and 10000 samples, respectively. We replace equivariant layers in the p4 and p4m variations of the All-CNN (Springenberg et al., 2014) and the ResNet44 (He et al., 2016) proposed by Cohen & Welling (2016) with co-attentive ones. Likewise, we modify the r x4-variations of the NIN (Lin et al., 2013) and ResNet20 (He et al., 2016) models proposed by Li et al. (2018) in the same manner. Our results show that co-attentive equivariant networks consistently outperform conventional ones in this setting as well (see Table 1).
Training convergence of equivariant networks. Li et al. (2018) reported that adding too many rotation equivariant (isotonic) layers decreased the performance of their models on CIFAR-10. As a consequence, they did not report results on fully rotation equivariant networks for this setting and attributed this behaviour to the lack of rotational symmetry in the data. We noticed that, with equal initialization strategies, rotation equivariant CNNs were much more prone to divergence than ordinary CNNs. This behaviour can be traced back to the additional feedback resulting from roto-translational convolutions (Eq. 3) compared to ordinary ones (Eq. 2). After further analysis, we noticed that the data preprocessing strategy utilized by Li et al. (2018) leaves some very large outlier values in the data (|x| > 100), which strongly contribute to the behaviour outlined before. In order to evaluate the relative contribution of co-attentive equivariant neural networks, we constructed fully equivariant DREN architectures based on their implementation. Although the obtained results were much worse than those originally reported in Li et al. (2018), we were able to stabilize training by clipping input values to the 99th percentile of the data (|x| ≤ 2.3) and reducing the learning rate to 0.01, such that the same hyperparameters could be used across all network types. The obtained results (see Table 1) indicate that DREN networks are comparatively better than CNNs both in fully and partially rotational settings, in contradiction to the conclusions drawn in Li et al. (2018).
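A minimal sketch of this preprocessing fix (our own; in the experiments the threshold was the 99th percentile of the preprocessed data, ≈2.3 in absolute value):

```python
import numpy as np

def clip_outliers(x, percentile=99.0):
    """Clip the large outliers left by the whitening step to the given percentile of |x|."""
    bound = np.percentile(np.abs(x), percentile)
    return np.clip(x, -bound, bound)
```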
This behaviour illustrates that, although the inclusion of equivariance to larger transformation groups is beneficial both in terms of accuracy and parameter efficiency, one must be aware that such benefits come with an increase in the network's susceptibility to divergence during training.
5 DISCUSSION AND FUTURE WORK
Our results show that co-attentive equivariant feature mappings can be utilized to enhance conventional equivariant ones. Interestingly, co-attentive equivariant mappings are beneficial both in partially and fully rotational settings. We attribute this to the fact that a set of co-occurring orientations between patterns can be easily defined (and exploited) in both settings. It is important to note that we utilized attention independently over each spatial position u on the codomain of the corresponding group convolution. Resultantly, we were restricted to mappings of the form xA, which, in turn, constrains our attention mechanism to have a circulant structure in order to preserve equivariance (since group actions acting in the codomain of the group convolution involve cyclic permutations and cyclic self-attention is applied in the codomain of the group convolution).
In future work, we want to extend the idea presented here to act on the entire group simultaneously (i.e., along u as well). By doing so, we lift our current restriction to mappings of the form xA and, therefore, may be able to develop attention instances with enhanced descriptive power. Following the same line of thought, we want to explore incorporating attention in the convolution operation itself. Resultantly, one is not restricted to act exclusively on the codomain of the convolution, but instead is able to impose structure in the domain of the mapping as well. Naturally, such an approach could lead to enhanced descriptiveness of the incorporated attention mechanism. Moreover, we want to utilize and extend more complex attention strategies (e.g., Bahdanau et al. (2014); Luong et al. (2015); Vaswani et al. (2017); Mishra et al. (2017)) such that they can be applied to large transformation groups without disrupting equivariance. As outlined earlier in Section 3.1, this becomes very challenging from a computational perspective as well, as it requires extensive usage of the corresponding attention mechanism. Resultantly, an efficient implementation thereof is mandatory. Furthermore, we want to extend co-attentive equivariant feature mappings to continuous (e.g., Worrall et al. (2017)) and 3D space (e.g., Cohen et al. (2018); Worrall & Brostow (2018); Cohen et al. (2019)) groups, and to applications other than visual data (e.g., speech recognition).
Finally, we believe that our approach could be refined and extended as a first step towards dealing with the enumeration problem of large groups (Gens & Domingos, 2014), such that functions acting on the group (e.g., group convolution) are approximated by evaluating them on the set of co-occurring transformations as opposed to on the entire group. Such approximations are expected to be very accurate, as non-co-occurrent transformations are rare. This can be thought of as sharpening co-occurrent attention into co-occurrent restriction.
6 CONCLUSION
We have introduced the concept of co-attentive equivariant feature mappings and applied it in the context of equivariant neural networks. By attending to the co-occurrence envelope of the data, we are able to improve upon the performance of conventional equivariant networks in fully (rotated MNIST) and partially (CIFAR-10) rotational settings. We developed cyclic equivariant self-attention, an attention mechanism able to attend to the co-occurrence envelope of the data without disrupting equivariance for a large set of transformation groups (i.e., all transformation groups G whose actions in the codomain of a G-equivariant feature mapping produce cyclic permutations). Our obtained results support the proposed co-occurrence envelope hypothesis.
ACKNOWLEDGMENTS
We gratefully acknowledge Jan Klein, Emile van Krieken, Jakub Tomczak and our anonymous reviewers for their helpful and valuable commentaries. This work is part of the Efficient Deep Learning (EDL) programme (grant number P16-25), which is partly funded by the Dutch Research Council (NWO) and Semiotic Labs.
A OBTAINING CO-OCCURRENT ATTENTION VIA EQUATION 5
In this section, we provide a detailed description of how co-occurrent attention is obtained via the method presented in the paper. Intuitively, a direct approach to address the problem illustrated in the introduction (Section 1) and Figure 2 requires an attention mechanism that acts simultaneously on r and λ (see Eq. 3). However, we illustrate how the depicted problem can be simplified, such that attention along r is sufficient, by taking advantage of the equivariance property of the network.
Let p be the input of a roto-translational convolution fR : Z²×Θ×Λ0 → Z²×Θ×Λ1 as defined in Eq. 3, and Θ be the set of rotations by θr degrees: Θ = {θ_r = r·2π/r_max}_{r=1}^{r_max}. Let fR(p)(u) ∈ R^{r_max×Λ1} be the matrix consisting of the r_max oriented responses for each of the λ ∈ Λ1 learned representations at a certain position u. Since the vectors fR(p)(u, λ) ∈ R^{r_max}, λ ∈ Λ1, permute cyclically as a result of the rotation equivariance property of fR, it is mandatory to ensure equivariance to cyclic permutations for each fR(p)(u, λ) during the course of the attention procedure (see Section 3).
At first sight, one is inclined to think that there is no connection between the multiple vectors fR(p)(u, λ) in fR(p)(u), and, therefore, that in order to exploit co-occurrences one must impose additional constraints along the λ axis. However, there is indeed an implicit restriction in fR(p)(u) along λ resulting from the rotation equivariance property of the mapping fR, of which we can take advantage to simplify the problem at hand. Consider, for instance, the input θip, a θi-rotated version of p. By virtue of the equivariance property of fR, we have (locally) that fR(θip) = Pi(fR(p)). Furthermore, we know that this property must hold for all the learned feature representations fR(p)(u, λ), ∀λ ∈ Λ1. Resultantly, we have that:
fR(θip)(u, r, λ) = Pi(fR(p)(u, r, λ)),  ∀λ ∈ Λ1   (12)
In other words, if one of the learned mappings fR(p)(u, r, λ) experiences a permutation Pi along r, all the learned representations fR(p)(u, r, λ), ∀λ ∈ Λ1, must experience the exact same permutation Pi as well. Resultantly, the equivariance property of the mapping fR ensures that all the Λ1 learned feature representations fR(p)(u, λ) “move synchronously” as a function of the input rotation θi.
Likewise, if we apply a cyclic equivariant attention mechanism ACλ independently on top of each λ learned representation fR(p)(u, λ), we obtain that the relation
ACλ(fR(θip))(u, r, λ) = Pi(ACλ(fR(p))(u, r, λ)),  ∀λ ∈ Λ1   (13)
must hold as well. Similarly to the case illustrated in Eq. 12, and given that ACλ is equivariant to cyclic permutations on its domain, we obtain that all the Λ1 learned attention masks ACλ “move synchronously” as a function of the input rotation θi as well (see Fig. 4).
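The toy sketch below (ours, with assumed sizes) mimics this situation at a single spatial position: a rotation of the input shifts every channel's orientation responses by the same amount (Eq. 12), and applying a cyclic-equivariant attention mask independently per channel commutes with that common shift (Eq. 13).

```python
import numpy as np

def circulant(a):
    return np.stack([np.roll(a, j) for j in range(len(a))], axis=1)

def cyclic_attention(x, a_c):
    a = np.exp((x @ circulant(a_c)) / len(x))
    a /= a.sum()
    return (a / a.max()) * x

rng = np.random.default_rng(2)
rmax, n_channels = 8, 5
X = rng.normal(size=(rmax, n_channels))        # f_R(p)(u): orientations x channels
a_cs = rng.normal(size=(n_channels, rmax))     # one attention vector per channel lambda
i = 3                                          # input rotated by theta_i

X_rot = np.roll(X, i, axis=0)                  # Eq. 12: all channels shift by the same i
att     = np.stack([cyclic_attention(X[:, l],     a_cs[l]) for l in range(n_channels)], axis=1)
att_rot = np.stack([cyclic_attention(X_rot[:, l], a_cs[l]) for l in range(n_channels)], axis=1)
assert np.allclose(att_rot, np.roll(att, i, axis=0))   # Eq. 13: attended outputs shift too
```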
From Eq. 13 and Figure 4, one can clearly see that, by utilizing ACλ independently along r and taking advantage of the fact that all Λ1 learned feature representations are tied with one another via fR, one is able to prioritize learning of feature representations that co-occur together, as opposed to the much looser formulation in Eq. 12, where feedback is obtained from all orientations. | 1. What is the main contribution of the paper regarding group equivariance and self-attention?
2. What are the strengths and weaknesses of the proposed method, particularly in terms of its experimental results and writing clarity?
3. How does the paper's approach differ from previous works in the field, especially those focusing on mathematical aspects rather than neuroscientific perspectives?
4. Can the authors provide more explanations or examples to help readers better understand technical terms such as "co-occurrence" and "dynamically learn"?
5. Is the co-occurrence envelope hypothesis a novel contribution of the paper, and if so, how does it relate to the authors' conclusions about learned feature representation and approximate equivariance?
6. Could the authors elaborate on their definition of the co-occurrence envelope and provide layman's terms explanation for it?
7. Are there any specific parts of the paper that need improvement in terms of clarity, such as the section on identifying the co-occurrence envelope or the equation 8 second equality?
8. Does the paper use row-vector convention, and if so, how does it affect the math?
9. How does the paper's subspace restriction on the matrix A impact the results, and is it necessary to specify this restriction?
10. Would it have been more straightforward to explain the method by saying they used a roto-translation or p4m equivariant CNN with attention after each convolution, deriving the constraint on attention matrices from there? | Review | Review
[Post-rebuttal update]
Having read the rebuttals and seen the new draft, the authors have answered a lot of my concerns. I am still unsatisfied about the experimental contribution, but I guess producing a paper full of theory and good experiments is a tall ask. Having also read through the concerns of the other reviews and the rebuttal to them, I have decided to upgrade my review to a 6.
*Paper summary*
The paper combines attention with group equivariance, specifically looking at the p4m group of rotations, translations, and flips. The basic premise is to use a group equivariant CNN of, say, Cohen and Welling (2016), and use self-attention on top. The authors derive a form of self-attention that does not destroy the equivariance property.
*Paper decision*
I have decided that the paper be given a weak reject. The method seems sound and I think this in itself is a great achievement. But the experiments lack focus. Just showing that you get better accuracy results does not actually test why attention helps in an equivariant setting. That said, I feel the lack of clarity in the writing is actually the main drawback. The maths is poorly explained and the technical jargon is quite confusing. I think this can be improved in a camera-ready version or in submission to a later conference, should overall acceptance not be met.
*Supporting arguments*
I enjoyed the motivation and discussion on equivariance from a neuroscientific perspective. This is something I have not seen much of in the recent literature (which is more mathematical in nature) and serves as a refreshing take on the matter. There was a good review of the neuroscientific literature and I felt that the conclusions, which were draw (of approximate equivariance, and learned canonical transformations) were well motivated by these paper.
The paper is well structured. That said, I found the clarity of the technical language at times quite difficult to follow because terms were not defined. By way of example, I still have trouble understanding terms like “co-occurence” or “dynamically learn”. In the co-occurence envelope hypothesis, for instance, what does it mean for a learned feature representation to be “optimal in the set of transformations that co-occur”. Against what metric exactly would a representation be optimal? This is not defined.
That said, having followed the maths (because the text was too confusing), I feel that the content and conclusions of the paper are technically sound.
*Questions/notes for the authors*
- I would like to know whether the co-occurence envelope hypothesis is the authors’ own contribution. This was not apparent to me from the text.
- I’m not sure what exactly the co-occurence envelope is. It does not seem to be defined very precisely. What is it in layman’s terms?
- I found the section “Identifying the co-occurence envelope” very confusing. I’m not sure what the authors are trying to explain here. Is it that a good feature representation of a face would use the *relevant* offsets/rotations/etc. of visual features from different parts of the face, independent of global rotation?
- Is Figure 1 supposed to be blurry?
- At the end of paragraph 1 you have written: sdfgsdfg asdfasdf. Please delete this.
- I believe equation 4 is a roto-translational convolution since it is equivariant to rotation *and translation*. Furthermore, it is not exactly equivariant due to the fact that you are defining input on a 2D square grid, but that is a minor detail in the context of this work.
- Now that we have automatic differentiation, is the section on how to work out the gradients in Equations 5-7 really necessary?
- In equation 8 (second equality), you have said f_R^l(F^l) = A(f_R^l(F^l)). How can this be true if A is not the identity? Giving the benefit of the doubt, this could just be a typo.
- Please define \odot (I think it’s element-wise multiplication).
- Are you using row-vector convention? That would resolve some of my confusion with the maths.
- You define the matrix A as in the space [0,1]^{n x m}. While sort of true, it is more precise to note that each column is actually restricted to a simplex, so A lives in a subspace of [0,1]^{n x m}.
- I think it would have been easier just say that you are using a roto-translation or p4m equivariant CNN with attention after each convolution. Then you could derive the constraint on the attention matrices to maintain equivariance. It would be easier to follow and make easy connections with existing literature on the topic. |
ICLR | Title
Co-Attentive Equivariant Neural Networks: Focusing Equivariance On Transformations Co-Occurring in Data
Abstract
Equivariance is a desirable property, as it produces much more parameter-efficient neural architectures and preserves the structure of the input through the feature mapping. Even though some combinations of transformations might never appear (e.g., an upright face with a horizontal nose), current equivariant architectures consider the set of all possible transformations in a transformation group when learning feature representations. In contrast, the human visual system is able to attend to the set of relevant transformations occurring in the environment and utilizes this information to assist and improve object recognition. Based on this observation, we modify conventional equivariant feature mappings such that they are able to attend to the set of co-occurring transformations in data, and we generalize this notion to act on groups consisting of multiple symmetries. We show that our proposed co-attentive equivariant neural networks consistently outperform conventional rotation equivariant and rotation & reflection equivariant neural networks on rotated MNIST and CIFAR-10.
1 INTRODUCTION
Thorough experimentation in the fields of psychology and neuroscience has provided support to the intuition that our visual perception and cognition systems are able to identify familiar objects despite modifications in size, location, background, viewpoint and lighting (Bruce & Humphreys, 1994). Interestingly, we are not just able to recognize such modified objects, but are able to characterize which modifications have been applied to them as well. As an example, when we see a picture of a cat, we are not just able to tell that there is a cat in it, but also its position, its size, facts about the lighting conditions of the picture, and so forth. Such observations suggest that the human visual system is equivariant to a large transformation group containing translation, rotation, scaling, among others. In other words, the mental representation obtained by seeing a transformed version of an object, is equivalent to that of seeing the original object and transforming it mentally next.
These fascinating abilities exhibited by biological visual systems have inspired a large field of research towards the development of neural architectures able to replicate them. Among these, the most popular and successful approach is the Convolutional Neural Network (CNN) (LeCun et al., 1989), which incorporates equivariance to translation via convolution. Unfortunately, in contrast to the human visual system, CNNs do not exhibit equivariance to other transformations encountered in visual data (e.g., rotations). Interestingly, however, if an ordinary CNN happens to learn rotated copies of the same filter, the stack of feature maps becomes equivariant to rotations even though individual feature maps are not (Cohen & Welling, 2016). Since ordinary CNNs must learn such rotated copies independently, they effectively spend a substantial number of network parameters suboptimally to this end (see Fig. 3 in Krizhevsky et al. (2012)). Based on the idea that equivariance in CNNs can be extended to larger transformation groups by stacking convolutional feature maps, several approaches have emerged to extend equivariance to, e.g., planar rotations (Dieleman et al., 2016; Marcos et al., 2017; Weiler et al., 2018; Li et al., 2018), spherical rotations (Cohen et al., 2018; Worrall & Brostow, 2018; Cohen et al., 2019), scaling (Marcos et al., 2018; Worrall & Welling, 2019) and general transformation groups (Cohen & Welling, 2016), such that transformed copies of a single entity are not required to be learned independently.
Although incorporating equivariance to arbitrary transformation groups is conceptually and theoretically similar1, the real-world evidence motivating their integration can differ strongly. Several studies in neuroscience and psychology have shown that our visual system does not react equally to all transformations we encounter in visual data. Take, for instance, translation and rotation. Although we easily recognize objects independently of their position of appearance, a large corpus of experimental research has shown that this is not always the case for in-plane rotations. Yin (1969) showed that mono-oriented objects, i.e., complex objects such as faces which are customarily seen in one orientation, are much more difficult to recognize accurately when presented upside-down. This behaviour has been reproduced, among others, for magazine covers (Dallett et al., 1968), symbols (Henle, 1942) and even familiar faces (e.g., of classmates) (Brooks & Goldstein, 1963). Intriguingly, Schwarzer (2000) found that this effect is exacerbated with age (adults suffer from it much more than children), but that adults are much faster and more accurate in detecting mono-oriented objects in usual orientations. Based on these studies, we draw the following conclusions:
• The human visual system does not perform (fully) equivariant feature transformations to visual data. Consequently, it does not react equally to all possible input transformations encountered in visual data, even if they belong to the same transformation group (e.g., in-plane rotations).
• The human visual system does not just encode familiarity to objects but seems to learn through experience the poses in which these objects customarily appear in the environment to assist and improve object recognition (Freire et al., 2000; Riesenhuber et al., 2004; Sinha et al., 2006).
Complementary studies (Tarr & Pinker, 1989; Oliva & Torralba, 2007) suggest that our visual system encodes orientation atypicality relative to the context rather than on an absolute manner (Fig. 1). Motivated by the aforementioned observations we state the co-occurrence envelope hypothesis:
The Co-occurrence Envelope Hypothesis. By allowing equivariant feature mappings to detect transformations that co-occur in the data and focus learning on the set formed by these co-occurrent transformations (i.e., the co-occurrence envelope of the data), one is able to induce learning of more representative feature representations of the data, and, resultantly, enhance the descriptive power of neural networks utilizing them. We refer to one such feature mapping as co-attentive equivariant.
Identifying the co-occurrence envelope. Consider a rotation equivariant network receiving two copies of the same face (Fig. 2a). A conventional rotation equivariant network is required to perform inference and learning on the set of all possible orientations of the visual patterns constituting a face regardless of the input orientation (Fig. 2b). However, by virtue of its rotation equivariance, it is able to recognize rotated faces even if it is trained on upright faces only. A possible strategy to simplify the task at hand could be to restrict the network to react exclusively to upright faces (Fig. 2c). In this case, the set of relevant visual pattern orientations becomes much smaller, at the expense of disrupting equivariance to the rotation group. Resultantly, the network would risk becoming unable to detect faces in any other orientation than those it is trained on. A better strategy results from restricting the set of relevant pattern orientations by defining them relative to one another
1It is achieved by developing feature mappings that utilize the transformation group in the feature mapping itself (e.g., translating a filter in the course of a feature transformation is used to obtain translation equivariance).
(e.g., mouth orientation relative to the eyes) as opposed to absolutely (e.g., upright mouth) (Fig. 2d). In such a way, we are able to exploit information about orientation co-occurrences in the data without disrupting equivariance. The set of co-occurrent orientations in Fig. 2d corresponds to the co-occurrence envelope of the samples in Fig. 2a for the transformation group defined by rotations.
In this work, we introduce co-attentive equivariant feature mappings and apply them on existing equivariant neural architectures. To this end, we leverage the concept of attention (Bahdanau et al., 2014) to modify existing mathematical frameworks for equivariance, such that co-occurrent transformations can be detected. It is critical not to disrupt equivariance in the attention procedure as to preserve it across the entire network. To this end, we introduce cyclic equivariant self-attention, a novel attention mechanism able to preserve equivariance to cyclic groups.
Experiments and results. We explore the effects of co-attentive equivariant feature mappings for single and multiple symmetry groups. Specifically, we replace conventional rotation equivariant mappings in p4-CNNs (Cohen & Welling, 2016) and DRENs (Li et al., 2018) with co-attentive ones. We show that co-attentive rotation equivariant neural networks consistently outperform their conventional counterparts in fully (rotated MNIST) and partially (CIFAR-10) rotational settings. Subsequently, we generalize cyclic equivariant self-attention to multiple symmetry groups and apply it to p4m-CNNs (Cohen & Welling, 2016) (equivariant to rotation and mirror reflections). Our results are in line with those obtained for single symmetry groups and support our stated hypothesis.
Contributions.
• We propose the co-occurrence envelope hypothesis and demonstrate that conventional equivariant mappings are consistently outperformed by our proposed co-attentive equivariant ones.
• We generalize co-attentive equivariant mappings to multiple symmetry groups and provide, to the best of our knowledge, the first attention mechanism acting generally on symmetry groups.
2 PRELIMINARIES
Equivariance. We say that a feature mapping f : X → Y is equivariant to a (transformation) group G (or G-equivariant) if it commutes with actions of the group G acting on its domain and codomain:
f(T^X_g(x)) = T^Y_g(f(x))   ∀g ∈ G, x ∈ X   (1)
where T^{(·)}_g denotes a group action in the corresponding space. In other words, the ordering in which we apply a group action Tg and the feature mapping f is inconsequential. There are multiple reasons as to why equivariant feature representations are advantageous for learning systems. Since group actions T^X_g produce predictable and interpretable transformations T^Y_g in the feature space, the hypothesis space of the model is reduced (Weiler et al., 2018) and the learning process simplified (Worrall et al., 2017). Moreover, equivariance allows the construction of L-layered networks by
stacking several equivariant feature mappings {f^(1), ..., f^(l), ..., f^(L)} such that the input structure as regarded by the group G is preserved (e.g., CNNs and input translations). As a result, an arbitrary intermediate network representation (f^(l) ◦ ... ◦ f^(1))(x), l ≤ L, is able to take advantage of the structure of x as well. Invariance is a special case of equivariance in which T^Y_g = Id_Y, the identity, and thus all group actions in the input space are mapped to the same feature representation.
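As a minimal numerical illustration of Eq. 1 (ours, not part of the paper), the sketch below checks translation equivariance of a circular cross-correlation on a periodic grid, where the identity holds exactly:

```python
import numpy as np

def circular_corr(x, w):
    """Periodic cross-correlation: y(u) = sum_{u'} x(u + u') w(u')."""
    y = np.zeros_like(x)
    for du in range(w.shape[0]):
        for dv in range(w.shape[1]):
            y += w[du, dv] * np.roll(x, shift=(-du, -dv), axis=(0, 1))
    return y

rng = np.random.default_rng(3)
x = rng.normal(size=(16, 16))
w = rng.normal(size=(3, 3))
shift = (5, 2)                                             # group action T_g: a translation

lhs = circular_corr(np.roll(x, shift, axis=(0, 1)), w)     # f(T_g^X(x))
rhs = np.roll(circular_corr(x, w), shift, axis=(0, 1))     # T_g^Y(f(x))
assert np.allclose(lhs, rhs)                               # Eq. 1
```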
Equivariant neural networks. In neural networks, the integration of equivariance to arbitrary groups G has been achieved by developing feature mappings f that utilize the actions of the group G in the feature mapping itself. Interestingly, equivariant feature mappings encode equivariance as parameter sharing with respect to G, i.e., the same weights are reused for all g ∈ G. This makes the inclusion of larger groups extremely appealing in the context of parameter efficient networks.
Conventionally, the l-th layer of a neural network receives a signal x^(l)(u, λ) (where u ∈ Z² is the spatial position and λ ∈ Λ_l is the unstructured channel index, e.g., RGB channels in a color image), and applies a feature mapping f^(l) : Z² × Λ_l → Z² × Λ_{l+1} to generate the feature representation x^(l+1)(u, λ). In CNNs, the feature mapping f^(l) := f^(l)_T is defined by a convolution2 (⋆_{R²}) between the input signal x^(l) and a learnable convolutional filter W^(l)_{λ′,λ}, λ′ ∈ Λ_l, λ ∈ Λ_{l+1}:
x^(l+1)(u, λ) = [x^(l) ⋆_{R²} W^(l)_{λ′,λ}](u, λ) = Σ_{λ′,u′} x^(l)(u + u′, λ′) W^(l)_{λ′,λ}(u′)   (2)
By sliding W^(l)_{λ′,λ} across u, CNNs are able to preserve the spatial structure of the input x through the feature mapping f^(l)_T and successfully provide equivariance to the translation group T = (Z², +).
The underlying idea for the extension of equivariance to larger groups in CNNs is conceptually equivalent to the strategy utilized by LeCun et al. (1989) for translation equivariance. Consider, for instance, the inclusion of equivariance to the set of rotations by θr degrees3: Θ = {θ_r = r·2π/r_max}_{r=1}^{r_max}. To this end, we modify the feature mapping f^(l) := f^(l)_R : Z²×Θ×Λ_l → Z²×Θ×Λ_{l+1} to include the rotations defined by Θ. Let x^(l)(u, r, λ) and W^(l)_{λ′,λ}(u, r) be the input and the convolutional filter with an affixed index r for rotation. The roto-translational convolution (⋆_{R²⋊Θ}) f^(l)_R is defined as:
x^(l+1)(u, r, λ) = [x^(l) ⋆_{R²⋊Θ} W^(l)_{λ′,λ}](u, r, λ) = Σ_{λ′,r′,u′} x^(l)(u + u′, r′, λ′) W^(l)_{λ′,λ}(θ_r u′, r′ − r)   (3)
Since f^(l)_R produces (dim(Θ) = r_max) times more output feature maps than f^(l)_T, we need to learn a much smaller number of convolutional filters W^(l)_{λ′,λ} to produce the same number of output feature channels.
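To make the role of the affixed rotation index r tangible, the toy sketch below (ours) computes global inner products of an image with the four 90°-rotated copies of a filter; rotating the input by 90° then cyclically permutes this response vector, which is the mechanism Eq. 3 exploits. Exact 90° rotations are used so the permutation holds exactly; this is an illustration, not a full implementation of Eq. 3.

```python
import numpy as np

def rotation_responses(img, w, rmax=4):
    """Inner product of img with each of the rmax 90-degree rotated copies of w."""
    return np.array([np.sum(img * np.rot90(w, k=r)) for r in range(rmax)])

rng = np.random.default_rng(4)
img = rng.normal(size=(9, 9))
w = rng.normal(size=(9, 9))          # filter of the same size: a single "global" location

y = rotation_responses(img, w)
y_rot = rotation_responses(np.rot90(img, k=1), w)   # rotate the input by 90 degrees
assert np.allclose(y_rot, np.roll(y, 1))            # responses permute cyclically along r
```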
Learning equivariant neural networks. Consider the change of variables g = u, G = Z2, g ∈ G and g = (u, r), G = Z2oΘ, g ∈ G in Eq. 2 and Eq. 3, respectively. In general, neural networks are learned via backpropagation (LeCun et al., 1989) by iteratively applying the chain rule of derivation to update the network parameters. Intuitively, the networks outlined in Eq. 2 and Eq. 3 obtain feedback from all g ∈ G and, resultantly, are inclined to learn feature representations that perform optimally on the entire group G. However, as outlined in Fig. 2 and Section 1, several of those feature combinations are not likely to simultaneously appear. Resultantly, the hypothesis space of the model as defined by Weiler et al. (2018) might be further reduced.
Note that this reasoning is tightly related to existing explanations for the large success of spatial (Xu et al., 2015; Woo et al., 2018; Zhang et al., 2018) and temporal (Luong et al., 2015; Vaswani et al., 2017; Mishra et al., 2017; Zhang et al., 2018) attention in deep learning architectures.
3 CO-ATTENTIVE EQUIVARIANT NEURAL NETWORKS
In this section we define co-attentive feature mappings and apply them in the context of equivariant neural networks (Figure 3). To this end, we introduce cyclic equivariant self-attention and utilize it to construct co-attentive rotation equivariant neural networks. Subsequently, we show that cyclic equivariant self-attention is extendable to larger symmetry groups and make use of this fact to construct co-attentive neural networks equivariant to rotations and mirror reflections.
2Formally, it is a correlation. However, we hold on to the standard deep learning terminology. 3The reader may easily verify that Θ (and hence Z² ⋊ Θ, with ⋊ the semi-direct product) forms a group.
3.1 CO-ATTENTIVE ROTATION EQUIVARIANT NEURAL NETWORKS
To allow rotation equivariant networks to utilize and learn co-attentive equivariant representations, we introduce an attention operator A^(l) on top of the roto-translational convolution f^(l)_R, which makes it possible to discriminate along the rotation axis r among the generated feature responses x^(l)(u, r, λ). Formally, our co-attentive rotation equivariant feature mapping f̄^(l)_R is defined as follows:
x^(l+1) = f̄^(l)_R(x^(l)) = A^(l)(f^(l)_R(x^(l))) = A^(l)([x^(l) ⋆_{R²⋊Θ} W^(l)_{λ′,λ}])   (4)
Theoretically, A^(l) could be defined globally over f^(l)_R(x^(l)) (i.e., simultaneously along u, r, λ) as depicted in Eq. 4. However, we apply attention locally to: (1) grant the algorithm enough flexibility to attend locally to the co-occurrence envelope of feature representations and (2) utilize attention exclusively along the rotation axis r, such that our contributions are clearly separated from those possibly emerging from spatial attention. To this end, we apply attention pixel-wise on top of f^(l)_R(x^(l)) (Eq. 5). Furthermore, we assign a single attention instance A^(l)_λ to each learned feature representation and utilize it across the spatial dimension of the output feature maps4:
x^(l+1)(u, r, λ) = A^(l)_λ({x^(l+1)(u, r̂, λ)}_{r̂=1}^{r_max})(r)   (5)
Attention and self-attention. Consider a source vector x = (x1, ..., xn) and a target vector y = (y1, ..., ym). In general, an attention operator A leverages information from the source vector x (or multiple feature mappings thereof) to estimate an attention matrix A ∈ [0, 1]^{n×m}, such that: (1) the element A_{i,j} provides an importance assessment of the source element x_i with reference to the target element y_j and (2) the sum of importance over all x_i is equal to one: Σ_i A_{i,j} = 1. Subsequently, the matrix A is utilized to modulate the original source vector x so as to attend to a subset of relevant source positions with regard to y_j: x̃_j = (A_{:,j})^T ⊙ x (where ⊙ denotes the Hadamard, i.e., element-wise, product). A special case of attention is that of self-attention (Cheng et al., 2016), in which the target and the source vectors are equal (y := x). In other words, the attention mechanism estimates the influence of the sequence x on the element x_j for its weighting.
4For a more meticulous discussion on how Eq. 5 attains co-occurrent attention, see Appendix A.
In general, the attention matrix5 A ∈ [0, 1]^{n×m} is constructed via nonlinear space transformations fà : R^n → R^{n×m} of the source vector x, on top of which the softmax function is applied: A_{:,j} = softmax(fÃ(x)_{:,j}). This ensures that the properties previously mentioned hold. Typically, the mappings fà found in the literature take feature transformation pairs of x as input (e.g., {s, H} in RNNs (Luong et al., 2015), {Q, K} in self-attention networks (Vaswani et al., 2017)), and perform (non-)linear mappings on top of them, ranging from multiple feed-forward layers (Bahdanau et al., 2014) to several operations between the transformed pairs (Luong et al., 2015; Vaswani et al., 2017; Mishra et al., 2017; Zhang et al., 2018). Due to the computational complexity of these approaches and the fact that we make extensive pixel-wise use of fà in every network layer, their direct integration in our framework is computationally prohibitive. To circumvent this problem, we modify the usual self-attention formulation so as to enhance its descriptive power in a much more compact setting.
Compact local self-attention. Initially, we relax the range of values of A from [0, 1]^{n×n} to R^{n×n}. This allows us to encode much richer relationships between element pairs (x_i, x_j) at the cost of less interpretability. Subsequently, we define A = x^T Ã, where à ∈ R^{n×n} is a matrix of learnable parameters. Furthermore, instead of directly applying the softmax on the columns of A, we first sum over the contributions of each element x_i to obtain a vector a = {Σ_i A_{i,j}}_{j=1}^{n}, which is then passed to the softmax function. Following Vaswani et al. (2017), we prevent the softmax function from reaching regions of low gradient by scaling its argument by (√dim(A))^{-1} = 1/n: ã = softmax((1/n) a). Lastly, we counteract the contractive behaviour of the softmax function by normalizing ã before weighting x, so as to preserve the magnitude range of its argument. This allows us to use A in deep architectures. Our compact self-attention mechanism is summarized as follows:
a = {Σ_i A_{i,j}}_{j=1}^{n} = Σ_i (x^T Ã)_{i,j} = xÃ   (6)
ã = softmax((1/n) a)   (7)
x̂ = A(x) = (ã / max(ã)) ⊙ x   (8)
The cyclic equivariant self-attention operator AC. Consider {x(u, r, λ)}_{r=1}^{r_max}, the vector of responses generated by a roto-translational convolution fR stacked along the rotation axis r. By applying self-attention along r, we are able to generate an importance matrix A ∈ R^{r_max×r_max} relating all pairs of (θi, θj)-rotated responses in the rotational group Θ at a certain position. We refer to this attention mechanism as full self-attention (AF). Although AF is able to encode arbitrary linear source-target relationships for each target position, it is not restricted to conserve equivariance to Θ. As a result, we risk incurring the behavior outlined in Fig. 2c. Before we further elaborate on this issue, we introduce the cyclic permutation operator Pi, which induces a cyclic shift of i positions on its argument: σ_{P_i}(x_j) = x_{(j+i) mod dim(x)}, ∀x_j ∈ x.
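Before turning to that issue, the compact mechanism of Eqs. 6–8 can be transcribed directly; the sketch below is our own illustration (names and toy sizes are assumptions), with x treated as a row vector so that a = xÃ:

```python
import numpy as np

def compact_self_attention(x, A_tilde):
    """Compact local self-attention (Eqs. 6-8) on a length-n response vector x."""
    n = len(x)
    a = x @ A_tilde                   # Eq. 6: a_j = sum_i x_i A~_{i,j}
    a_hat = np.exp(a / n)             # Eq. 7: softmax with scaled argument
    a_hat /= a_hat.sum()
    return (a_hat / a_hat.max()) * x  # Eq. 8: normalize, then re-weight x

# Hypothetical usage on the r_max oriented responses at one pixel and channel,
# with an unconstrained (full self-attention, A_F) matrix A~:
rng = np.random.default_rng(5)
rmax = 8
x = rng.normal(size=rmax)
A_tilde = rng.normal(size=(rmax, rmax))
x_att = compact_self_attention(x, A_tilde)
```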
Consider a full self-attention operator AF acting on top of a roto-translational convolution fR. Let p be an input pattern for which fR produces a strong activation only in the feature map x(r̂) = fR(p)(r̂), for a single r̂ ∈ {r}_{r=1}^{r_max}. Intuitively, during learning, only the corresponding attention coefficients Ã_{:,r̂} in AF would be significantly increased. Now, consider the presence of the input pattern θip, a θi-rotated variant of p. By virtue of the rotational equivariance property of the feature mapping fR, we obtain (locally) an exactly equal response to that of p up to a cyclic permutation of i positions on r, and thus we obtain a strong activation in the feature map Pi(x(r̂)) = x(σ_{P_i}(r̂)). We encounter two problems in this setting: AF is not able to detect that p and θip correspond to the exact same input pattern and, as all attention coefficients other than Ã_{:,r̂} are small, the network might considerably dampen the response generated by θip. As a result, the network might (1) squander important feedback information during learning and (2) induce learning of repeated versions of the same pattern for different orientations. In other words, AF does not behave equivariantly as a function of θi. Interestingly, we are able to introduce prior knowledge into the attention model by restricting the structure of Ã. By leveraging the idea of equivariance to the cyclic group Cn, we are able to solve the problems exhibited by AF and simultaneously reduce the number of additional parameters required by the self-attention mechanism (from r_max² to r_max). Consider again the input patterns p and θip. We incorporate the intuition that p and θip are one and the same entity, and thus that fR (locally) generates the same output feature map up to a cyclic permutation Pi: fR(θip) = Pi(fR(p)). Consequently, the attention mechanism should produce the exact same output for both p and θip up to the same cyclic permutation Pi. In other words, A (and thus Ã) should be equivariant to cyclic permutations.
5Technically, each column of A is restricted to a simplex and hence A lives in a subspace of [0, 1]^{n×m}.
A well-known fact in mathematics is that a matrix A is equivariant with respect to cyclic permutations of the domain if and only if it is circulant (Alkarni, 2001; Åhlander & Munthe-Kaas, 2005). We make use of this fact and leverage the concept of circulant matrices to impose cyclic equivariance on the structure of Ã. Formally, a circulant matrix C ∈ R^{n×n} is composed of n cyclic permutations of its defining vector c = {c_i}_{i=1}^{n}, such that its j-th column is a cyclic permutation of j − 1 positions of c: C_{:,j} = P_{j−1}(c)^T. We construct our cyclic equivariant self-attention operator AC by defining à as a circulant matrix specified by a learnable attention vector a^C = {a^C_i}_{i=1}^{r_max}:
à = {Pj−1(aC)T }nj=1 (9)
and subsequently applying Eqs. 6 - 8. As a result, AC is able to assign the responses generated by fR for rotated versions of an input pattern p to a unique entity: fR(θip) = Pi(fR(p)), and to dynamically adjust its output to the angle of appearance θi, such that the attention operation does not disrupt its propagation downstream in the network: AC(fR(θip)) = Pi(AC(fR(p))). Consequently, the attention weights a^C are updated equally regardless of the specific value of θi. Due to these properties, AC does not incur any of the problems outlined earlier in this section. In conclusion, our co-attentive rotation equivariant feature mapping f̄^(l)_R is defined as follows:
x^(l+1)(u, r, λ) = f̄^(l)_R(x^(l))(u, r, λ) = AC^(l)_λ([x^(l) ⋆_{R²⋊Θ} W^(l)_{λ′,λ}])(u, r, λ)   (10)
Note that a co-attentive equivariant feature mapping f̄R is approximately equal (up to the normalized softmax operation of Eq. 8) to a conventional equivariant one fR if à = αI for any α ∈ R.
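A quick numerical sanity check of this property (our own sketch): the circulant à of Eq. 9 commutes with the one-step cyclic shift matrix, and therefore with all of its powers, which is exactly what makes AC(Pi(·)) = Pi(AC(·)) hold.

```python
import numpy as np

rng = np.random.default_rng(6)
rmax = 8
a_c = rng.normal(size=rmax)
A_tilde = np.stack([np.roll(a_c, j) for j in range(rmax)], axis=1)   # Eq. 9

P = np.roll(np.eye(rmax), 1, axis=0)        # one-step cyclic shift: (P x)_i = x_{i-1}
assert np.allclose(P @ A_tilde, A_tilde @ P)
```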
3.2 EXTENDING AC TO MULTIPLE SYMMETRY GROUPS
The self-attention mechanisms outlined in the previous section are easily extendable to larger groups consisting of multiple symmetries. Consider, for instance, the group θrm of rotations by θr degrees and mirror reflections m defined analogously to the group p4m in Cohen & Welling (2016). Let p(u, r,m, λ) be an input signal with an affixed index m ∈ {m0,m1} for mirror reflections (m1 indicates mirrored) and fθrm be a group convolution (Cohen & Welling, 2016) on the θrm group. The group convolution fθrm produces two times as many output channels (2rmax : m0rmax+m1rmax) as those generated by the roto-translational convolution fR (Eq. 3, Fig. 3).
Full self-attention AF can be integrated directly by modulating the output of fθrm as depicted in Sec. 3.1 with à ∈ R^{2r_max×2r_max}. Here, AF relates each of the group convolution responses with one another. However, just as for fR, AF disrupts the equivariance of fθrm to the θrm group. Similarly, the cyclic equivariant self-attention operator AC can be extended to multiple symmetry groups as well. Before we continue, we introduce the cyclic permutation operator P_{i,t}, which induces a cyclic shift of i positions on its argument along the transformation axis t. Consider the input patterns p and θip outlined in the previous section and mp, a mirrored instance of p. Let x(u, r, m, λ) = fθrm(p)(u, r, m, λ) be the response of the group convolution fθrm for the input pattern p. By virtue of the rotation equivariance of fθrm, the response generated for θip is equivalent to that of p up to a cyclic permutation of i positions along the rotation axis r: fθrm(θip)(u, r, m, λ) = P_{i,r}(fθrm(p))(u, r, m, λ) = x(u, σ_{P_i}(r), m, λ). Similarly, by virtue of the mirror equivariance of fθrm, the response generated by mp is equivalent to that of p up to a cyclic permutation of one position along the mirroring axis m: fθrm(mp)(u, r, m, λ) = P_{1,m}(fθrm(p))(u, r, m, λ) = x(u, r, σ_{P_1}(m), λ). Note that if we take two elements g, h of a group, their composition gh is also an element of the group. As a result, fθrm((mθi)p)(u, r, m, λ) = (P_{1,m} ◦ P_{i,r})(fθrm(p))(u, r, m, λ) = P_{1,m}(P_{i,r}(x))(u, r, m, λ) = P_{1,m}(x)(u, σ_{P_i}(r), m, λ) = x(u, σ_{P_i}(r), σ_{P_1}(m), λ). In other words, in order to extend AC to the θrm group, it is necessary to restrict the structure of à such that it respects the permutation laws imposed by the equivariant mapping fθrm. Let us rewrite x(u, r, m, λ) as x(u, g, λ), g = (m, r) ∈ {m0, m1} × {r̂}_{r̂=1}^{r_max}. In this case, we must impose a circulant block matrix structure on à such that: (1) the composing blocks permute internally as defined by P_{i,r} and (2) the blocks themselves permute with one another as defined by P_{1,m}. Formally, à is defined as:
à = [ Ã1 Ã2 Ã2 Ã1 ] (11)
where Ãi ∈ R^{r_max×r_max}, i ∈ {1, 2}, are circulant matrices (Eq. 9). Importantly, the ordering of the permutation laws in à is interchangeable if the input vector is modified accordingly, i.e., g = (r, m).
In conclusion, cyclic equivariant self-attention AC is directly extendable to act on any G-equivariant feature mapping fG, and for any symmetry group G, provided that the group actions T^Y_g produce cyclic permutations on the codomain of fG. To this end, one must restrict the structure of à to that of a circulant block matrix, such that all permutation laws of T^Y_g hold: T^Y_g(AC(fG)) = AC(T^Y_g(fG)), ∀g ∈ G.
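Analogously (again our own sketch), the block-circulant à of Eq. 11 commutes with the permutation matrix of a combined rotation-and-mirror action, which is the matrix-level statement of T^Y_g(AC(fG)) = AC(T^Y_g(fG)):

```python
import numpy as np

rng = np.random.default_rng(7)
rmax = 4
circ = lambda a: np.stack([np.roll(a, j) for j in range(len(a))], axis=1)
A1, A2 = circ(rng.normal(size=rmax)), circ(rng.normal(size=rmax))
A_tilde = np.block([[A1, A2], [A2, A1]])                  # Eq. 11

R = np.roll(np.eye(rmax), 1, axis=0)                      # shift along r inside a block
M = np.block([[np.zeros((rmax, rmax)), np.eye(rmax)],
              [np.eye(rmax), np.zeros((rmax, rmax))]])    # swap the two mirror blocks
P = M @ np.kron(np.eye(2), R)                             # a combined theta_r m action
assert np.allclose(P @ A_tilde, A_tilde @ P)
```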
4 EXPERIMENTS
Experimental Setup. We validate our approach by exploring the effects of co-attentive equivariant feature mappings for single and multiple symmetry groups on existing equivariant architectures. Specifically, we replace conventional rotation equivariant mappings in p4-CNNs (Cohen & Welling, 2016) and DRENs (Li et al., 2018) with co-attentive equivariant ones and evaluate their effects in fully (rotated MNIST) and partially (CIFAR-10) rotational settings. Similarly, we evaluate co-attentive equivariant maps acting on multiple symmetry groups by replacing equivariant mappings in p4m-CNNs (Cohen & Welling, 2016) (equivariant to rotation and mirror reflections). Unless otherwise specified, we replicate as closely as possible the data processing, initialization strategies, hyperparameter values and evaluation strategies utilized by the baselines in our experiments. Note that the goal of this paper is to study and evaluate the relative effects obtained by co-attentive equivariant networks relative to their conventional counterparts. Accordingly, we do not perform any additional tuning relative to the baselines. We believe that improvements over our reported results are feasible by performing further parameter tuning (e.g., on the network structure or the used hyperparameters) on the proposed co-attentive equivariant networks.
The additional learnable parameters, i.e., those associated with the cyclic self-attention operator (Ã), are initialized identically to the rest of the layer. Subsequently, we replace the values of à along the diagonal by 1 (i.e., diag(Ãinit) = 1), such that Ãinit approximately resembles the identity I and, hence, co-attentive equivariant layers are initially approximately equal to equivariant ones.
Rotated MNIST. The rotated MNIST dataset (Larochelle et al., 2007) contains 62000 gray-scale 28x28 handwritten digits uniformly rotated over the entire circle [0, 2π). The dataset is split into training, validation and test sets of 10000, 2000 and 50000 samples, respectively. We replace rotation equivariant layers in p4-CNN (Cohen & Welling, 2016), DREN and DRENMaxPooling (Li et al., 2018) with co-attentive ones. Our results show that co-attentive equivariant networks consistently outperform conventional ones (see Table 1).
CIFAR-10. The CIFAR-10 dataset (Krizhevsky et al., 2009) consists of 60000 real-world 32x32 RGB images uniformly drawn from 10 classes. In contrast to the rotated MNIST dataset, this dataset does not exhibit rotation symmetry. The dataset is split into training, validation and test sets of 40000, 10000 and 10000 samples, respectively. We replace equivariant layers in the p4 and p4m variations of the All-CNN (Springenberg et al., 2014) and the ResNet44 (He et al., 2016) proposed by Cohen & Welling (2016) with co-attentive ones. Likewise, we modify the r x4-variations of the NIN (Lin et al., 2013) and ResNet20 (He et al., 2016) models proposed by Li et al. (2018) in the same manner. Our results show that co-attentive equivariant networks consistently outperform conventional ones in this setting as well (see Table 1).
Training convergence of equivariant networks. Li et al. (2018) reported that adding too many rotation equivariant (isotonic) layers decreased the performance of their models on CIFAR-10. As a consequence, they did not report results on fully rotation equivariant networks for this setting and attributed this behaviour to the lack of rotational symmetry in the data. We noticed that, with equal initialization strategies, rotation equivariant CNNs were much more prone to divergence than ordinary CNNs. This behaviour can be traced back to the additional feedback resulting from roto-translational convolutions (Eq. 3) compared to ordinary ones (Eq. 2). After further analysis, we noticed that the data preprocessing strategy utilized by Li et al. (2018) leaves some very large outlier values in the data (|x| > 100), which strongly contribute to the behaviour outlined before. In order to evaluate the relative contribution of co-attentive equivariant neural networks, we constructed fully equivariant DREN architectures based on their implementation. Although the obtained results were much worse than those originally reported in Li et al. (2018), we were able to stabilize training by clipping input values to the 99th percentile of the data (|x| ≤ 2.3) and reducing the learning rate to 0.01, such that the same hyperparameters could be used across all network types. The obtained results (see Table 1) indicate that DREN networks are comparatively better than CNNs both in fully and partially rotational settings, in contradiction to the conclusions drawn in Li et al. (2018).
This behaviour illustrates that, although the inclusion of equivariance to larger transformation groups is beneficial both in terms of accuracy and parameter efficiency, one must be aware that such benefits come with an increase in the network's susceptibility to divergence during training.
5 DISCUSSION AND FUTURE WORK
Our results show that co-attentive equivariant feature mappings can be utilized to enhance conventional equivariant ones. Interestingly, co-attentive equivariant mappings are beneficial both in partially and fully rotational settings. We attribute this to the fact that a set of co-occurring orientations between patterns can be easily defined (and exploited) in both settings. It is important to note that we utilized attention independently over each spatial position u on the codomain of the corresponding group convolution. Resultantly, we were restricted to mappings of the form xA, which, in turn, constrains our attention mechanism to have a circulant structure in order to preserve equivariance (since group actions acting in the codomain of the group convolution involve cyclic permutations and cyclic self-attention is applied in the codomain of the group convolution).
In future work, we want to extend the idea presented here to act on the entire group simultaneously (i.e., along u as well). By doing so, we lift our current restriction to mappings of the form xA and, therefore, may be able to develop attention instances with enhanced descriptive power. Following the same line of thought, we want to explore incorporating attention in the convolution operation itself. Resultantly, one is not restricted to act exclusively on the codomain of the convolution, but instead is able to impose structure in the domain of the mapping as well. Naturally, such an approach could lead to enhanced descriptiveness of the incorporated attention mechanism. Moreover, we want to utilize and extend more complex attention strategies (e.g., Bahdanau et al. (2014); Luong et al. (2015); Vaswani et al. (2017); Mishra et al. (2017)) such that they can be applied to large transformation groups without disrupting equivariance. As outlined earlier in Section 3.1, this becomes very challenging from a computational perspective as well, as it requires extensive usage of the corresponding attention mechanism. Resultantly, an efficient implementation thereof is mandatory. Furthermore, we want to extend co-attentive equivariant feature mappings to continuous (e.g., Worrall et al. (2017)) and 3D space (e.g., Cohen et al. (2018); Worrall & Brostow (2018); Cohen et al. (2019)) groups, and to applications other than visual data (e.g., speech recognition).
Finally, we believe that our approach could be refined and extended as a first step towards dealing with the enumeration problem of large groups (Gens & Domingos, 2014), such that functions acting on the group (e.g., group convolution) are approximated by evaluating them on the set of co-occurring transformations as opposed to on the entire group. Such approximations are expected to be very accurate, as non-co-occurrent transformations are rare. This can be thought of as sharpening co-occurrent attention into co-occurrent restriction.
6 CONCLUSION
We have introduced the concept of co-attentive equivariant feature mappings and applied it in the context of equivariant neural networks. By attending to the co-occurrence envelope of the data, we are able to improve upon the performance of conventional equivariant networks in fully (rotated MNIST) and partially (CIFAR-10) rotational settings. We developed cyclic equivariant self-attention, an attention mechanism able to attend to the co-occurrence envelope of the data without disrupting equivariance for a large set of transformation groups (i.e., all transformation groups G whose actions in the codomain of a G-equivariant feature mapping produce cyclic permutations). Our obtained results support the proposed co-occurrence envelope hypothesis.
ACKNOWLEDGMENTS
We gratefully acknowledge Jan Klein, Emile van Krieken, Jakub Tomczak and our anonymous reviewers for their helpful and valuable commentaries. This work is part of the Efficient Deep Learning (EDL) programme (grant number P16-25), which is partly funded by the Dutch Research Council (NWO) and Semiotic Labs.
A OBTAINING CO-OCCURRENT ATTENTION VIA EQUATION 5
In this section, we provide a detailed description of how co-occurrent attention is obtained via the method presented in the paper. Intuitively, a direct approach to address the problem illustrated in the introduction (Section 1) and Figure 2 requires an attention mechanism that acts simultaneously on r and λ (see Eq. 3). However, we illustrate how the depicted problem can be simplified, such that attention along r is sufficient, by taking advantage of the equivariance property of the network.
Let p be the input of a roto-translational convolution f_R : Z^2 × Θ × Λ_0 → Z^2 × Θ × Λ_1 as defined in Eq. 3, and Θ be the set of rotations by θ_r degrees: Θ = {θ_r = r (2π / r_max)}_{r=1}^{r_max}. Let f_R(p)(u) ∈ R^{r_max×Λ_1} be the matrix consisting of the r_max oriented responses for each learned representation λ ∈ Λ_1 at a certain position u. Since the vectors f_R(p)(u, λ) ∈ R^{r_max}, λ ∈ Λ_1, permute cyclically as a result of the rotation equivariance property of f_R, it is mandatory to ensure equivariance to cyclic permutations for each f_R(p)(u, λ) during the course of the attention procedure (see Section 3).
At first sight, one is inclined to think that there is no connection between the multiple vectors f_R(p)(u, λ) in f_R(p)(u), and, therefore, that in order to exploit co-occurrences, one must impose additional constraints along the λ axis. However, there is indeed an implicit restriction in f_R(p)(u) along λ resulting from the rotation equivariance property of the mapping f_R, which we can take advantage of to simplify the problem at hand. Consider, for instance, the input θ_i p, a θ_i-rotated version of p. By virtue of the equivariance property of f_R, we have (locally) that f_R(θ_i p) = P_i(f_R(p)). Furthermore, we know that this property must hold for all the learned feature representations f_R(p)(u, λ), ∀λ ∈ Λ_1. Resultantly, we have that:
f_R(θ_i p)(u, r, λ) = P_i(f_R(p)(u, r, λ)), ∀λ ∈ Λ_1 (12)

In other words, if one of the learned mappings f_R(p)(u, r, λ) experiences a permutation P_i along r, all the learned representations f_R(p)(u, r, λ), ∀λ ∈ Λ_1, must experience the exact same permutation P_i as well. Resultantly, the equivariance property of the mapping f_R ensures that all the Λ_1 learned feature representations f_R(p)(u, λ) “move synchronously” as a function of the input rotation θ_i.
Likewise, if we apply a cyclic equivariant attention mechanism ACλ independently on top of each λ learned representation fR(p)(u, λ), we obtain that the relation
A^C_λ(f_R(θ_i p))(u, r, λ) = P_i(A^C_λ(f_R(p))(u, r, λ)), ∀λ ∈ Λ_1 (13)

must hold as well. Similarly to the case illustrated in Eq. 12, and given that A^C_λ is equivariant to cyclic permutations on its domain, we obtain that all the Λ_1 learned attention masks A^C_λ “move synchronously” as a function of the input rotation θ_i as well (see Fig. 4).
From Eq. 13 and Figure 4, one can clearly see that by utilizing A^C_λ independently along r, and taking advantage of the fact that all Λ_1 learned feature representations are tied with one another via f_R, one is able to prioritize learning of feature representations that co-occur together, as opposed to the much looser formulation in Eq. 12, where feedback is obtained from all orientations. | 1. What is the main contribution of the paper, and how does it improve upon previous works?
2. How does the proposed attention mechanism work, and how does it differ from traditional self-attention mechanisms?
3. Can you provide more detail on the motivation behind the attention mechanism and how it addresses the problem outlined in Fig. 2?
4. How do the experimental results demonstrate the effectiveness of the proposed method, and how do they compare to baseline approaches?
5. What are some potential limitations or areas for improvement in the proposed approach, and how might they be addressed in future work?
6. How might the proposed attention mechanism be applied to other computer vision tasks or domains? | Review | Review
[Update after rebuttal period]
While I still find the paper somewhat hard to parse, the revision and responses have addressed most of my concerns. I think this paper should be accepted, because it presents a novel and non-trivial concept (rotation-equivariant self attention).
[Original review]
The authors propose a self-attention mechanism for rotation-equivariant neural nets. They show that introduction of this attention mechanisms improves classification performance over regular rotation-equivariant nets on a fully rotational dataset (rotated MNIST) and a regular non-rotational dataset (CIFAR-10).
Strengths:
+ States a clear hypothesis that is well motivated by Figs. 1 & 2
+ Appears to accomplish what it claims as contributions
+ Demonstrates a rotation-equivariant attention mechanism
+ Shows that its introduction improves performance on some tasks
Weaknesses:
- Unclear how the proposed attention mechanism accomplishes the goal outlined in Fig. 2d
- Performance of the authors' evaluations of the baselines is lower than reported in the original papers, casting some doubt on the performance evaluation
- The notation is somewhat confusing and cumbersome, making it hard to understand what exactly the authors are doing
- No visualisation or insights into the attention mechanism are provided
There are three main issues detailed below that I'd like to see addressed in the authors' response and/or a revised version of the paper. If the authors can address these concerns, I am willing to increase my score.
1. The motivation for the attention mechanism (as discussed in the introduction and illustrated in Fig. 2) seems to be to find patterns of features which commonly get activated together (or often co-occur in the training set). However, according to Eq. (9), attention is applied separately to orientations of the same feature ($A_i$ is indexed by i, the channel dimension), and not across different features. Since the attention is applied at each spatial location separately, such a mechanism only allows detecting patterns of relative orientations of the same feature appearing at the same spatial location. The motivation and utility of such a formulation are unclear, as it appears to be unable to solve the toy problem laid out in Fig. 2. Please clarify how the proposed mechanism would solve the toy example in Fig. 2.
2. The only real argument that the proposed mechanism is useful are the numbers in Table 1. However, the experimental results for CIFAR-10 are hard to compare to the baselines because of differences in reported and reproduced results. I would appreciate a clarification about the code used (was it published by the authors of other papers?) and discussion of why the relative improvement achieved by the proposed method is not an artefact of implementation or optimisation issues.
3. The exposition and notation in section 3.1 is very hard to follow and requires substantial improvement. For instance, the sections "Attention and self attention" and "Compact local self attention" seem to abstract from the specific case and use x and y, but it is unclear to me what x and y map to specifically. Maybe also provide a visualization of how exactly attention is applied.
Minor comments/questions:
- If the attention is applied over the orientations of the same feature, why does it improve the performance on Rotated MNIST (which is rotation invariant)?
- I assume the attention matrix $A_i$ is different for each layer, because the features in different layers are different and require different attention mechanisms. However, unlike F and K, A is not indexed by layer l.
- It would be good to provide the standard deviation for the reported results on CIFAR-10 to see if the improvement is significant. |
ICLR | Title
Co-Attentive Equivariant Neural Networks: Focusing Equivariance On Transformations Co-Occurring in Data
Abstract
Equivariance is a nice property to have as it produces much more parameter efficient neural architectures and preserves the structure of the input through the feature mapping. Even though some combinations of transformations might never appear (e.g., an upright face with a horizontal nose), current equivariant architectures consider the set of all possible transformations in a transformation group when learning feature representations. Contrarily, the human visual system is able to attend to the set of relevant transformations occurring in the environment and utilizes this information to assist and improve object recognition. Based on this observation, we modify conventional equivariant feature mappings such that they are able to attend to the set of co-occurring transformations in data and generalize this notion to act on groups consisting of multiple symmetries. We show that our proposed co-attentive equivariant neural networks consistently outperform conventional rotation equivariant and rotation & reflection equivariant neural networks on rotated MNIST and CIFAR-10.
1 INTRODUCTION
Thorough experimentation in the fields of psychology and neuroscience has provided support to the intuition that our visual perception and cognition systems are able to identify familiar objects despite modifications in size, location, background, viewpoint and lighting (Bruce & Humphreys, 1994). Interestingly, we are not just able to recognize such modified objects, but are able to characterize which modifications have been applied to them as well. As an example, when we see a picture of a cat, we are not just able to tell that there is a cat in it, but also its position, its size, facts about the lighting conditions of the picture, and so forth. Such observations suggest that the human visual system is equivariant to a large transformation group containing translation, rotation, scaling, among others. In other words, the mental representation obtained by seeing a transformed version of an object, is equivalent to that of seeing the original object and transforming it mentally next.
These fascinating abilities exhibited by biological visual systems have inspired a large field of research towards the development of neural architectures able to replicate them. Among these, the most popular and successful approach is the Convolutional Neural Network (CNN) (LeCun et al., 1989), which incorporates equivariance to translation via convolution. Unfortunately, in counterpart to the human visual system, CNNs do not exhibit equivariance to other transformations encountered in visual data (e.g., rotations). Interestingly, however, if an ordinary CNN happens to learn rotated copies of the same filter, the stack of feature maps becomes equivariant to rotations even though individual feature maps are not (Cohen & Welling, 2016). Since ordinary CNNs must learn such rotated copies independently, they effectively utilize an important number of network parameters suboptimally to this end (see Fig. 3 in Krizhevsky et al. (2012)). Based on the idea that equivariance in CNNs can be extended to larger transformation groups by stacking convolutional feature maps, several approaches have emerged to extend equivariance to, e.g., planar rotations (Dieleman et al., 2016; Marcos et al., 2017; Weiler et al., 2018; Li et al., 2018), spherical rotations (Cohen et al., 2018; Worrall & Brostow, 2018; Cohen et al., 2019), scaling (Marcos et al., 2018; Worrall & Welling, 2019) and general transformation groups (Cohen & Welling, 2016), such that transformed copies of a single entity are not required to be learned independently.
Although incorporating equivariance to arbitrary transformation groups is conceptually and theoretically similar1, evidence from real-world experiences motivating their integration might strongly differ. Several studies in neuroscience and psychology have shown that our visual system does not react equally to all transformations we encounter in visual data. Take, for instance, translation and rotation. Although we easily recognize objects independently of their position of appearance, a large corpus of experimental research has shown that this is not always the case for in-plane rotations. Yin (1969) showed that mono-oriented objects, i.e., complex objects such as faces which are customarily seen in one orientation, are much more difficult to be accurately recognized when presented upsidedown. This behaviour has been reproduced, among others, for magazine covers (Dallett et al., 1968), symbols (Henle, 1942) and even familiar faces (e.g., from classmates) (Brooks & Goldstein, 1963). Intriguingly, Schwarzer (2000) found that this effect exacerbates with age (adults suffer from this effect much more than children), but, adults are much faster and accurate in detecting mono-oriented objects in usual orientations. Based on these studies, we draw the following conclusions:
• The human visual system does not perform (fully) equivariant feature transformations to visual data. Consequently, it does not react equally to all possible input transformations encountered in visual data, even if they belong to the same transformation group (e.g., in-plane rotations).
• The human visual system does not just encode familiarity to objects but seems to learn through experience the poses in which these objects customarily appear in the environment to assist and improve object recognition (Freire et al., 2000; Riesenhuber et al., 2004; Sinha et al., 2006).
Complementary studies (Tarr & Pinker, 1989; Oliva & Torralba, 2007) suggest that our visual system encodes orientation atypicality relative to the context rather than on an absolute manner (Fig. 1). Motivated by the aforementioned observations we state the co-occurrence envelope hypothesis:
The Co-occurrence Envelope Hypothesis. By allowing equivariant feature mappings to detect transformations that co-occur in the data and focus learning on the set formed by these co-occurrent transformations (i.e., the co-occurrence envelope of the data), one is able to induce learning of more representative feature representations of the data, and, resultantly, enhance the descriptive power of neural networks utilizing them. We refer to one such feature mapping as co-attentive equivariant.
Identifying the co-occurrence envelope. Consider a rotation equivariant network receiving two copies of the same face (Fig. 2a). A conventional rotation equivariant network is required to perform inference and learning on the set of all possible orientations of the visual patterns constituting a face regardless of the input orientation (Fig. 2b). However, by virtue of its rotation equivariance, it is able to recognize rotated faces even if it is trained on upright faces only. A possible strategy to simplify the task at hand could be to restrict the network to react exclusively to upright faces (Fig. 2c). In this case, the set of relevant visual pattern orientations becomes much smaller, at the expense of disrupting equivariance to the rotation group. Resultantly, the network would risk becoming unable to detect faces in any other orientation than those it is trained on. A better strategy results from restricting the set of relevant pattern orientations by defining them relative to one another
1It is achieved by developing feature mappings that utilize the transformation group in the feature mapping itself (e.g., translating a filter in the course of a feature transformation is used to obtain translation equivariance).
(e.g., mouth orientation relative to the eyes) as opposed to absolutely (e.g., upright mouth) (Fig. 2d). In such a way, we are able to exploit information about orientation co-occurrences in the data without disrupting equivariance. The set of co-occurrent orientations in Fig. 2d corresponds to the co-occurrence envelope of the samples in Fig. 2a for the transformation group defined by rotations.
In this work, we introduce co-attentive equivariant feature mappings and apply them on existing equivariant neural architectures. To this end, we leverage the concept of attention (Bahdanau et al., 2014) to modify existing mathematical frameworks for equivariance, such that co-occurrent transformations can be detected. It is critical not to disrupt equivariance in the attention procedure as to preserve it across the entire network. To this end, we introduce cyclic equivariant self-attention, a novel attention mechanism able to preserve equivariance to cyclic groups.
Experiments and results. We explore the effects of co-attentive equivariant feature mappings for single and multiple symmetry groups. Specifically, we replace conventional rotation equivariant mappings in p4-CNNs (Cohen & Welling, 2016) and DRENs (Li et al., 2018) with co-attentive ones. We show that co-attentive rotation equivariant neural networks consistently outperform their conventional counterparts in fully (rotated MNIST) and partially (CIFAR-10) rotational settings. Subsequently, we generalize cyclic equivariant self-attention to multiple similarity groups and apply it on p4m-CNNs (Cohen & Welling, 2016) (equivariant to rotation and mirror reflections). Our results are in line with those obtained for single symmetry groups and support our stated hypothesis.
Contributions.
• We propose the co-occurrence envelope hypothesis and demonstrate that conventional equivariant mappings are consistently outperformed by our proposed co-attentive equivariant ones.
• We generalize co-attentive equivariant mappings to multiple symmetry groups and provide, to the best of our knowledge, the first attention mechanism acting generally on symmetry groups.
2 PRELIMINARIES
Equivariance. We say that a feature mapping f : X → Y is equivariant to a (transformation) group G (or G-equivariant) if it commutes with the actions of the group G acting on its domain and codomain:

f(T^X_g(x)) = T^Y_g(f(x)), ∀g ∈ G, x ∈ X (1)

where T^{(·)}_g denotes a group action in the corresponding space. In other words, the order in which we apply a group action T_g and the feature mapping f is inconsequential. There are multiple reasons why equivariant feature representations are advantageous for learning systems. Since group actions T^X_g produce predictable and interpretable transformations T^Y_g in the feature space, the hypothesis space of the model is reduced (Weiler et al. (2018)) and the learning process simplified (Worrall et al. (2017)). Moreover, equivariance allows the construction of L-layered networks by stacking several equivariant feature mappings {f^(1), ..., f^(l), ..., f^(L)} such that the input structure as regarded by the group G is preserved (e.g., CNNs and input translations). As a result, an arbitrary intermediate network representation (f^(l) ◦ ... ◦ f^(1))(x), l ∈ L, is able to take advantage of the structure of x as well. Invariance is a special case of equivariance in which T^Y_g = Id_Y, the identity, and thus all group actions in the input space are mapped to the same feature representation.
Equivariant neural networks. In neural networks, the integration of equivariance to arbitrary groups G has been achieved by developing feature mappings f that utilize the actions of the group G in the feature mapping itself. Interestingly, equivariant feature mappings encode equivariance as parameter sharing with respect to G, i.e., the same weights are reused for all g ∈ G. This makes the inclusion of larger groups extremely appealing in the context of parameter efficient networks.
Conventionally, the l-th layer of a neural network receives a signal x^(l)(u, λ) (where u ∈ Z^2 is the spatial position and λ ∈ Λ_l is the unstructured channel index, e.g., RGB channels in a color image), and applies a feature mapping f^(l) : Z^2 × Λ_l → Z^2 × Λ_{l+1} to generate the feature representation x^(l+1)(u, λ). In CNNs, the feature mapping f^(l) := f^(l)_T is defined by a convolution² (⋆_{R^2}) between the input signal x^(l) and a learnable convolutional filter W^(l)_{λ′,λ}, λ′ ∈ Λ_l, λ ∈ Λ_{l+1}:

x^(l+1)(u, λ) = [x^(l) ⋆_{R^2} W^(l)_{λ′,λ}](u, λ) = ∑_{λ′,u′} x^(l)(u + u′, λ′) W^(l)_{λ′,λ}(u′) (2)

By sliding W^(l)_{λ′,λ} across u, CNNs are able to preserve the spatial structure of the input x through the feature mapping f^(l)_T and successfully provide equivariance to the translation group T = (Z^2, +).
The underlying idea for the extension of equivariance to larger groups in CNNs is conceptually equivalent to the strategy utilized by LeCun et al. (1989) for translation equivariance. Consider, for instance, the inclusion of equivariance to the set of rotations by θ_r degrees³: Θ = {θ_r = r (2π / r_max)}_{r=1}^{r_max}. To this end, we modify the feature mapping f^(l) := f^(l)_R : Z^2 × Θ × Λ_l → Z^2 × Θ × Λ_{l+1} to include the rotations defined by Θ. Let x^(l)(u, r, λ) and W^(l)_{λ′,λ}(u, r) be the input and the convolutional filter with an affixed index r for rotation. The roto-translational convolution (⋆_{R^2⋊Θ}) f^(l)_R is defined as:

x^(l+1)(u, r, λ) = [x^(l) ⋆_{R^2⋊Θ} W^(l)_{λ′,λ}](u, r, λ) = ∑_{λ′,r′,u′} x^(l)(u + u′, r′, λ′) W^(l)_{λ′,λ}(θ_r u′, r′ − r) (3)

Since f^(l)_R produces (dim(Θ) = r_max) times more output feature maps than f^(l)_T, we need to learn much smaller convolutional filters W^(l)_{λ′,λ} to produce the same number of output feature channels.
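To make this operation concrete, the following is a minimal sketch of how a roto-translational convolution of the form of Eq. 3 can be realized for r_max = 4 (90° rotations), assuming PyTorch. Tensor layouts, padding, and the sign conventions of the rotation and orientation shift are illustrative choices, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def roto_conv(x, weight, rmax=4):
    """Minimal sketch of the roto-translational convolution in Eq. 3 for rmax = 4
    (90-degree rotations). Shapes and sign conventions are illustrative only.

    x:      (B, C_in, rmax, H, W)     input with an explicit orientation axis r'
    weight: (C_out, C_in, rmax, k, k) learnable filter W_{lambda', lambda}(u', r')
    returns (B, C_out, rmax, H, W)    one oriented response per output rotation r
    """
    B, C_in, _, H, W = x.shape
    C_out, _, _, k, _ = weight.shape
    x_flat = x.reshape(B, C_in * rmax, H, W)
    responses = []
    for r in range(rmax):
        # theta_r u': rotate the spatial support of the filter by r * 90 degrees.
        w_r = torch.rot90(weight, k=r, dims=(3, 4))
        # r' - r: cyclically shift the filter's orientation axis by r positions.
        w_r = torch.roll(w_r, shifts=r, dims=2)
        w_r = w_r.reshape(C_out, C_in * rmax, k, k)
        responses.append(F.conv2d(x_flat, w_r, padding=k // 2))
    # Stack the rmax oriented responses along a new rotation axis.
    return torch.stack(responses, dim=2)
```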
Learning equivariant neural networks. Consider the change of variables g = u, G = Z^2, g ∈ G and g = (u, r), G = Z^2 ⋊ Θ, g ∈ G in Eq. 2 and Eq. 3, respectively. In general, neural networks are learned via backpropagation (LeCun et al. (1989)) by iteratively applying the chain rule of differentiation to update the network parameters. Intuitively, the networks outlined in Eq. 2 and Eq. 3 obtain feedback from all g ∈ G and, resultantly, are inclined to learn feature representations that perform optimally on the entire group G. However, as outlined in Fig. 2 and Section 1, several of those feature combinations are not likely to appear simultaneously. Resultantly, the hypothesis space of the model as defined by Weiler et al. (2018) might be further reduced.
Note that this reasoning is tightly related to existing explanations for the large success of spatial (Xu et al., 2015; Woo et al., 2018; Zhang et al., 2018) and temporal (Luong et al., 2015; Vaswani et al., 2017; Mishra et al., 2017; Zhang et al., 2018) attention in deep learning architectures.
3 CO-ATTENTIVE EQUIVARIANT NEURAL NETWORKS
In this section we define co-attentive feature mappings and apply them in the context of equivariant neural networks (Figure 3). To this end, we introduce cyclic equivariant self-attention and utilize it to construct co-attentive rotation equivariant neural networks. Subsequently, we show that cyclic equivariant self-attention is extendable to larger symmetry groups and make use of this fact to construct co-attentive neural networks equivariant to rotations and mirror reflections.
²Formally it is a correlation. However, we hold on to the standard deep learning terminology.
³The reader may easily verify that Θ (and hence Z^2 ⋊ Θ, with (⋊) the semi-direct product) forms a group.
3.1 CO-ATTENTIVE ROTATION EQUIVARIANT NEURAL NETWORKS
To allow rotation equivariant networks to utilize and learn co-attentive equivariant representations, we introduce an attention operator A^(l) on top of the roto-translational convolution f^(l)_R with which discernment along the rotation axis r of the generated feature responses x^(l)(u, r, λ) is possible. Formally, our co-attentive rotation equivariant feature mapping f̄^(l)_R is defined as follows:

x^(l+1) = f̄^(l)_R(x^(l)) = A^(l)(f^(l)_R(x^(l))) = A^(l)([x^(l) ⋆_{R^2⋊Θ} W^(l)_{λ′,λ}]) (4)

Theoretically, A^(l) could be defined globally over f^(l)_R(x^(l)) (i.e., simultaneously along u, r, λ) as depicted in Eq. 4. However, we apply attention locally to: (1) grant the algorithm enough flexibility to attend locally to the co-occurrence envelope of feature representations and, (2) utilize attention exclusively along the rotation axis r, such that our contributions are clearly separated from those possibly emerging from spatial attention. To this end, we apply attention pixel-wise on top of f^(l)_R(x^(l)) (Eq. 5). Furthermore, we assign a single attention instance A^(l)_λ to each learned feature representation and utilize it across the spatial dimension of the output feature maps⁴:

x^(l+1)(u, r, λ) = A^(l)_λ({x^(l+1)(u, r̂, λ)}_{r̂=1}^{r_max})(r) (5)
Attention and self-attention. Consider a source vector x = (x_1, ..., x_n) and a target vector y = (y_1, ..., y_m). In general, an attention operator A leverages information from the source vector x (or multiple feature mappings thereof) to estimate an attention matrix A ∈ [0, 1]^{n×m}, such that: (1) the element A_{i,j} provides an importance assessment of the source element x_i with reference to the target element y_j and (2) the sum of importance over all x_i is equal to one: ∑_i A_{i,j} = 1. Subsequently, the matrix A is utilized to modulate the original source vector x so as to attend to a subset of relevant source positions with regard to y_j: x̃_j = (A_{:,j})^T ⊙ x (where ⊙ is the Hadamard product). A special case of attention is that of self-attention (Cheng et al., 2016), in which the target and the source vectors are equal (y := x). In other words, the attention mechanism estimates the influence of the sequence x on the element x_j for its weighting.
4For a more meticulous discussion on how Eq. 5 attains co-occurrent attention, see Appendix A.
In general, the attention matrix5 A ∈ [0, 1]n×m is constructed via nonlinear space transformations fà : Rn → Rn×m of the source vector x, on top of which the softmax function is applied: A:,j = softmax(fÃ(x):,j). This ensures that the properties previously mentioned hold. Typically, the mappings fà found in literature take feature transformation pairs of x as input (e.g., {s,H} in RNNs (Luong et al., 2015), {Q,K} in self-attention networks (Vaswani et al., 2017)), and perform (non)-linear mappings on top of it, ranging from multiple feed-forward layers (Bahdanau et al., 2014) to several operations between the transformed pairs (Luong et al., 2015; Vaswani et al., 2017; Mishra et al., 2017; Zhang et al., 2018). Due to the computational complexity of these approaches and the fact that we do extensive pixel-wise usage of fà on every network layer, their direct integration in our framework is computationally prohibitive. To circumvent this problem, we modify the usual self-attention formulation as to enhance its descriptive power in a much more compact setting.
Compact local self-attention. Initially, we relax the range of values of A from [0, 1]^{n×n} to R^{n×n}. This allows us to encode much richer relationships between element pairs (x_i, x_j) at the cost of less interpretability. Subsequently, we define A = x^T Ã, where à ∈ R^{n×n} is a matrix of learnable parameters. Furthermore, instead of directly applying softmax on the columns of A, we first sum over the contributions of each element x_i to obtain a vector a = {∑_i A_{i,j}}_{j=1}^{n}, which is then passed to the softmax function. Following Vaswani et al. (2017), we prevent the softmax function from reaching regions of low gradient by scaling its argument by (√dim(A))^{-1} = (1/n): ã = softmax((1/n) a). Lastly, we counteract the contractive behaviour of the softmax function by normalizing ã before weighting x as to preserve the magnitude range of its argument. This allows us to use A in deep architectures. Our compact self-attention mechanism is summarized as follows:

a = {∑_i A_{i,j}}_{j=1}^{n} = {∑_i (x^T Ã)_{i,j}}_{j=1}^{n} = x à (6)
ã = softmax((1/n) a) (7)
x̂ = A(x) = (ã / max(ã)) ⊙ x (8)

The cyclic equivariant self-attention operator A^C. Consider {x(u, r, λ)}_{r=1}^{r_max}, the vector of responses generated by a roto-translational convolution f_R stacked along the rotation axis r. By applying self-attention along r, we are able to generate an importance matrix A ∈ R^{r_max×r_max} relating all pairs of (θ_i, θ_j)-rotated responses in the rotational group Θ at a certain position. We refer to this attention mechanism as full self-attention (A^F). Although A^F is able to encode arbitrary linear source-target relationships for each target position, it is not restricted to conserve equivariance to Θ. Resultantly, we risk incurring into the behavior outlined in Fig. 2c. Before we further elaborate on this issue, we introduce the cyclic permutation operator P_i, which induces a cyclic shift of i positions on its argument: σ_{P_i}(x_j) = x_{(j+i) mod dim(x)}, ∀x_j ∈ x.
Consider a full self-attention operator A^F acting on top of a roto-translational convolution f_R. Let p be an input pattern for which f_R only produces a strong activation in the feature map x(r̂) = f_R(p)(r̂), r̂ ∈ {r}_{r=1}^{r_max}. Intuitively, during learning, only the corresponding attention coefficients Ã_{:,r̂} in A^F would be significantly increased. Now, consider the presence of the input pattern θ_i p, a θ_i-rotated variant of p. By virtue of the rotational equivariance property of the feature mapping f_R, we obtain (locally) an exactly equal response to that of p up to a cyclic permutation of i positions on r, and thus, we obtain a strong activation in the feature map P_i(x(r̂)) = x(σ_{P_i}(r̂)). We encounter two problems in this setting: A^F is not able to detect that p and θ_i p correspond to the exact same input pattern and, as all attention coefficients except Ã_{:,r̂} are small, the network might considerably damp the response generated by θ_i p. As a result, the network might (1) squander important feedback information during learning and (2) induce learning of repeated versions of the same pattern for different orientations. In other words, A^F does not behave equivariantly as a function of θ_i. Interestingly, we are able to introduce prior knowledge into the attention model by restricting the structure of Ã. By leveraging the idea of equivariance to the cyclic group C_n, we are able to solve the problems exhibited by A^F and simultaneously reduce the number of additional parameters required by the self-attention mechanism (from r_max² to r_max). Consider again the input patterns p and θ_i p. We incorporate the intuition that p and θ_i p are one and the same entity, and thus, f_R (locally) generates the same output feature map up to a cyclic permutation P_i: f_R(θ_i p) = P_i(f_R(p)). Consequently, the attention mechanism should produce the exact same output for both p and θ_i p up to the same cyclic permutation P_i. In other words, A (and thus Ã) should be equivariant to cyclic permutations.
⁵Technically, each column of A is restricted to a simplex and hence A lives in a subspace of [0, 1]^{n×m}.
A well-known fact in mathematics is that a matrix A is equivariant with respect to cyclic permutations of the domain if and only if it is circulant (Alkarni, 2001; Åhlander & Munthe-Kaas, 2005). We make use of this certitude and leverage the concept of circulant matrices to impose cyclic equivariance on the structure of Ã. Formally, a circulant matrix C ∈ R^{n×n} is composed of n cyclic permutations of its defining vector c = {c_i}_{i=1}^{n}, such that its j-th column is a cyclic permutation of j − 1 positions of c: C_{:,j} = P_{j−1}(c)^T. We construct our cyclic equivariant self-attention operator A^C by defining à as a circulant matrix specified by a learnable attention vector a^C = {a^C_i}_{i=1}^{r_max}:

à = {P_{j−1}(a^C)^T}_{j=1}^{n} (9)
and subsequently applying Eqs. 6 - 8. Resultantly, A^C is able to assign the responses generated by f_R for rotated versions of an input pattern p to a unique entity: f_R(θ_i p) = P_i(f_R(p)), and dynamically adjust its output to the angle of appearance θ_i, such that the attention operation does not disrupt its propagation downstream in the network: A^C(f_R(θ_i p)) = P_i(A^C(f_R(p))). Consequently, the attention weights a^C are updated equally regardless of the specific value of θ_i. Due to these properties, A^C does not incur any of the problems outlined earlier in this section. Conclusively, our co-attentive rotation equivariant feature mapping f̄^(l)_R is defined as follows:

x^(l+1)(u, r, λ) = f̄^(l)_R(x^(l))(u, r, λ) = A^{C(l)}_λ([x^(l) ⋆_{R^2⋊Θ} W^(l)_{λ′,λ}])(u, r, λ) (10)

Note that a co-attentive equivariant feature mapping f̄_R is approximately equal (up to a normalized softmax operation (Eq. 8)) to a conventional equivariant one f_R, if Ã = αI for any α ∈ R.
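As an illustration of Eqs. 6-9, the following NumPy sketch applies the compact cyclic equivariant self-attention to a single vector of r_max oriented responses and numerically checks the equivariance property A^C(P_i x) = P_i(A^C(x)). Function and variable names are ours; this is a sketch, not the authors' implementation.

```python
import numpy as np

def cyclic_self_attention(x, a_c):
    """Sketch of the cyclic equivariant self-attention A^C (Eqs. 6-9) applied to a
    single vector x of rmax oriented responses; a_c is the learnable vector that
    defines the circulant matrix A~.
    """
    n = x.shape[0]
    # Eq. 9: circulant A~ whose j-th column is a cyclic shift of a_c by j positions.
    A_tilde = np.stack([np.roll(a_c, j) for j in range(n)], axis=1)
    # Eq. 6: the column sums of A = x^T A~ collapse to the vector a = x A~.
    a = x @ A_tilde
    # Eq. 7: softmax of the (1/n)-scaled scores (shifted by the max for stability).
    e = np.exp(a / n - np.max(a / n))
    a_soft = e / e.sum()
    # Eq. 8: renormalize before modulating x, preserving its magnitude range.
    return (a_soft / a_soft.max()) * x

# Cyclic equivariance check: attending to a rotated response equals rotating the
# attended response, A^C(P_i x) = P_i A^C(x).
rng = np.random.default_rng(0)
x, a_c = rng.normal(size=8), rng.normal(size=8)
assert np.allclose(cyclic_self_attention(np.roll(x, 3), a_c),
                   np.roll(cyclic_self_attention(x, 3 * 0 + a_c), 3))
```

Here the circulant structure is what makes the final assertion hold: an arbitrary (full) attention matrix would generally break it.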
3.2 EXTENDING AC TO MULTIPLE SYMMETRY GROUPS
The self-attention mechanisms outlined in the previous section are easily extendable to larger groups consisting of multiple symmetries. Consider, for instance, the group θrm of rotations by θr degrees and mirror reflections m defined analogously to the group p4m in Cohen & Welling (2016). Let p(u, r,m, λ) be an input signal with an affixed index m ∈ {m0,m1} for mirror reflections (m1 indicates mirrored) and fθrm be a group convolution (Cohen & Welling, 2016) on the θrm group. The group convolution fθrm produces two times as many output channels (2rmax : m0rmax+m1rmax) as those generated by the roto-translational convolution fR (Eq. 3, Fig. 3).
Full self-attention A^F can be integrated directly by modulating the output of f_{θrm} as depicted in Sec. 3.1 with Ã ∈ R^{2r_max×2r_max}. Here, A^F relates each of the group convolution responses with one another. However, just as for f_R, A^F disrupts the equivariance property of f_{θrm} to the θrm group. Similarly, the cyclic equivariant self-attention operator A^C can be extended to multiple symmetry groups as well. Before we continue, we introduce the cyclic permutation operator P_{i,t}, which induces a cyclic shift of i positions on its argument along the transformation axis t. Consider the input patterns p and θ_i p outlined in the previous section and mp, a mirrored instance of p. Let x(u, r, m, λ) = f_{θrm}(p)(u, r, m, λ) be the response of the group convolution f_{θrm} for the input pattern p. By virtue of the rotation equivariance property of f_{θrm}, the generated response for θ_i p is equivalent to that of p up to a cyclic permutation of i positions along the rotation axis r: f_{θrm}(θ_i p)(u, r, m, λ) = P_{i,r}(f_{θrm}(p))(u, r, m, λ) = x(u, σ_{P_i}(r), m, λ). Similarly, by virtue of the mirror equivariance property of f_{θrm}, the response generated by mp is equivalent to that of p up to a cyclic permutation of one position along the mirroring axis m: f_{θrm}(mp)(u, r, m, λ) = P_{1,m}(f_{θrm}(p))(u, r, m, λ) = x(u, r, σ_{P_1}(m), λ). Note that if we take two elements from a group g, h, their composition (gh) is also an element of the group. Resultantly, f_{θrm}((mθ_i)p)(u, r, m, λ) = (P_{1,m} ◦ P_{i,r})(f_{θrm}(p))(u, r, m, λ) = P_{1,m}(P_{i,r}(x))(u, r, m, λ) = P_{1,m}(x)(u, σ_{P_i}(r), m, λ) = x(u, σ_{P_i}(r), σ_{P_1}(m), λ). In other words, in order to extend A^C to the θrm group, it is necessary to restrict the structure of Ã such that it respects the permutation laws imposed by the equivariant mapping f_{θrm}. Let us rewrite x(u, r, m, λ) as x(u, g, λ), g = (mr) ∈ {m_0, m_1} × {r̂}_{r̂=1}^{r_max}. In this case, we must impose a circulant block matrix structure on Ã such that: (1) the composing blocks permute internally as defined by P_{i,r} and (2) the blocks themselves permute with one another as defined by P_{1,m}. Formally, Ã is defined as:

Ã = [ Ã_1  Ã_2
      Ã_2  Ã_1 ] (11)
where {Ãi ∈ Rrmax×rmax}, i ∈ {1, 2} are circulant matrices (Eq. 9). Importantly, the ordering of the permutation laws in à is interchangeable if the input vector is modified accordingly, i.e., g = (rm).
Conclusively, cyclic equivariant self-attention A^C is directly extendable to act on any G-equivariant feature mapping f_G, and for any symmetry group G, if the group actions T^Y_g produce cyclic permutations on the codomain of f_G. To this end, one must restrict the structure of Ã to that of a circulant block matrix, such that all permutation laws of T^Y_g hold: T^Y_g(A^C(f_G)) = A^C(T^Y_g(f_G)), ∀g ∈ G.
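A short sketch of the block-circulant construction of Eq. 11 is given below, assuming NumPy; the resulting Ã is then used exactly as in Eqs. 6-8 on the 2·r_max stacked responses. Names and shapes are illustrative.

```python
import numpy as np

def block_circulant_A(a1, a2):
    """Sketch of Eq. 11: a block-circulant A~ for the rotation-and-mirror group,
    built from two circulant blocks A~1, A~2 defined by learnable vectors a1, a2
    (each of length rmax). The blocks swap places under the mirror permutation
    P_{1,m}, while each block is circulant and therefore equivariant to the
    rotation permutations P_{i,r}.
    """
    def circulant(a):
        n = a.shape[0]
        return np.stack([np.roll(a, j) for j in range(n)], axis=1)

    A1, A2 = circulant(a1), circulant(a2)
    return np.block([[A1, A2],
                     [A2, A1]])   # shape (2 * rmax, 2 * rmax)
```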
4 EXPERIMENTS
Experimental Setup. We validate our approach by exploring the effects of co-attentive equivariant feature mappings for single and multiple symmetry groups on existing equivariant architectures. Specifically, we replace conventional rotation equivariant mappings in p4-CNNs (Cohen & Welling, 2016) and DRENs (Li et al., 2018) with co-attentive equivariant ones and evaluate their effects in fully (rotated MNIST) and partially (CIFAR-10) rotational settings. Similarly, we evaluate co-attentive equivariant maps acting on multiple similarity groups by replacing equivariant mappings in p4m-CNNs (Cohen & Welling, 2016) (equivariant to rotation and mirror reflections) likewise. Unless otherwise specified, we replicate as closely as possible the same data processing, initialization strategies, hyperparameter values and evaluation strategies utilized by the baselines in our experiments. Note that the goal of this paper is to study and evaluate the relative effects obtained by co-attentive equivariant networks with regard to their conventional counterparts. Accordingly, we do not perform any additional tuning relative to the baselines. We believe that improvements on our reported results are feasible by performing further parameter tuning (e.g., on the network structure or the used hyperparameters) on the proposed co-attentive equivariant networks.
The additional learnable parameters, i.e., those associated to the cyclic self-attention operator (Ã) are initialized identically to the rest of the layer. Subsequently, we replace the values of à along the diagonal by 1 (i.e., diag(Ãinit) = 1) such that Ãinit approximately resembles the identity I and, hence, co-attentive equivariant layers are initially approximately equal to equivariant ones.
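For concreteness, one possible reading of this initialization in code is shown below, assuming the circulant parameterization of Eq. 9; the random-initializer scale is a placeholder standing in for "initialized identically to the rest of the layer".

```python
import numpy as np

# Illustrative initialization of the attention vector a_c that defines A~ (Eq. 9).
rmax = 4
a_c = 0.01 * np.random.randn(rmax)  # placeholder for the layer's usual initializer
a_c[0] = 1.0                        # diag(A~_init) = 1: the circulant's diagonal is a_c[0],
                                    # so A~ initially resembles the identity I
```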
Rotated MNIST. The rotated MNIST dataset (Larochelle et al., 2007) contains 62000 gray-scale 28x28 handwritten digits uniformly rotated on the entire circle [0, 2π). The dataset is split into training, validation and tests sets of 10000, 2000 and 50000 samples, respectively. We replace rotation equivariant layers in p4-CNN (Cohen & Welling, 2016), DREN and DRENMaxPooling (Li et al., 2018) with co-attentive ones. Our results show that co-attentive equivariant networks consistently outperform conventional ones (see Table 1).
CIFAR-10. The CIFAR-10 dataset (Krizhevsky et al., 2009) consists of 60000 real-world 32x32 RGB images uniformly drawn from 10 classes. Contrarily to the rotated MNIST dataset, this dataset does not exhibit rotation symmetry. The dataset is split into training, validation and tests sets of 40000, 10000 and 10000 samples, respectively. We replace equivariant layers in the p4 and p4m variations of the All-CNN (Springenberg et al., 2014) and the ResNet44 (He et al., 2016) proposed by Cohen & Welling (2016) with co-attentive ones. Likewise, we modify the r x4-variations of the NIN (Lin et al., 2013) and ResNet20 (He et al., 2016) models proposed by Li et al. (2018) in the same manner. Our results show that co-attentive equivariant networks consistently outperform conventional ones in this setting as well (see Table 1).
Training convergence of equivariant networks. Li et al. (2018) reported that adding too many rotational equivariant (isotonic) layers decreased the performance of their models on CIFAR-10. As a consequence, they did not report results on fully rotational equivariant networks for this setting and attributed this behaviour to the non-symmetricity of the data. We noticed that, with equal initialization strategies, rotational equivariant CNNs were much more prone to divergence than ordinary CNNs. This behaviour can be traced back to the additional feedback resulting from roto-translational convolutions (Eq. 3) compared to ordinary ones (Eq. 2). After further analysis, we noticed that the data preprocessing strategy utilized by Li et al. (2018) leaves some very large outlier values in the data (|x| >100), which strongly contribute to the behaviour outlined before. In order to evaluate the relative contribution of co-attentive equivariant neural networks we constructed fully equivariant DREN architectures based on their implementation. Although the obtained results were much worse than those originally reported in Li et al. (2018), we were able to stabilize
training by clipping input values to the 99th percentile of the data (|x| ≤ 2.3) and reducing the learning rate to 0.01, such that the same hyperparameters could be used across all network types. The obtained results (see Table 1) indicate that DREN networks are comparatively better than CNNs both in fully and partially rotational settings, in contradiction to the conclusions drawn in Li et al. (2018).
This behaviour elucidates that although the inclusion of equivariance to larger transformation groups is beneficial both in terms of accuracy and parameter efficiency, one must be aware that such benefits are directly associated with an increase of the network susceptibility to divergence during training.
5 DISCUSSION AND FUTURE WORK
Our results show that co-attentive equivariant feature mappings can be utilized to enhance conventional equivariant ones. Interestingly, co-attentive equivariant mappings are beneficial both in partially and fully rotational settings. We attribute this to the fact that a set of co-occurring orientations between patterns can be easily defined (and exploited) in both settings. It is important to note that we utilized attention independently over each spatial position u on the codomain of the corresponding group convolution. Resultantly, we were restricted to mappings of the form xA, which, in turn, constrained our attention mechanism to have a circulant structure in order to preserve equivariance (since group actions acting in the codomain of the group convolution involve cyclic permutations and cyclic self-attention is applied in the codomain of the group convolution).
In future work, we want to extend the idea presented here to act on the entire group simultaneously (i.e., along u as well). By doing so, we lift our current restriction to mappings of the form xA and therefore, may be able to develop attention instances with enhanced descriptive power. Following the same line of thought, we want to explore incorporating attention in the convolution operation itself. Resultantly, one is not restricted to act exclusively on the codomain of the convolution, but instead, is able to impose structure in the domain of the mapping as well. Naturally, such an approach could lead to enhanced descriptiveness of the incorporated attention mechanism. Moreover, we want to utilize and extend more complex attention strategies (e.g., Bahdanau et al. (2014); Luong et al. (2015); Vaswani et al. (2017); Mishra et al. (2017)) such that they can be applied to large transformation groups without disrupting equivariance. As outlined earlier in Section 3.1, this becomes very challenging from a computational perspective as well, as it requires extensive usage of the corresponding attention mechanism. Resultantly, an efficient implementation thereof is mandatory. Furthermore, we want to extend co-attentive equivariant feature mappings to continuous (e.g., Worrall et al. (2017)) and 3D space (e.g., Cohen et al. (2018); Worrall & Brostow (2018); Cohen et al. (2019)) groups, and for applications other than visual data (e.g., speech recognition).
Finally, we believe that our approach could be refined and extended into a first step towards dealing with the enumeration problem of large groups (Gens & Domingos, 2014), such that functions acting on the group (e.g., group convolution) are approximated by evaluating them on the set of co-occurring transformations as opposed to on the entire group. Such approximations are expected to be very accurate, as non-co-occurrent transformations are rare. This could be thought of as sharpening up co-occurrent attention into co-occurrent restriction.
6 CONCLUSION
We have introduced the concept of co-attentive equivariant feature mappings and applied it in the context of equivariant neural networks. By attending to the co-occurrence envelope of the data, we are able to improve the performance of conventional equivariant networks in fully (rotated MNIST) and partially (CIFAR-10) rotational settings. We developed cyclic equivariant self-attention, an attention mechanism able to attend to the co-occurrence envelope of the data without disrupting equivariance to a large set of transformation groups (i.e., all transformation groups G whose actions in the codomain of a G-equivariant feature mapping produce cyclic permutations). Our obtained results support the proposed co-occurrence envelope hypothesis.
ACKNOWLEDGMENTS
We gratefully acknowledge Jan Klein, Emile van Krieken, Jakub Tomczak and our anonymous reviewers for their helpful and valuable commentaries. This work is part of the Efficient Deep Learning (EDL) programme (grant number P16-25), which is partly funded by the Dutch Research Council (NWO) and Semiotic Labs.
A OBTAINING CO-OCCURRENT ATTENTION VIA EQUATION 5
In this section, we provide a meticulous description on how co-occurrent attention is obtained via the method presented in the paper. Intuitively, a direct approach to address the problem illustrated in the introduction (Section 1) and Figure 2 requires an attention mechanism that acts simultaneously on r and λ (see Eq. 3). However, we illustrate how the depicted problem can be simplified such that attention along r is sufficient by taking advantage of the equivariance property of the network.
Let p be the input of a roto-translational convolution f_R : Z^2 × Θ × Λ_0 → Z^2 × Θ × Λ_1 as defined in Eq. 3, and Θ be the set of rotations by θ_r degrees: Θ = {θ_r = r (2π / r_max)}_{r=1}^{r_max}. Let f_R(p)(u) ∈ R^{r_max×Λ_1} be the matrix consisting of the r_max oriented responses for each learned representation λ ∈ Λ_1 at a certain position u. Since the vectors f_R(p)(u, λ) ∈ R^{r_max}, λ ∈ Λ_1, permute cyclically as a result of the rotation equivariance property of f_R, it is mandatory to ensure equivariance to cyclic permutations for each f_R(p)(u, λ) during the course of the attention procedure (see Section 3).
At first sight, one is inclined to think that there is no connection between the multiple vectors f_R(p)(u, λ) in f_R(p)(u), and, therefore, that in order to exploit co-occurrences, one must impose additional constraints along the λ axis. However, there is indeed an implicit restriction in f_R(p)(u) along λ resulting from the rotation equivariance property of the mapping f_R, which we can take advantage of to simplify the problem at hand. Consider, for instance, the input θ_i p, a θ_i-rotated version of p. By virtue of the equivariance property of f_R, we have (locally) that f_R(θ_i p) = P_i(f_R(p)). Furthermore, we know that this property must hold for all the learned feature representations f_R(p)(u, λ), ∀λ ∈ Λ_1. Resultantly, we have that:
f_R(θ_i p)(u, r, λ) = P_i(f_R(p)(u, r, λ)), ∀λ ∈ Λ_1 (12)

In other words, if one of the learned mappings f_R(p)(u, r, λ) experiences a permutation P_i along r, all the learned representations f_R(p)(u, r, λ), ∀λ ∈ Λ_1, must experience the exact same permutation P_i as well. Resultantly, the equivariance property of the mapping f_R ensures that all the Λ_1 learned feature representations f_R(p)(u, λ) “move synchronously” as a function of the input rotation θ_i.
Likewise, if we apply a cyclic equivariant attention mechanism ACλ independently on top of each λ learned representation fR(p)(u, λ), we obtain that the relation
A^C_λ(f_R(θ_i p))(u, r, λ) = P_i(A^C_λ(f_R(p))(u, r, λ)), ∀λ ∈ Λ_1 (13)

must hold as well. Similarly to the case illustrated in Eq. 12, and given that A^C_λ is equivariant to cyclic permutations on its domain, we obtain that all the Λ_1 learned attention masks A^C_λ “move synchronously” as a function of the input rotation θ_i as well (see Fig. 4).
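The following NumPy sketch illustrates Eqs. 12-13 numerically: a rotation of the input permutes every channel's oriented responses by the same P_i, and applying the cyclic attention independently per channel commutes with that permutation. The operator is the compact cyclic attention of Section 3.1, repeated here so the snippet is self-contained; all names and sizes are illustrative.

```python
import numpy as np

def a_c_attention(x, a_c):
    """Per-channel cyclic equivariant attention (the operator of Eqs. 6-9)."""
    n = x.shape[0]
    A = np.stack([np.roll(a_c, j) for j in range(n)], axis=1)  # circulant A~, Eq. 9
    s = x @ A                                                   # Eq. 6
    e = np.exp(s / n - np.max(s / n))
    s = e / e.sum()                                             # Eq. 7
    return (s / s.max()) * x                                    # Eq. 8

# Numerical check of Eq. 13 at one spatial position u.
rng = np.random.default_rng(1)
rmax, n_channels, i = 8, 3, 5
f_p = rng.normal(size=(rmax, n_channels))        # f_R(p)(u, ., .)
f_theta_p = np.roll(f_p, i, axis=0)              # f_R(theta_i p) = P_i(f_R(p)), Eq. 12
a_c = rng.normal(size=(n_channels, rmax))        # one attention vector per channel

def attended(f):
    return np.stack([a_c_attention(f[:, c], a_c[c]) for c in range(n_channels)], axis=1)

assert np.allclose(attended(f_theta_p), np.roll(attended(f_p), i, axis=0))  # Eq. 13
```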
From Eq. 13 and Figure 4, one can clearly see that by utilizing A^C_λ independently along r, and taking advantage of the fact that all Λ_1 learned feature representations are tied with one another via f_R, one is able to prioritize learning of feature representations that co-occur together, as opposed to the much looser formulation in Eq. 12, where feedback is obtained from all orientations. | 1. What is the focus of the paper regarding attention in equivariant image classification CNNs?
2. What are the strengths of the proposed approach, particularly in its implementation and evaluation?
3. What are the weaknesses of the paper, especially regarding its notation, mathematicization, and length?
4. How can the ideas presented in the paper be made clearer and more concise?
5. What is the main additional hyperparameter in the algorithm, and how was it determined?
6. What is the significance of the approach's applicability to various transformations beyond rotation and mirroring? | Review | Review
This paper describes an approach to applying attention in equivariant image classification CNNs so that the same transformation (rotation+mirroring) is selected for each kernel. For example, if the image is of an upright face, the upright eyes will be selected along with the upright nose, as opposed to allowing the rotation of each to be independent. Applying this approach to several different models on rotated MNIST and CIFAR-10 lead to smaller test errors in all cases.
Overall, this is a good idea that appears to be well implemented and well evaluated. It includes an extensive and detailed bibliography of relevant work. The approach seems to be widely applicable. It could be applied to any deep learning-based image classification system. It can be applied to additional transformations beyond rotation and mirroring.
The one shortcoming of the paper is that it takes a simple idea and makes it somewhat difficult to follow through cumbersome notation and over-mathematization. The ideas presented would be much clearer as an algorithm or a more code-like representation, as opposed to equations. Even verbal descriptions could suffice. The paper is also relatively long, going onto the 10th page. In order to save space, some of the mathematical exposition can be condensed.
In addition, as another issue with clarity, the algorithm has one main additional hyperparameter, r_max, but the description of the experiments does not appear to mention the value of this hyperparameter. It also states that the rotated MNIST dataset is rotated on the entire circle, but not how many fractions of the circle are allowed, which is equivalent to r_max. |
ICLR | Title
Inter-BMV: Interpolation with Block Motion Vectors for Fast Semantic Segmentation on Video
Abstract
Models optimized for accuracy on single images are often prohibitively slow to run on each frame in a video. Recent work exploits the use of optical flow to warp image features forward from select keyframes, as a means to conserve computation on video. This approach, however, achieves only limited speedup, even when optimized, due to the accuracy degradation introduced by repeated forward warping, and the inference cost of optical flow estimation. To address these problems, we propose a new scheme that propagates features using the block motion vectors (BMV) present in compressed video (e.g. H.264 codecs), instead of optical flow, and bi-directionally warps and fuses features from enclosing keyframes to capture scene context on each video frame. Our technique, interpolation-BMV, enables us to accurately estimate the features of intermediate frames, while keeping inference costs low. We evaluate our system on the CamVid and Cityscapes datasets, comparing to both a strong single-frame baseline and related work. We find that we are able to substantially accelerate segmentation on video, achieving near real-time frame rates (20+ frames per second) on large images (e.g. 960×720 pixels), while maintaining competitive accuracy. This represents an improvement of almost 6× over the single-frame baseline and 2.5× over the fastest prior work.
1 INTRODUCTION
Semantic segmentation, the task of assigning each pixel in an image to a semantic object class, is a problem of long-standing interest in computer vision. Like models for other image recognition tasks (e.g. classification, detection, instance segmentation), semantic segmentation networks have grown drastically in both layer depth and parameter count in recent years, in the race to segment more complex images, from larger, more realistic datasets, at higher accuracy. As a result, state-of-the-art segmentation networks today require between 0.5 to 3.0 seconds to segment a single, high-resolution image (e.g. 2048× 1024 pixels) at competitive accuracy (Zhu et al. (2017); Gadde et al. (2017)). Meanwhile, a new target data format for segmentation has emerged: video. The motivating use cases include both batch applications, where video is segmented in bulk to generate training data for other models (e.g. autonomous control systems), and streaming settings, where high-throughput video segmentation enables interactive analysis of live footage (e.g. at surveillance sites). Video in these contexts consists of long image sequences, shot at high frame rates (e.g. 30 fps) in complex environments (e.g. urban cityscapes) on modern, high-definition cameras. Segmenting individual frames at high accuracy still calls for the use of competitive image models, but their inference cost precludes their naı̈ve deployment on every frame in a raw multi-hour video stream.
A defining characteristic of realistic video is its high level of temporal continuity. Consecutive frames demonstrate significant spatial similarity, which suggests the potential to reuse computation across frames. Building on prior work, we exploit two observations: 1) higher-level features evolve more slowly than raw pixel content in video, and 2) feature computation tends to be much more expensive than task-specific computation across a range of vision tasks (e.g. detection, segmentation) (Shelhamer et al. (2016); Zhu et al. (2017)). Accordingly, we divide our semantic segmentation model into a deep feature network and a cheap, shallow task network (Zhu et al. (2017)). We compute features only on designated keyframes, and propagate them to intermediate frames, by warping the feature maps with frame-to-frame motion estimates. The task network is executed on all frames.
[Figure 1: Overview of the proposed scheme. The feature network Nfeat runs only on keyframes k and k + n; block motion vectors are used to warp (W) the keyframe features to the intermediate frames, where they are fused (F), while the task network Ntask is executed on every frame.]
Given that feature warping and task computation is much cheaper than feature extraction, a key parameter we aim to optimize is the interval between designated keyframes.
Here we make two key contributions. First, noting the high level of data redundancy in video, we successfully utilize an artifact of compressed video, block motion vectors (BMV), to cheaply propagate features from frame to frame. Unlike other motion estimation techniques, which require specialized convolutional networks, block motion vectors are freely available in modern video formats, making for a simple, fast design. Second, we propose a novel feature estimation technique that enables the features for a large fraction of video frames to be inferred accurately and efficiently (see Fig. 1). In particular, when computing the segmentation for a keyframe, we also precompute the features for the next designated keyframe. Features for all subsequent intermediate frames are then computed as a fusion of features warped forward from the last visited keyframe, and features warped backward from the incoming keyframe. This procedure implements an interpolation of the features of the two closest keyframes. We then combine the two ideas, using block motion vectors to perform the feature warping in feature interpolation. The result is a scheme we call interpolation-BMV.
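The fusion operator used in interpolation-BMV is not specified in this excerpt; purely as an illustration of the idea, the sketch below fuses the two warped feature estimates with a temporal-distance-weighted average. The weighting rule is an assumption made for illustration, not necessarily the one used in the paper.

```python
import numpy as np

def interpolate_features(feat_prev_warped, feat_next_warped, t, n):
    """Illustrative fusion for interpolation-BMV at intermediate frame k + t,
    with enclosing keyframes k and k + n.

    feat_prev_warped: features of keyframe k warped forward to frame k + t
    feat_next_warped: features of keyframe k + n warped backward to frame k + t
    """
    alpha = 1.0 - t / n   # give more weight to the temporally closer keyframe
    return alpha * feat_prev_warped + (1.0 - alpha) * feat_next_warped
```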
We evaluate our framework on the CamVid and Cityscapes datasets. Our baseline consists of running a competitive segmentation network, DeepLab (Chen et al. (2017)), on every frame, a setup that achieves published accuracy (Dai et al. (2017)), and throughput of 3.6 frames per second (fps) on CamVid and 1.3 fps on Cityscapes. Our improvements come in two phases. First, our use of motion vectors for feature propagation allow us to cut inference time on intermediate frames by 53%, compared to approaches based on optical-flow, such as Zhu et al. (2017). Second, our bi-directional feature warping and fusion scheme achieves substantial accuracy improvements, especially at high keyframe intervals. Together, the two techniques allow us to operate at over twice the average inference speed as the fastest prior work, at any target level of accuracy. For example, if we are willing to tolerate no worse than 65 mIoU on our CamVid video stream, we are able to operate at a throughput of 20.1 fps, compared to the 8.0 fps achieved by the forward flow-based propagation from Zhu et al. (2017). Overall, even when operating in high accuracy regimes (e.g. within 3% mIoU of the baseline), we are able to accelerate segmentation on video by a factor of 2-6×.
2 RELATED WORK
2.1 IMAGE SEMANTIC SEGMENTATION
Semantic segmentation is a classical image recognition task in computer vision, originally studied in the context of statistical inference. The approach of choice was to propagate evidence about pixel class assignments through a probabilistic graphical model (Felzenszwalb & Huttenlocher (2004); Shotton et al. (2009)), a technique that scaled poorly to large images with numerous object classes (Krähenbühl & Koltun (2011)). In 2014, Long et al. (2015) proposed the use of fully convolutional neural networks (FCNs) to segment images, demonstrating significant accuracy gains on several key datasets. Subsequent work embraced the FCN architecture, proposing augmentations such as
dilated (atrous) convolutions (Yu & Koltun (2016)), post-processing CRFs (Chen et al. (2016)), and pyramid spatial pooling (Zhao et al. (2017)) to further improve accuracy on large, complex images.
2.2 EFFICIENT VIDEO SEMANTIC SEGMENTATION
The recent rise of applications such as autonomous driving, industrial robotics, and automated video surveillance, where agents must perceive and understand the visual world as it evolves, has triggered substantial interest in the problem of efficient video semantic segmentation. Shelhamer et al. (2016) and Zhu et al. (2017) proposed basic feature reuse and optical flow-based feature warping, respectively, to reduce the inference cost of running expensive image segmentation models on video. Recent work explores adaptive feature propagation, partial feature updating, and adaptive keyframe selection as techniques to further optimize the scheduling and execution of optical-flow based warping (Zhu et al. (2018); Li et al. (2018); Xu et al. (2018)). In general, these techniques fall short in two respects: (1) optical flow computation remains a computational bottleneck, especially as other network components become cheaper, and (2) forward feature propagation fails to account for other forms of temporal change, besides spatial displacement, such as new scene content (e.g. new objects), perspective changes (e.g. camera pans), and observer movement (e.g. in driving footage). As a result, full frame features must still be recomputed frequently to maintain accuracy, especially in video footage with complex dynamics, fundamentally limiting the attainable speedup.
2.3 MOTION AND COMPRESSED VIDEO
Wu et al. (2018) train a network directly on compressed video to improve both accuracy and performance on video action recognition. Zhang et al. (2016) replace the optical flow network in the classical two-stream architecture (Simonyan & Zisserman (2014)) with a “motion vector CNN”, but encounter accuracy challenges, which they address with various transfer learning schemes. Unlike these works, our main focus is not efficient training, nor reducing the physical size of input data to strengthen the underlying signal for video-level tasks, such as action recognition. We instead focus on a class of dense prediction tasks, notably semantic segmentation, that involve high-dimensional output (e.g. a class prediction for every pixel in an image) generated on the original uncompressed frames of a video. This means that we must still process each frame in isolation. To the best of our knowledge, we are the first to propose the use of compressed video artifacts to warp deep neural representations, with the goal of drastically improved inference throughput on realistic video.
3 SYSTEM OVERVIEW
3.1 NETWORK ARCHITECTURE
We follow the common practice of adapting a competitive image classification model (e.g. ResNet-101) into a fully convolutional network trained on the semantic segmentation task (Long et al. (2015); Yu et al. (2017); Chen et al. (2017)). We identify two logical components in our final model: a feature network, which takes as input an image i ∈ R^(1×3×h×w) and outputs a representation f_i ∈ R^(1×A×(h/16)×(w/16)), and a task network, which, given the representation, computes class predictions for each pixel in the image, p_i ∈ R^(1×C×h×w). The task network Ntask is built by concatenating three blocks: (1) a feature projection block, which reduces the feature channel dimensionality to A/2, (2) a scoring block, which predicts scores for each of the C segmentation classes, and (3) an upsampling block, which bilinearly upsamples the score maps to the resolution of the input image.
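To make this split concrete, the following is a minimal PyTorch-style sketch of the two components. The module names and channel sizes are our own illustration, not the authors' released code; the real feature network is a (dilated) ResNet-101 rather than the toy stride-16 stack used here.

import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureNet(nn.Module):
    # Stand-in for Nfeat: four stride-2 stages give total stride 16 and A output channels.
    def __init__(self, A=64):
        super().__init__()
        layers, c_in = [], 3
        for c_out in (A, A, A, A):
            layers += [nn.Conv2d(c_in, c_out, 3, stride=2, padding=1), nn.ReLU()]
            c_in = c_out
        self.body = nn.Sequential(*layers)

    def forward(self, x):
        return self.body(x)                      # (1, A, h/16, w/16)

class TaskNet(nn.Module):
    # Ntask: (1) projection to A/2 channels, (2) per-class scoring, (3) bilinear upsampling.
    def __init__(self, A=64, num_classes=12):
        super().__init__()
        self.project = nn.Conv2d(A, A // 2, kernel_size=1)
        self.score = nn.Conv2d(A // 2, num_classes, kernel_size=1)

    def forward(self, f, out_size):
        s = self.score(F.relu(self.project(f)))
        return F.interpolate(s, size=out_size, mode="bilinear", align_corners=False)

i = torch.randn(1, 3, 224, 224)                  # input image
f_i = FeatureNet()(i)                            # feature representation
p_i = TaskNet()(f_i, i.shape[-2:])               # per-pixel class scores (1, C, h, w)

Only the feature network is restricted to keyframes later in the paper; the task network runs on every frame.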
3.2 BLOCK MOTION VECTORS
MPEG-compressed video consists of two logical components: reference frames, called I-frames, and delta frames, called P-frames. Reference frames are still RGB frames from the video, usually represented as spatially-compressed JPEG images. Delta frames, which introduce temporal compression to video, consist of two subcomponents: block motion vectors and residuals.
Block motion vectors, the artifact of interest in our current work, define a correspondence between pixels in the current frame and pixels in the previous frame. They are generated using block motion compensation, a standard procedure in video compression algorithms (Richardson (2008)):
1. Divide the current frame into a non-overlapping grid of 16x16 pixel blocks.
2. For each block in the current frame, determine the “best matching” block in the previous frame. A common matching metric is to minimize mean squared error between the blocks.
3. For each block in the current frame, represent the pixel offset to the best matching block in the previous frame as an (x, y) coordinate pair, or motion vector.
The resulting grid of (x, y) offsets forms the block motion vector map for the current frame. For a 16M × 16N frame, this map has dimensions M × N. The residuals then consist of the pixel-level difference between the current frame and the previous frame transformed by the motion vectors.
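As a concrete (and deliberately naive) illustration of the search described above, the NumPy sketch below performs exhaustive block matching with an MSE criterion over a small search window. Real encoders use much faster search heuristics and sub-pixel refinement; the function name and window size are our own choices.

import numpy as np

def block_motion_vectors(prev, curr, block=16, search=8):
    # Returns an (M, N, 2) array of (dy, dx) offsets mapping each block of `curr`
    # to its best-matching block in `prev` (both grayscale float arrays).
    H, W = curr.shape
    M, N = H // block, W // block
    mv = np.zeros((M, N, 2), dtype=np.int32)
    for by in range(M):
        for bx in range(N):
            y0, x0 = by * block, bx * block
            cur_blk = curr[y0:y0 + block, x0:x0 + block]
            best, best_off = np.inf, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y1, x1 = y0 + dy, x0 + dx
                    if y1 < 0 or x1 < 0 or y1 + block > H or x1 + block > W:
                        continue
                    ref_blk = prev[y1:y1 + block, x1:x1 + block]
                    err = np.mean((cur_blk - ref_blk) ** 2)   # MSE matching metric
                    if err < best:
                        best, best_off = err, (dy, dx)
            mv[by, bx] = best_off
    return mv

prev = np.random.rand(64, 64).astype(np.float32)
curr = np.roll(prev, shift=(0, 3), axis=(0, 1))   # simple horizontal shift
print(block_motion_vectors(prev, curr).shape)     # (4, 4, 2)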
3.3 FEATURE PROPAGATION
Many cameras compress video by default as a means for efficient storage and transmission. The availability of a free form of motion estimation at inference time, the motion vector maps in MPEG-compressed video, suggests the following scheme for fast video segmentation (see Algorithm 1).
Algorithm 1 Feature propagation with block motion vectors (prop-BMV)
1: input: video frames {Ii}, motion vectors mv, keyframe interval n
2: for frame Ii in {Ii} do
3:   if i mod n = 0 then                ▷ keyframe
4:     fi ← Nfeat(Ii)                   ▷ keyframe features
5:     Si ← Ntask(fi)
6:   else                               ▷ intermediate frame
7:     fi ← WARP(fc, −mv[i])            ▷ warp cached features
8:     Si ← Ntask(fi)
9:   end if
10:  fc ← fi                            ▷ cache features
11: end for
12: output: frame segmentations {Si}
Choose a keyframe interval n. On keyframes (every nth frame), execute the feature network Nfeat to obtain a feature map. Cache these computed features, fc, and then execute the task network Ntask to obtain the keyframe segmentation. On intermediate frames, extract the motion vectors mv[i] corresponding to the current frame index. Warp the cached features fc one frame forward via bilinear interpolation with −mv[i]. (To warp forward, we apply the negation of the vector map.) Here we employ the differentiable, parameter-free spatial warping operator proposed by Jaderberg et al. (2015). Finally, execute Ntask on the warped features to obtain the current segmentation.
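The warping step itself can be written with the standard differentiable sampler. The PyTorch sketch below (helper name ours) assumes the coarse M × N motion-vector map has already been resized to the feature resolution and scaled to feature-grid pixels, which is a simplification of the full pipeline.

import torch
import torch.nn.functional as F

def warp(feat, flow):
    # Bilinearly sample `feat` (1, C, H, W) at locations displaced by `flow` (1, 2, H, W),
    # where flow[:, 0] is the x-offset and flow[:, 1] the y-offset in pixels.
    _, _, H, W = feat.shape
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    grid_x = xs.float() + flow[:, 0]
    grid_y = ys.float() + flow[:, 1]
    grid_x = 2.0 * grid_x / (W - 1) - 1.0          # normalize to [-1, 1] for grid_sample
    grid_y = 2.0 * grid_y / (H - 1) - 1.0
    grid = torch.stack((grid_x, grid_y), dim=-1)   # (1, H, W, 2)
    return F.grid_sample(feat, grid, mode="bilinear", align_corners=True)

f_c = torch.randn(1, 8, 45, 60)                    # cached keyframe features
mv_i = torch.zeros(1, 2, 45, 60)                   # motion map at feature resolution
f_i = warp(f_c, -mv_i)                             # forward warp: negate the vector map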
3.3.1 INFERENCE RUNTIME ANALYSIS
Feature propagation is effective because it relegates feature extraction, the most expensive network component, to select keyframes. Of the three remaining operations performed on intermediate frames – motion estimation, feature warping, and task execution – motion estimation with optical flow is the most expensive (see Fig. 2). By using block motion, we eliminate this remaining bottleneck, accelerating inference times on intermediate frames for a DeepLab segmentation network (Chen et al. (2017)) from 116 ms per frame (F + W + Ntask) to 54 ms per frame (W + Ntask). For keyframe interval n, this translates to a speedup of 53% on (n − 1)/n of the video frames.
Note that for a given keyframe interval n, as we reduce inference time on intermediate frames to zero, we approach a maximum attainable speedup factor of n over a frame-by-frame baseline that runs the full model on every frame. Exceeding this bound, without compromising on accuracy, requires an entirely new approach to feature estimation, the subject of the next section.
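The accounting behind these bounds is easy to reproduce. The sketch below uses a keyframe cost of roughly 278 ms (implied by the 3.6 fps single-frame baseline) and the intermediate-frame costs quoted above (54 ms with block motion vectors, 116 ms with optical flow); the exact figures in Tables 4 and 5 differ somewhat due to implementation overheads, so treat these as illustrative.

def avg_fps(keyframe_ms, inter_ms, n):
    # One full-cost keyframe followed by (n - 1) cheap intermediate frames.
    return 1000.0 * n / (keyframe_ms + (n - 1) * inter_ms)

for n in (1, 2, 3, 5, 10):
    bmv = avg_fps(278, 54, n)      # warping with block motion vectors
    flow = avg_fps(278, 116, n)    # warping with optical flow
    print(f"n={n}: BMV-based ~{bmv:.1f} fps, flow-based ~{flow:.1f} fps")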
Incidentally, we also benchmarked the time required to extract block motion vectors from raw video (i.e. H.264 compression time), and found that ffmpeg takes 2.78 seconds to compress 1,000 Cityscapes video frames, or 2.78 ms per frame. In contrast, optical flow computation on a frame pair takes 62 ms (Fig. 2). We include this comparison for completeness: since compression is a default behavior on modern cameras, block motion extraction is not a true component of inference time.
3.4 FEATURE INTERPOLATION
Given an input video stream, our goal is to compute the segmentation of every frame as efficiently as possible, while preserving accuracy. In a batch setting, we have access to the entire video, and desire the segmentations for all the frames, as input to another model (e.g. an autonomous control system). In a streaming setting, we have access to frames as they come in, but may be willing to tolerate a small delay of keyframe interval n frames (n/30 seconds at 30 fps) before we output a segmentation, if that means we can match the throughput of the video stream and maintain high accuracy.
We make two observations. First, all intermediate frames in a video by definition lie between two designated keyframes, which represent bounds on the current scene. New objects that are missed in forward feature propagation schemes are more likely to be captured if both past and incoming keyframes are used. Second, feature fusion techniques are effective at preserving strong signals in any one input feature map, as seen in Feichtenhofer et al. (2016). This suggests the viability of estimating the features of intermediate frames as the fusion of the features of enclosing keyframes.
Expanding on this idea, we propose the following algorithm (see Fig. 1). On any given keyframe, precompute the features for the next keyframe. On intermediate frames, warp the previous keyframe’s features, Nfeat(Ik), forward to the current frame Ii using incremental forward motion estimates, −mv[k : i]. Warp the next keyframe’s features, Nfeat(Ik+n), backward to the current frame using incremental backward motion estimates, mv[k+n : i]. Fuse the two feature maps using either a weighted average or learned fusion operator, F . Then execute the task network Ntask on the fused features. This forms Algorithm 2. A formal statement is included in Appendix: Sec. 6.1.
To eliminate redundant computation, on keyframes, we precompute forward and backward warped feature maps f^f, f^b corresponding to each subsequent intermediate frame, {Ik+1, ..., Ik+n−1}. For keyframe interval n, this amounts to n − 1 forward and n − 1 backward warped feature maps.
3.4.1 FEATURE FUSION
We consider several possible fusion operators: max fusion, average fusion, and convolutional fusion (Feichtenhofer et al. (2016)). We implement max and average fusion by aligning the input feature maps f^f, f^b ∈ R^(1×C×h×w) along the channel dimension, and computing a max or average across each pixel in corresponding channels, a parameter-free operation. We implement conv fusion by stacking the input feature maps along the channel dimension, [f^f, f^b]_C = f^s ∈ R^(1×2C×h×w), and applying a bank of learned, 1x1 conv filters to reduce the channel dimensionality by a factor of two.
Before applying the fusion operator, we weight the two input feature maps f^f, f^b by scalars α and 1 − α, respectively, that correspond to feature relevance, a scheme that works very effectively in practice. For keyframe interval n, and a frame at offsets p and n − p from the previous and next keyframes, respectively, we set α = (n − p)/n and 1 − α = p/n, thereby penalizing the input features warped farther from their keyframe. Thus, when p is small relative to n, we weight the previous keyframe’s features more heavily, and vice versa. In summary, the features for intermediate frame Ii are set to: f_i = F((n − p)/n · f^f, (p/n) · f^b), where p = i mod n. This scheme is reflected in Alg. 2.
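Putting the distance-based weighting and the fusion operator together, a minimal PyTorch sketch (function name ours; the channel dimension is 2048 in the actual model and is reduced here to keep the example light):

import torch
import torch.nn as nn

def fuse_features(ff, fb, p, n, mode="avg", conv=None):
    # Weight by distance to the keyframes, then apply the chosen fusion operator.
    a = (n - p) / n
    ff, fb = a * ff, (1.0 - a) * fb
    if mode == "max":
        return torch.maximum(ff, fb)
    if mode == "avg":
        return 0.5 * (ff + fb)
    return conv(torch.cat([ff, fb], dim=1))      # learned 1x1 conv fusion: 2C -> C

C = 64                                           # 2048 in the paper
conv_fuse = nn.Conv2d(2 * C, C, kernel_size=1)
ff = torch.randn(1, C, 45, 60)                   # forward-warped keyframe features
fb = torch.randn(1, C, 45, 60)                   # backward-warped keyframe features
f_i = fuse_features(ff, fb, p=1, n=3, mode="conv", conv=conv_fuse)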
4 EXPERIMENTS
4.0.1 DATASETS
We train and evaluate our system on CamVid (Brostow et al. (2009)) and Cityscapes (Cordts et al. (2016)), two popular, large-scale datasets for complex urban scene understanding. CamVid consists
of over 10 minutes of footage captured at 30 fps and 960 × 720 pixels. Cityscapes consists of 30- frame video snippets shot at 17 fps and 2048 × 1024 pixels. On CamVid, we adopt the standard train-test split of Sturgess et al. (2009). On Cityscapes, we train on the train split and evaluate on the val split, following the example of previous work (Yu et al. (2017); Chen et al. (2017); Zhu et al. (2017)). We use the standard mean intersection-over-union (mIoU) metric to evaluate segmentation accuracy, and measure throughput in frames per second (fps) to evaluate inference performance.
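For completeness, mIoU can be computed per class from intersection and union counts, as in the minimal NumPy version below; standard evaluation accumulates these counts over the entire split before averaging, rather than per image as in this toy example.

import numpy as np

def mean_iou(pred, gt, num_classes):
    # Mean intersection-over-union of two integer label maps.
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union > 0:                      # ignore classes absent from both maps
            ious.append(inter / union)
    return float(np.mean(ious))

pred = np.random.randint(0, 12, size=(720, 960))
gt = np.random.randint(0, 12, size=(720, 960))
print(mean_iou(pred, gt, num_classes=12))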
4.0.2 ARCHITECTURE
For our segmentation network, we adopt a variant of the DeepLab architecture called Deformable DeepLab (Dai et al. (2017)), which employs deformable convolutions in the last ResNet block (conv5) to achieve significantly higher accuracy at comparable inference cost to a standard DeepLab model. DeepLab (Chen et al. (2017)) is widely considered a state-of-the-art architecture for semantic segmentation, and a DeepLab implementation currently ranks first on the PASCAL VOC object segmentation challenge (Aytar (2018)). Our DeepLab model uses ResNet-101 as its feature network, which produces intermediate representations f_i ∈ R^(1×2048×(h/16)×(w/16)). The DeepLab task network outputs predictions p_i ∈ R^(1×C×h×w), where C is 12 or 20 for CamVid and Cityscapes respectively.
4.0.3 TRAINING
To train our single-frame DeepLab model, we initialize with an ImageNet-trained ResNet-101 model, and learn task-specific weights on the CamVid and Cityscapes train sets. To train our video segmentation system, we sample at random a labeled image from the train set, and select a preceding and succeeding frame to serve as the previous and next keyframe, respectively. Since motion estimation with block motion vectors and feature warping are both parameter-free, feature propagation introduces no additional weights. Training feature interpolation with convolutional fusion, however, involves learning weights for the 1x1 conv fusion layer, which is applied to stacked feature maps, each with channel dimension 2048. For both schemes, we train with SGD on an AWS EC2 instance with 4 Tesla K80 GPUs for 50 epochs, starting with a learning rate of 10^−3.
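Only the 1x1 conv fusion layer adds trainable parameters beyond the single-frame model. The self-contained sketch below (random stand-in tensors, a toy scoring head in place of the full task network, and labels at feature resolution rather than image resolution) shows what the corresponding SGD step looks like at the stated learning rate.

import torch
import torch.nn as nn

C, num_classes = 64, 12                            # C is 2048 in the actual model
fusion = nn.Conv2d(2 * C, C, kernel_size=1)        # the only new weights introduced
head = nn.Conv2d(C, num_classes, kernel_size=1)    # stand-in for the task network
opt = torch.optim.SGD(list(fusion.parameters()) + list(head.parameters()), lr=1e-3)

warped = torch.randn(1, 2 * C, 45, 60)             # stacked forward/backward warped features
labels = torch.randint(0, num_classes, (1, 45, 60))
loss = nn.functional.cross_entropy(head(fusion(warped)), labels)
loss.backward()
opt.step()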
4.1 RESULTS
4.1.1 BASELINE
For our accuracy and performance baseline, we evaluate our full DeepLab model on every labeled frame in the CamVid and Cityscapes test splits. Our baseline achieves an accuracy of 68.6 mIoU on CamVid, at a throughput of 3.7 fps. On Cityscapes, the baseline model achieves 75.2 mIoU, matching published results for the DeepLab architecture we used (Dai et al. (2017)), at 1.3 fps.
4.1.2 PROPAGATION AND INTERPOLATION
In this section, we evaluate our two main contributions: 1) feature propagation with block motion vectors (prop-BMV), and 2) feature interpolation, our new feature estimation scheme, implemented with block motion vectors (inter-BMV). We compare to the closest available existing work on the problem, a feature propagation scheme based on optical flow (Zhu et al. (2017)) (prop-flow). We evaluate by comparing accuracy-runtime curves for the three approaches on CamVid and Cityscapes (see Fig. 3). These curves are generated by plotting accuracy against throughput at each keyframe interval in Appendix: Tables 4 and 5, which contain comprehensive results.
First, we note that block motion-based feature propagation (prop-BMV) outperforms optical flow-based propagation (prop-flow) at all but the lowest throughputs. While motion vectors are slightly less accurate than optical flow in general, by cutting inference times by 53% on intermediate frames (Sec. 3.3.1), prop-BMV enables operation at much lower keyframe intervals than optical flow to achieve the same inference rates. This results in a much more favorable accuracy-throughput curve.
Second, we find that our feature interpolation scheme (inter-BMV) strictly outperforms both feature propagation schemes. At every keyframe interval, inter-BMV is more accurate than prop-flow and prop-BMV; moreover, it operates at similar throughput to prop-BMV. This translates to a consistent advantage over prop-BMV, and an even larger advantage over prop-flow (see Fig. 3). On CamVid, inter-BMV actually registers a small accuracy gain over the baseline at keyframe intervals 2 and 3, utilizing multi-frame context to improve on the accuracy of the single-frame DeepLab model.
Metrics. We also distinguish between two metrics: the standard average accuracy, results for which are plotted in Fig. 3, and minimum accuracy, which is a measure of the lowest frame-level accuracy an approach entails, i.e. accuracy on frames farthest away from keyframes. Minimum accuracy is the appropriate metric to consider when we wish to segment a video as efficiently as possible, while ensuring that all frame segmentations meet some threshold level of accuracy. As an example, at an accuracy target of 65 mIoU, feature interpolation enables operation at 20.1 fps on CamVid (see Table 4). This is 2.5× faster than achievable inference speeds with feature propagation alone, using either optical flow (8.0 fps) or block motion vectors (9.3 fps). In general, feature interpolation achieves over twice the throughput of Zhu et al. (2017) on CamVid and Cityscapes, at any target accuracy. Minimum accuracy plots (Fig. 5) are included in the Appendix.
Baseline. We also compare to our frame-by-frame DeepLab baseline, which offers low throughput but high average accuracy. As Figures 3a and 3b indicate, even at average accuracies above 68 mIoU on CamVid and 70 mIoU on Cityscapes, figures competitive with contemporary single-frame models (see Table 1 and Table 2), feature interpolation offers speedups of 4.5× and 4.2×, respectively, over the baseline. Notably, at key interval 3, interpolation obtains a 2.5× speedup over the baseline on CamVid, at slightly higher than baseline accuracy (see Fig. 3a and Table 4).
Delay. Recall that feature interpolation introduces a delay of keyframe interval n frames, which corresponds to n/30 seconds at 30 fps. For example, at n = 3, inter-BMV introduces a delay of 3/30 seconds, or 100 ms. To put this in context, prop-flow (Zhu et al. (2017)) takes 125 ms to segment a frame at key interval 3, and inter-BMV takes 110 ms. Thus, by lagging by less than 1 segmentation, we are able to segment 2.5× more frames per hour than the frame-by-frame model (9.1 fps vs. 3.6 fps). This is a suitable tradeoff in almost all batch settings (e.g. training data generation, post-hoc video analysis), and in many interactive applications (e.g. video anomaly detection, film editing).
Fig. 4 depicts a qualitative comparison of interpolation and prop-flow (Zhu et al. (2017)).
4.1.3 FEATURE FUSION
In this second set of experiments, we evaluate the accuracy gain achieved by feature fusion, in order to isolate the contribution of fusion to the success of our feature interpolation scheme. As Table 3 demonstrates, utilizing any fusion strategy, whether max, average, or conv fusion, results in higher accuracy than using either input feature map alone. This holds true even when one feature map is significantly stronger than the other (rows 2-4), and for both short and long distances to the keyframes. This observed additive effect suggests that feature fusion is highly effective at capturing signal that appears in only one input feature map, and in merging spatial information across time.
5 CONCLUSION
We develop interpolation-BMV, a novel segmentation scheme that combines the use of block motion vectors for feature warping, bi-directional propagation to capture scene context, and feature fusion to produce accurate frame segmentations at high throughput. We evaluate on the CamVid and Cityscapes datasets, and demonstrate significant speedups across a range of accuracy levels, compared to both a strong single-frame baseline and prior work. Our methods are general, and represent an important advance in the effort to operate image models efficiently on video.
6 APPENDIX
6.1 SYSTEM DESIGN
We provide the formal statement of feature interpolation, Algorithm 2.
Algorithm 2 Feature interpolation with block motion vectors (inter-BMV)
1: input: video frames {Ii}, motion vectors mv, keyframe interval n
2: Wf, Wb ← []                                   ▷ forward, backward warped features
3: for frame Ii in {Ii} do
4:   if i mod n == 0 then                        ▷ keyframe
5:     fi ← Nfeat(Ii)                            ▷ curr keyframe features
6:     Si ← Ntask(fi)
7:     fi+n ← Nfeat(Ii+n)                        ▷ next keyframe features
8:     Wf ← PROPAGATE(fi, n − 1, −mv[i+1 : i+n])
9:     Wb ← PROPAGATE(fi+n, n − 1, mv[i+n : i+1])
10:  else                                        ▷ intermediate frame
11:    p ← i mod n                               ▷ offset from prev keyframe
12:    fi ← F((n−p)/n · Wf[p], (p/n) · Wb[n−p])  ▷ fuse propagated features
13:    Si ← Ntask(fi)
14:  end if
15: end for
16: output: frame segmentations {Si}
17: function PROPAGATE(features f, steps n, warp array g)   ▷ warp f for n steps with g
18:   O ← [f]
19:   for i = 1 to n do
20:     append(O, WARP(O[i−1], g[i]))                        ▷ warp features one step
21:   end for
22:   return O
23: end function
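Read as code, Algorithm 2 amounts to the following Python sketch. Nfeat, Ntask, the warp, and the fusion operator are passed in as callables and replaced by trivial stand-ins so the example runs end to end; mv[i] is assumed to be the motion map for frame i, already resized to the feature resolution, and frames past the last keyframe are not handled.

import torch

def interpolate_bmv(frames, mv, n, Nfeat, Ntask, warp, fuse):
    def propagate(f, steps, vectors):
        out = [f]
        for g in vectors[:steps]:
            out.append(warp(out[-1], g))                     # warp features one step
        return out                                           # out[p] = f warped p steps

    segs, Wf, Wb = [], None, None
    for i, frame in enumerate(frames):
        if i % n == 0:                                       # keyframe
            fi = Nfeat(frame)
            if i + n < len(frames):                          # precompute for next block
                f_next = Nfeat(frames[i + n])
                Wf = propagate(fi, n - 1, [-mv[j] for j in range(i + 1, i + n)])
                Wb = propagate(f_next, n - 1, [mv[j] for j in range(i + n, i, -1)])
        else:                                                # intermediate frame
            p = i % n
            fi = fuse((n - p) / n * Wf[p], p / n * Wb[n - p])  # fuse propagated features
        segs.append(Ntask(fi))
    return segs

# Tiny stand-ins so the sketch runs (replace with real networks, warping, and fusion).
Nfeat = lambda x: x.mean(dim=1, keepdim=True)
Ntask = lambda f: (f > 0).float()
warp = lambda f, g: f                                        # identity warp placeholder
fuse = lambda a, b: a + b                                    # weighted-sum fusion
frames = [torch.randn(1, 3, 32, 32) for _ in range(7)]
mv = [torch.zeros(1, 2, 32, 32) for _ in range(7)]
print(len(interpolate_bmv(frames, mv, 3, Nfeat, Ntask, warp, fuse)))   # 7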
6.2 RESULTS
This appendix section includes full tabular results for the CamVid and Cityscapes datasets (Table 4 and Table 5) and minimum accuracy vs. throughput plots for CamVid and Cityscapes (Figure 5). | 1. How does the proposed segmentation method combine block motion vectors and feature fusion?
2. What are the concerns regarding the use of block motion vectors in the proposed method?
3. How does the reviewer assess the effectiveness of the proposed method compared to alternative approaches?
4. What additional experiments or justifications would help strengthen the paper's claims?
In this paper, the authors propose a novel segmentation scheme that combines block motion vectors for feature warping, bi-directional propagation, and feature fusion. Experiments demonstrate its effectiveness compared with alternative methods. However, I still have several concerns:
1. As the block motion vectors are generally a rough estimate, they may damage the performance of the tasks. The authors should further clarify how the imperfect estimation influences the performance, e.g., blocking artifacts.
2. The features are abstract representations of an image, while the motion vectors are obtained via pixel-level comparison. The authors should further justify that pixel-level motion estimates can be applied directly to the latent features.
3. The authors are expected to conduct more comprehensive experiments. Motion vectors are consistent in the current datasets. The authors are expected to demonstrate performance when the motion is chaotic.
1. What is the focus of the paper regarding semantic segmentation?
2. What are the strengths of the proposed approach, particularly its efficiency and ablation study?
3. What are the weaknesses of the paper, such as limited novelty and insufficient comparisons with other works?
4. Do you have any questions about the technical details or explanations provided in the paper?
This paper presents a feature interpolation strategy for fast semantic segmentation in videos. They first compute features of keyframes, then interpolate intermediate frames based on block-motion vectors (BMV), and finally fuse the interpolated features as input to the prediction network. The experiments show that the model outperforms one recent, closely related work wrt inference time while preserving accuracy.
Positive:
1. Efficient inference. The strategy cuts inference time on intermediate frames by 53%, while achieving better accuracy and IoU compared to the one recent, closely related work.
2. The ablation study seems sufficient and well-designed. The paper presents two feature propagation strategies and three feature fusion methods. The experiments compare these different settings, and show that interpolation-BMV is indeed a better feature propagation scheme.
Negative:
1. Limited novelty. The algorithm is close to the optical-flow based models Shelhamer et al. (2016) and Zhu et al. (2017). The main difference is that the optical-flow is replaced with BMV, which is a byproduct of modern cameras.
2. Insufficient experimental comparison with other baselines. In the experiments, the paper compares the proposed model with only one baseline, Prop-flow, which is not a sufficient comparison to show that the paper really outperforms the state-of-the-art model. For example, the authors should also compare with “Clockwork convnets for video semantic segmentation.”
3. Some technical details are not clear. For example, in section 3.1, the paper mentions that the task network is built by concatenating three components but never clarifies them. Also, in algorithm 2, line 13 shows that F is a function with two entries, but line 8 indicates that F is a feature. |
ICLR | Title
Inter-BMV: Interpolation with Block Motion Vectors for Fast Semantic Segmentation on Video
Abstract
Models optimized for accuracy on single images are often prohibitively slow to run on each frame in a video. Recent work exploits the use of optical flow to warp image features forward from select keyframes, as a means to conserve computation on video. This approach, however, achieves only limited speedup, even when optimized, due to the accuracy degradation introduced by repeated forward warping, and the inference cost of optical flow estimation. To address these problems, we propose a new scheme that propagates features using the block motion vectors (BMV) present in compressed video (e.g. H.264 codecs), instead of optical flow, and bi-directionally warps and fuses features from enclosing keyframes to capture scene context on each video frame. Our technique, interpolation-BMV, enables us to accurately estimate the features of intermediate frames, while keeping inference costs low. We evaluate our system on the CamVid and Cityscapes datasets, comparing to both a strong single-frame baseline and related work. We find that we are able to substantially accelerate segmentation on video, achieving near real-time frame rates (20+ frames per second) on large images (e.g. 960×720 pixels), while maintaining competitive accuracy. This represents an improvement of almost 6× over the single-frame baseline and 2.5× over the fastest prior work.
1 INTRODUCTION
Semantic segmentation, the task of assigning each pixel in an image to a semantic object class, is a problem of long-standing interest in computer vision. Like models for other image recognition tasks (e.g. classification, detection, instance segmentation), semantic segmentation networks have grown drastically in both layer depth and parameter count in recent years, in the race to segment more complex images, from larger, more realistic datasets, at higher accuracy. As a result, state-of-the-art segmentation networks today require between 0.5 and 3.0 seconds to segment a single, high-resolution image (e.g. 2048 × 1024 pixels) at competitive accuracy (Zhu et al. (2017); Gadde et al. (2017)). Meanwhile, a new target data format for segmentation has emerged: video. The motivating use cases include both batch applications, where video is segmented in bulk to generate training data for other models (e.g. autonomous control systems), and streaming settings, where high-throughput video segmentation enables interactive analysis of live footage (e.g. at surveillance sites). Video in these contexts consists of long image sequences, shot at high frame rates (e.g. 30 fps) in complex environments (e.g. urban cityscapes) on modern, high-definition cameras. Segmenting individual frames at high accuracy still calls for the use of competitive image models, but their inference cost precludes their naïve deployment on every frame in a raw multi-hour video stream.
A defining characteristic of realistic video is its high level of temporal continuity. Consecutive frames demonstrate significant spatial similarity, which suggests the potential to reuse computation across frames. Building on prior work, we exploit two observations: 1) higher-level features evolve more slowly than raw pixel content in video, and 2) feature computation tends to be much more expensive than task-specific computation across a range of vision tasks (e.g. detection, segmentation) (Shelhamer et al. (2016); Zhu et al. (2017)). Accordingly, we divide our semantic segmentation model into a deep feature network and a cheap, shallow task network (Zhu et al. (2017)). We compute features only on designated keyframes, and propagate them to intermediate frames, by warping the feature maps with frame-to-frame motion estimates. The task network is executed on all frames.
[Figure 1: overview of interpolation-BMV. Keyframes k and k + n are passed through the feature network Nfeat; their features are warped (W) toward the intermediate frames using block motion vectors, fused (F), and the task network Ntask is run on every frame.]
Given that feature warping and task computation are much cheaper than feature extraction, a key parameter we aim to optimize is the interval between designated keyframes.
Here we make two key contributions. First, noting the high level of data redundancy in video, we successfully utilize an artifact of compressed video, block motion vectors (BMV), to cheaply propagate features from frame to frame. Unlike other motion estimation techniques, which require specialized convolutional networks, block motion vectors are freely available in modern video formats, making for a simple, fast design. Second, we propose a novel feature estimation technique that enables the features for a large fraction of video frames to be inferred accurately and efficiently (see Fig. 1). In particular, when computing the segmentation for a keyframe, we also precompute the features for the next designated keyframe. Features for all subsequent intermediate frames are then computed as a fusion of features warped forward from the last visited keyframe, and features warped backward from the incoming keyframe. This procedure implements an interpolation of the features of the two closest keyframes. We then combine the two ideas, using block motion vectors to perform the feature warping in feature interpolation. The result is a scheme we call interpolation-BMV.
We evaluate our framework on the CamVid and Cityscapes datasets. Our baseline consists of running a competitive segmentation network, DeepLab (Chen et al. (2017)), on every frame, a setup that achieves published accuracy (Dai et al. (2017)), and throughput of 3.6 frames per second (fps) on CamVid and 1.3 fps on Cityscapes. Our improvements come in two phases. First, our use of motion vectors for feature propagation allows us to cut inference time on intermediate frames by 53%, compared to approaches based on optical flow, such as Zhu et al. (2017). Second, our bi-directional feature warping and fusion scheme achieves substantial accuracy improvements, especially at high keyframe intervals. Together, the two techniques allow us to operate at over twice the average inference speed of the fastest prior work, at any target level of accuracy. For example, if we are willing to tolerate no worse than 65 mIoU on our CamVid video stream, we are able to operate at a throughput of 20.1 fps, compared to the 8.0 fps achieved by the forward flow-based propagation from Zhu et al. (2017). Overall, even when operating in high accuracy regimes (e.g. within 3% mIoU of the baseline), we are able to accelerate segmentation on video by a factor of 2-6×.
2 RELATED WORK
2.1 IMAGE SEMANTIC SEGMENTATION
Semantic segmentation is a classical image recognition task in computer vision, originally studied in the context of statistical inference. The approach of choice was to propagate evidence about pixel class assignments through a probabilistic graphical model (Felzenszwalb & Huttenlocher (2004); Shotton et al. (2009)), a technique that scaled poorly to large images with numerous object classes (Krähenbühl & Koltun (2011)). In 2014, Long et al. (2015) proposed the use of fully convolutional neural networks (FCNs) to segment images, demonstrating significant accuracy gains on several key datasets. Subsequent work embraced the FCN architecture, proposing augmentations such as
dilated (atrous) convolutions (Yu & Koltun (2016)), post-processing CRFs (Chen et al. (2016)), and pyramid spatial pooling (Zhao et al. (2017)) to further improve accuracy on large, complex images.
2.2 EFFICIENT VIDEO SEMANTIC SEGMENTATION
The recent rise of applications such as autonomous driving, industrial robotics, and automated video surveillance, where agents must perceive and understand the visual world as it evolves, has triggered substantial interest in the problem of efficient video semantic segmentation. Shelhamer et al. (2016) and Zhu et al. (2017) proposed basic feature reuse and optical flow-based feature warping, respectively, to reduce the inference cost of running expensive image segmentation models on video. Recent work explores adaptive feature propagation, partial feature updating, and adaptive keyframe selection as techniques to further optimize the scheduling and execution of optical-flow based warping (Zhu et al. (2018); Li et al. (2018); Xu et al. (2018)). In general, these techniques fall short in two respects: (1) optical flow computation remains a computational bottleneck, especially as other network components become cheaper, and (2) forward feature propagation fails to account for other forms of temporal change, besides spatial displacement, such as new scene content (e.g. new objects), perspective changes (e.g. camera pans), and observer movement (e.g. in driving footage). As a result, full frame features must still be recomputed frequently to maintain accuracy, especially in video footage with complex dynamics, fundamentally limiting the attainable speedup.
2.3 MOTION AND COMPRESSED VIDEO
Wu et al. (2018) train a network directly on compressed video to improve both accuracy and performance on video action recognition. Zhang et al. (2016) replace the optical flow network in the classical two-stream architecture (Simonyan & Zisserman (2014)) with a “motion vector CNN”, but encounter accuracy challenges, which they address with various transfer learning schemes. Unlike these works, our main focus is not efficient training, nor reducing the physical size of input data to strengthen the underlying signal for video-level tasks, such as action recognition. We instead focus on a class of dense prediction tasks, notably semantic segmentation, that involve high-dimensional output (e.g. a class prediction for every pixel in an image) generated on the original uncompressed frames of a video. This means that we must still process each frame in isolation. To the best of our knowledge, we are the first to propose the use of compressed video artifacts to warp deep neural representations, with the goal of drastically improved inference throughput on realistic video.
3 SYSTEM OVERVIEW
3.1 NETWORK ARCHITECTURE
We follow the common practice of adapting a competitive image classification model (e.g. ResNet-101) into a fully convolutional network trained on the semantic segmentation task (Long et al. (2015); Yu et al. (2017); Chen et al. (2017)). We identify two logical components in our final model: a feature network, which takes as input an image i ∈ R^{1×3×h×w} and outputs a representation fi ∈ R^{1×A×(h/16)×(w/16)}, and a task network, which, given the representation, computes class predictions for each pixel in the image, pi ∈ R^{1×C×h×w}. The task network Ntask is built by concatenating three blocks: (1) a feature projection block, which reduces the feature channel dimensionality to A/2, (2) a scoring block, which predicts scores for each of the C segmentation classes, and (3) an upsampling block, which bilinearly upsamples the score maps to the resolution of the input image.
3.2 BLOCK MOTION VECTORS
MPEG-compressed video consists of two logical components: reference frames, called I-frames, and delta frames, called P-frames. Reference frames are still RGB frames from the video, usually represented as spatially-compressed JPEG images. Delta frames, which introduce temporal compression to video, consist of two subcomponents: block motion vectors and residuals.
Block motion vectors, the artifact of interest in our current work, define a correspondence between pixels in the current frame and pixels in the previous frame. They are generated using block motion compensation, a standard procedure in video compression algorithms (Richardson (2008)):
1. Divide the current frame into a non-overlapping grid of 16x16 pixel blocks.
2. For each block in the current frame, determine the “best matching” block in the previous frame. A common matching metric is to minimize mean squared error between the blocks.
3. For each block in the current frame, represent the pixel offset to the best matching block in the previous frame as an (x, y) coordinate pair, or motion vector.
The resulting grid of (x, y) offsets forms the block motion vector map for the current frame. For a 16M × 16N frame, this map has dimensions M ×N . The residuals then consist of the pixel-level difference between the current frame, and the previous frame transformed by the motion vectors.
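As an illustration of the block-matching procedure above, below is a minimal NumPy sketch of exhaustive block motion estimation using a 16-pixel block size and the MSE matching metric; the search radius and function name are our own illustrative choices, not part of any particular codec implementation.

```python
import numpy as np

def block_motion_vectors(prev, curr, block=16, radius=8):
    """Estimate a block motion vector map between two grayscale frames.

    Returns an (M, N, 2) array of (dx, dy) offsets, where the current frame is
    divided into an M x N grid of `block` x `block` pixel blocks and each offset
    points to the best-matching (minimum-MSE) block in `prev`.
    """
    H, W = curr.shape
    M, N = H // block, W // block
    mv = np.zeros((M, N, 2), dtype=np.int32)
    for by in range(M):
        for bx in range(N):
            y0, x0 = by * block, bx * block
            target = curr[y0:y0 + block, x0:x0 + block].astype(np.float32)
            best_err, best = np.inf, (0, 0)
            # Exhaustive search over a (2*radius + 1)^2 window in the previous frame.
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    y1, x1 = y0 + dy, x0 + dx
                    if y1 < 0 or x1 < 0 or y1 + block > H or x1 + block > W:
                        continue
                    cand = prev[y1:y1 + block, x1:x1 + block].astype(np.float32)
                    err = np.mean((target - cand) ** 2)  # MSE matching metric
                    if err < best_err:
                        best_err, best = err, (dx, dy)
            mv[by, bx] = best
    return mv
```

In practice the codec has already performed this search during compression, which is precisely why the motion vectors are free at inference time.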
3.3 FEATURE PROPAGATION
Many cameras compress video by default as a means for efficient storage and transmission. The availability of a free form of motion estimation at inference time, the motion vector maps in MPEGcompressed video, suggests the following scheme for fast video segmentation (see Algorithm 1).
Algorithm 1 Feature propagation with block motion vectors (prop-BMV)
1: input: video frames {Ii}, motion vectors mv, keyframe interval n
2: for frame Ii in {Ii} do
3:     if i mod n = 0 then . keyframe
4:         fi ← Nfeat(Ii) . keyframe features
5:         Si ← Ntask(fi)
6:     else . intermediate frame
7:         fi ← WARP(fc, −mv[i]) . warp cached features
8:         Si ← Ntask(fi)
9:     end if
10:    fc ← fi . cache features
11: end for
12: output: frame segmentations {Si}
Choose a keyframe interval n. On keyframes (every nth frame), execute the feature network Nfeat to obtain a feature map. Cache these computed features, fc, and then execute the task network Ntask to obtain the keyframe segmentation. On intermediate frames, extract the motion vectors mv[i] corresponding to the current frame index. Warp the cached features fc one frame forward via bilinear interpolation with −mv[i]. (To warp forward, we apply the negation of the vector map.) Here we employ the differentiable, parameter-free spatial warping operator proposed by Jaderberg et al. (2015). Finally, execute Ntask on the warped features to obtain the current segmentation.
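For concreteness, a minimal PyTorch sketch of the parameter-free bilinear warp (the WARP step above) is shown below. It assumes the block motion vector map has already been upsampled to feature resolution and rescaled to feature-map pixels; the sign convention (e.g. passing the negated vector map to warp forward, as in Algorithm 1) is left to the caller. The implementation follows the standard grid-sampling formulation of Jaderberg et al. (2015).

```python
import torch
import torch.nn.functional as F

def warp_features(feat, mv):
    """Warp a feature map with a dense motion field via bilinear sampling.

    feat: (1, C, H, W) cached keyframe features.
    mv:   (1, 2, H, W) per-pixel (dx, dy) sampling offsets in feature-map
          pixels, e.g. block motion vectors upsampled to feature resolution
          and divided by the feature stride.
    """
    _, _, H, W = feat.shape
    # Base sampling grid of absolute pixel coordinates.
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    xs = xs.float() + mv[:, 0]  # shift x coordinates by dx
    ys = ys.float() + mv[:, 1]  # shift y coordinates by dy
    # Normalize coordinates to [-1, 1] as required by grid_sample.
    grid = torch.stack([2 * xs / (W - 1) - 1, 2 * ys / (H - 1) - 1], dim=-1)
    return F.grid_sample(feat, grid, mode="bilinear", align_corners=True)
```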
3.3.1 INFERENCE RUNTIME ANALYSIS
Feature propagation is effective because it relegates feature extraction, the most expensive network component, to select keyframes. Of the three remaining operations performed on intermediate frames – motion estimation, feature warping, and task execution – motion estimation with optical flow is the most expensive (see Fig. 2). By using block motion, we eliminate this remaining bottleneck, accelerating inference times on intermediate frames for a DeepLab segmentation network (Chen et al. (2017)) from 116 ms per frame (F + W + Ntask) to 54 ms per frame (W + Ntask). For keyframe interval n, this translates to a speedup of 53% on (n−1)/n of the video frames.
Note that for a given keyframe interval n, as we reduce inference time on intermediate frames to zero, we approach a maximum attainable speedup factor of n over a frame-by-frame baseline that runs the full model on every frame. Exceeding this bound, without compromising on accuracy, requires an entirely new approach to feature estimation, the subject of the next section.
Incidentally, we also benchmarked the time required to extract block motion vectors from raw video (i.e. H.264 compression time), and found that ffmpeg takes 2.78 seconds to compress 1,000 Cityscapes video frames, or 2.78 ms per frame. In contrast, optical flow computation on a frame pair takes 62 ms (Fig. 2). We include this comparison for completeness: since compression is a default behavior on modern cameras, block motion extraction is not a true component of inference time.
3.4 FEATURE INTERPOLATION
Given an input video stream, our goal is to compute the segmentation of every frame as efficiently as possible, while preserving accuracy. In a batch setting, we have access to the entire video, and desire the segmentations for all the frames, as input to another model (e.g. an autonomous control system). In a streaming setting, we have access to frames as they come in, but may be willing to tolerate a small delay of keyframe interval n frames (n/30 seconds at 30 fps) before we output a segmentation, if that means we can match the throughput of the video stream and maintain high accuracy.
We make two observations. First, all intermediate frames in a video by definition lie between two designated keyframes, which represent bounds on the current scene. New objects that are missed in forward feature propagation schemes are more likely to be captured if both past and incoming keyframes are used. Second, feature fusion techniques are effective at preserving strong signals in any one input feature map, as seen in Feichtenhofer et al. (2016). This suggests the viability of estimating the features of intermediate frames as the fusion of the features of enclosing keyframes.
Expanding on this idea, we propose the following algorithm (see Fig. 1). On any given keyframe, precompute the features for the next keyframe. On intermediate frames, warp the previous keyframe’s features, Nfeat(Ik), forward to the current frame Ii using incremental forward motion estimates, −mv[k : i]. Warp the next keyframe’s features, Nfeat(Ik+n), backward to the current frame using incremental backward motion estimates, mv[k+n : i]. Fuse the two feature maps using either a weighted average or learned fusion operator, F . Then execute the task network Ntask on the fused features. This forms Algorithm 2. A formal statement is included in Appendix: Sec. 6.1.
To eliminate redundant computation, on keyframes, we precompute forward and backward warped feature maps f^f, f^b corresponding to each subsequent intermediate frame, {Ik+1, ..., Ik+n−1}. For keyframe interval n, this amounts to n − 1 forward and n − 1 backward warped feature maps.
3.4.1 FEATURE FUSION
We consider several possible fusion operators: max fusion, average fusion, and convolutional fusion (Feichtenhofer et al. (2016)). We implement max and average fusion by aligning the input feature maps f^f, f^b ∈ R^{1×C×h×w} along the channel dimension, and computing a max or average across each pixel in corresponding channels, a parameter-free operation. We implement conv fusion by stacking the input feature maps along the channel dimension [f^f, f^b]_C = f^s ∈ R^{1×2C×h×w}, and applying a bank of learned, 1x1 conv filters to reduce the channel dimensionality by a factor of two.
Before applying the fusion operator, we weight the two input feature maps f^f, f^b by scalars α and 1 − α, respectively, that correspond to feature relevance, a scheme that works very effectively in practice. For keyframe interval n, and a frame at offsets p and n − p from the previous and next keyframes, respectively, we set α = (n−p)/n and 1 − α = p/n, thereby penalizing the input features warped farther from their keyframe. Thus, when p is small relative to n, we weight the previous keyframe’s features more heavily, and vice versa. In summary, the features for intermediate frame Ii are set to: fi = F((n−p)/n · f^f, p/n · f^b), where p = i mod n. This scheme is reflected in Alg. 2.
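A small PyTorch sketch of the distance-weighted fusion operator F described above is given below; the channel width and module name are illustrative assumptions, while the α = (n−p)/n weighting and the max / average / 1x1-conv variants follow the text.

```python
import torch
import torch.nn as nn

class WeightedFusion(nn.Module):
    """Fuse forward- and backward-warped feature maps (a sketch of F).

    mode: 'max', 'avg', or 'conv'. The 1x1 conv variant halves the stacked
    channel dimension (2C -> C); the channel count is an assumption here.
    """
    def __init__(self, channels=2048, mode="avg"):
        super().__init__()
        self.mode = mode
        if mode == "conv":
            self.reduce = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, f_fwd, f_bwd, p, n):
        # Distance-based relevance weights: alpha = (n - p) / n, 1 - alpha = p / n.
        alpha = (n - p) / n
        f_fwd = alpha * f_fwd
        f_bwd = (1 - alpha) * f_bwd
        if self.mode == "max":
            return torch.max(f_fwd, f_bwd)          # element-wise max fusion
        if self.mode == "avg":
            return 0.5 * (f_fwd + f_bwd)            # average fusion
        return self.reduce(torch.cat([f_fwd, f_bwd], dim=1))  # conv fusion
```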
4 EXPERIMENTS
4.0.1 DATASETS
We train and evaluate our system on CamVid (Brostow et al. (2009)) and Cityscapes (Cordts et al. (2016)), two popular, large-scale datasets for complex urban scene understanding. CamVid consists
of over 10 minutes of footage captured at 30 fps and 960 × 720 pixels. Cityscapes consists of 30-frame video snippets shot at 17 fps and 2048 × 1024 pixels. On CamVid, we adopt the standard train-test split of Sturgess et al. (2009). On Cityscapes, we train on the train split and evaluate on the val split, following the example of previous work (Yu et al. (2017); Chen et al. (2017); Zhu et al. (2017)). We use the standard mean intersection-over-union (mIoU) metric to evaluate segmentation accuracy, and measure throughput in frames per second (fps) to evaluate inference performance.
4.0.2 ARCHITECTURE
For our segmentation network, we adopt a variant of the DeepLab architecture called Deformable DeepLab (Dai et al. (2017)), which employs deformable convolutions in the last ResNet block (conv5) to achieve significantly higher accuracy at comparable inference cost to a standard DeepLab model. DeepLab (Chen et al. (2017)) is widely considered a state-of-the-art architecture for semantic segmentation, and a DeepLab implementation currently ranks first on the PASCAL VOC object segmentation challenge (Aytar (2018)). Our DeepLab model uses ResNet-101 as its feature network, which produces intermediate representations fi ∈ R^{1×2048×(h/16)×(w/16)}. The DeepLab task network outputs predictions pi ∈ R^{1×C×h×w}, where C is 12 or 20 for CamVid and Cityscapes respectively.
4.0.3 TRAINING
To train our single-frame DeepLab model, we initialize with an ImageNet-trained ResNet-101 model, and learn task-specific weights on the CamVid and Cityscapes train sets. To train our video segmentation system, we sample at random a labeled image from the train set, and select a preceding and succeeding frame to serve as the previous and next keyframe, respectively. Since motion estimation with block motion vectors and feature warping are both parameter-free, feature propagation introduces no additional weights. Training feature interpolation with convolutional fusion, however, involves learning weights for the 1x1 conv fusion layer, which is applied to stacked feature maps, each with channel dimension 2048. For both schemes, we train with SGD on an AWS EC2 instance with 4 Tesla K80 GPUs for 50 epochs, starting with a learning rate of 10^−3.
4.1 RESULTS
4.1.1 BASELINE
For our accuracy and performance baseline, we evaluate our full DeepLab model on every labeled frame in the CamVid and Cityscapes test splits. Our baseline achieves an accuracy of 68.6 mIoU on CamVid, at a throughput of 3.7 fps. On Cityscapes, the baseline model achieves 75.2 mIoU, matching published results for the DeepLab architecture we used (Dai et al. (2017)), at 1.3 fps.
4.1.2 PROPAGATION AND INTERPOLATION
In this section, we evaluate our two main contributions: 1) feature propagation with block motion vectors (prop-BMV), and 2) feature interpolation, our new feature estimation scheme, implemented with block motion vectors (inter-BMV). We compare to the closest available existing work on the problem, a feature propagation scheme based on optical flow (Zhu et al. (2017)) (prop-flow). We evaluate by comparing accuracy-runtime curves for the three approaches on CamVid and Cityscapes (see Fig. 3). These curves are generated by plotting accuracy against throughput at each keyframe interval in Appendix: Tables 4 and 5, which contain comprehensive results.
First, we note that block motion-based feature propagation (prop-BMV) outperforms optical flow-based propagation (prop-flow) at all but the lowest throughputs. While motion vectors are slightly less accurate than optical flow in general, by cutting inference times by 53% on intermediate frames (Sec. 3.3.1), prop-BMV enables operation at much lower keyframe intervals than optical flow to achieve the same inference rates. This results in a much more favorable accuracy-throughput curve.
Second, we find that our feature interpolation scheme (inter-BMV) strictly outperforms both feature propagation schemes. At every keyframe interval, inter-BMV is more accurate than prop-flow and prop-BMV; moreover, it operates at similar throughput to prop-BMV. This translates to a consistent advantage over prop-BMV, and an even larger advantage over prop-flow (see Fig. 3). On CamVid, inter-BMV actually registers a small accuracy gain over the baseline at keyframe intervals 2 and 3, utilizing multi-frame context to improve on the accuracy of the single-frame DeepLab model.
Metrics. We also distinguish between two metrics: the standard average accuracy, results for which are plotted in Fig. 3, and minimum accuracy, which is a measure of the lowest frame-level accuracy an approach entails, i.e. accuracy on frames farthest away from keyframes. Minimum accuracy is the appropriate metric to consider when we wish to segment a video as efficiently as possible, while ensuring that all frame segmentations meet some threshold level of accuracy. As an example, at an accuracy target of 65 mIoU, feature interpolation enables operation at 20.1 fps on CamVid (see Table 4). This is 2.5× faster than achievable inference speeds with feature propagation alone, using either optical flow (8.0 fps) or block motion vectors (9.3 fps). In general, feature interpolation achieves over twice the throughput as Zhu et al. (2017) on CamVid and Cityscapes, at any target accuracy. Minimum accuracy plots (Fig. 5) are included in the Appendix.
Baseline. We also compare to our frame-by-frame DeepLab baseline, which offers low throughput but high average accuracy. As Figures 3a and 3b indicate, even at average accuracies above 68 mIoU on CamVid and 70 mIoU on Cityscapes, figures competitive with contemporary single-frame models (see Table 1 and Table 2), feature interpolation offers speedups of 4.5× and 4.2×, respectively, over the baseline. Notably, at key interval 3, interpolation obtains a 2.5× speedup over the baseline on CamVid, at slightly higher than baseline accuracy (see Fig. 3a and Table 4).
Delay. Recall that feature interpolation introduces a delay of keyframe interval n frames, which corresponds to n/30 seconds at 30 fps. For example, at n = 3, inter-BMV introduces a delay of 3/30 seconds, or 100 ms. To put this in context, prop-flow (Zhu et al. (2017)) takes 125 ms to segment a frame at key interval 3, and inter-BMV takes 110 ms. Thus, by lagging by less than 1 segmentation, we are able to segment 2.5× more frames per hour than the frame-by-frame model (9.1 fps vs. 3.6 fps). This is a suitable tradeoff in almost all batch settings (e.g. training data generation, post-hoc video analysis), and in many interactive applications (e.g. video anomaly detection, film editing).
Fig. 4 depicts a qualitative comparison of interpolation and prop-flow (Zhu et al. (2017)).
4.1.3 FEATURE FUSION
In this second set of experiments, we evaluate the accuracy gain achieved by feature fusion, in order to isolate the contribution of fusion to the success of our feature interpolation scheme. As Table 3 demonstrates, utilizing any fusion strategy, whether max, average, or conv fusion, results in higher accuracy than using either input feature map alone. This holds true even when one feature map is significantly stronger than the other (rows 2-4), and for both short and long distances to the keyframes. This observed additive effect suggests that feature fusion is highly effective at capturing signal that appears in only one input feature map, and in merging spatial information across time.
5 CONCLUSION
We develop interpolation-BMV, a novel segmentation scheme that combines the use of block motion vectors for feature warping, bi-directional propagation to capture scene context, and feature fusion to produce accurate frame segmentations at high throughput. We evaluate on the CamVid and Cityscapes datasets, and demonstrate significant speedups across a range of accuracy levels, compared to both a strong single-frame baseline and prior work. Our methods are general, and represent an important advance in the effort to operate image models efficiently on video.
6 APPENDIX
6.1 SYSTEM DESIGN
We provide the formal statement of feature interpolation, Algorithm 2.
Algorithm 2 Feature interpolation with block motion vectors (inter-BMV)
1: input: video frames {Ii}, motion vectors mv, keyframe interval n
2: Wf, Wb ← [] . forward, backward warped features
3: for frame Ii in {Ii} do
4:     if i mod n == 0 then . keyframe
5:         fi ← Nfeat(Ii) . curr keyframe features
6:         Si ← Ntask(fi)
7:         fi+n ← Nfeat(Ii+n) . next keyframe features
8:         Wf ← PROPAGATE(fi, n − 1, −mv[i + 1 : i + n])
9:         Wb ← PROPAGATE(fi+n, n − 1, mv[i + n : i + 1])
10:    else . intermediate frame
11:        p ← i mod n . offset from prev keyframe
12:        fi ← F((n−p)/n · Wf[p], p/n · Wb[n − p]) . fuse propagated features
13:        Si ← Ntask(fi)
14:    end if
15: end for
16: output: frame segmentations {Si}
17: function PROPAGATE(features f, steps n, warp array g) . warp f for n steps with g
18:     O ← [f]
19:     for i = 1 to n do
20:         append(O, WARP(O[i − 1], g[i])) . warp features one step
21:     end for
22:     return O
23: end function
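For concreteness, a compact Python sketch of the Algorithm 2 control flow is given below; Nfeat, Ntask, warp, and fuse stand in for the feature network, task network, BMV warp, and fusion operator, and boundary handling at the end of the video is omitted for clarity. The function and argument names are illustrative, not part of the released system.

```python
def propagate(f, steps, warps, warp):
    """Warp features `f` for `steps` steps with the warp fields in `warps`."""
    out = [f]
    for g in warps[:steps]:
        out.append(warp(out[-1], g))                # warp features one step
    return out

def inter_bmv(frames, mv, n, Nfeat, Ntask, warp, fuse):
    """Sketch of Algorithm 2: bi-directional feature interpolation."""
    segs, Wf, Wb = [], [], []
    for i, frame in enumerate(frames):
        if i % n == 0:                              # keyframe
            f_curr = Nfeat(frame)
            f_next = Nfeat(frames[min(i + n, len(frames) - 1)])
            # Precompute warped features for the next n-1 intermediate frames.
            Wf = propagate(f_curr, n - 1, [-mv[j] for j in range(i + 1, i + n)], warp)
            Wb = propagate(f_next, n - 1, [mv[j] for j in range(i + n, i + 1, -1)], warp)
            f_i = f_curr
        else:                                       # intermediate frame
            p = i % n
            f_i = fuse((n - p) / n * Wf[p], p / n * Wb[n - p])
        segs.append(Ntask(f_i))
    return segs
```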
6.2 RESULTS
This appendix section includes full tabular results for the CamVid and Cityscapes datasets (Table 4 and Table 5) and minimum accuracy vs. throughput plots for CamVid and Cityscapes (Figure 5). | 1. What is the main contribution of the paper in accelerating semantic segmentation on videos?
2. What are the strengths of the proposed method, particularly in its novel approach to utilizing block motion vectors?
3. Are there any weaknesses or limitations in the paper's idea or experimental evaluation?
4. How does the reviewer assess the significance and impact of the paper's findings in the context of video semantic segmentation?
5. Can you identify any areas where further research or improvement could be explored? | Review | Review
# Paper summary
This paper advances a method for accelerating semantic segmentation on video content at higher resolutions. Semantic segmentation is typically performed over single images, while there is unused redundancy between neighbouring frames. The authors propose exploiting this redundancy and leveraging block motion vectors from the MPEG H.264 video codec, which encodes residual content between keyframes. The block motion vectors from H.264 are used here to propagate feature maps from keyframes to neighbouring non-keyframe frames (in both temporal directions), thus avoiding an additional full forward pass through the network; this warping is integrated into the training pipeline. Experimental results on CamVid and Cityscapes show that the proposed method gets competitive results while saving computational time.
# Paper strengths
- This paper addresses a problem of interest for both academic and industrial purposes.
- The paper is clearly written and the authors argument well their contributions, adding relevant plots and qualitative results where necessary.
- The two-way interpolation with block motion vectors and the fusion of interpolated features are novel and seem effective.
- The experimental results, in particular for the two-way BMV interpolation, are encouraging.
# Paper weaknesses
- The idea of using Block Motion Vectors from compressed videos (x264, xvid) to capture motion with low-cost has been previously proposed and studied by Kantorov and Laptev [i] in the context of human action recognition. Flow vectors are obtained with bilinear interpolation from motion blocks between neighbouring frames. Vectors are then encoded in Fisher vectors and not used with CNNs as done in this paper. In both works, block motion vectors are used as low-cost alternatives to dense optical flow. I would suggest to cite this work and discuss similarities and differences.
- Regarding the evaluation of the method, some recent methods dealing with video semantic segmentation, also using ResNet101 as backbone, are missing, e.g. low latency video semantic segmentation[ii]. Pioneer Clockwork convnets are also a worthy baseline in particular in terms of computational time (results and running times on CityScapes are shown in [ii]). It would be useful to include and compare against them.
- In Section 4.1.2 page 7 the authors mention a few recent single-frame models ((Yu et al. (2017); Chen et al. (2017); Lin et al. (2017); Bilinski & Prisacariu (2018)) as SOTA methods and the current method is competitive with them. However I do not see the results from the mentioned papers in the referenced Figures. Is this intended?
- On a more general note related to this family of approaches, I feel that their evaluation is usually not fully convincing. Authors compare against similar pipelines for static processing and show gains in terms of computation time. The backbone architecture, ResNet-101, is already costly for high-resolution inputs to begin with, and avoiding a full forward pass brings quite some gains (though a part of this gain is subsequently attenuated by the latency caused by the batch processing of the videos). There are recent works in semantic segmentation that focus on architectures with fewer FLOPs or memory requirements than ResNet-101, e.g. Dilated ResNets [iii], LinkNet [iv]. So image-based pipelines could be expected to achieve similar or better performance in less time. I expect the computational gain on such architectures when using the proposed video processing method to be lower than for ResNet-101, and it would make the decision of switching to video processing or staying with frame-based predictions more complex.
The advantage of static image processing is simpler processing pipelines at test time without extra parameters to tune. It would be interesting and useful to compare with such approaches on more even grounds.
# Conclusion
This paper takes on an interesting problem and achieves interesting results. The use of Block Motion Vectors has been proposed before in [i], so the main novelty of the paper is the interpolation of feature maps using BMV. The experimental section is missing some recent related methods to benchmark against.
This work has several strong and weak points. I'm currently on the fence regarding my decision. For now I'm rating this work between Weak Reject and Borderline
# References
[i] V. Kantorov and I. Laptev, Efficient feature extraction, aggregation and classification for action recognition, CVPR 2014
[ii] Y. Li et al., Low-Latency Video Semantic Segmentation, CVPR 2018
[iii] F. Yu et al., Dilated Residual Networks, CVPR 2017
[iv] A. Chaurasia and E. Culurciello, LinkNet: Exploiting Encoder Representations for Efficient Semantic Segmentation, arXiv 2017 |
ICLR | Title
Masked Vision and Language Modeling for Multi-modal Representation Learning
Abstract
In this paper, we study how to use masked signal modeling in vision and language (V+L) representation learning. Instead of developing masked language modeling (MLM) and masked image modeling (MIM) independently, we propose to build joint masked vision and language modeling, where the masked signal of one modality is reconstructed with the help of the other modality. This is motivated by the nature of image-text paired data: both the image and the text convey almost the same information, but in different formats. The masked signal reconstruction of one modality conditioned on another modality can also implicitly learn cross-modal alignment between language tokens and image patches. Our experiments on various V+L tasks show that the proposed method, along with common V+L alignment losses, achieves state-of-the-art performance in the regime of millions of pre-training data. Also, we outperform the other competitors by a significant margin in limited data scenarios.
1 INTRODUCTION
Vision and language (V+L) representation learning has gained significant attention due to the transferability of the representations to a diverse set of downstream tasks such as zero- or few-shot visual recognition (Jia et al., 2021; Radford et al., 2021; Tsimpoukelli et al., 2021), object detection (Cai et al., 2022; Kamath et al., 2021), information retrieval (Li et al., 2022; 2021), and multi-modal generation (Ramesh et al., 2022; 2021), etc. This success is mainly driven by large-scale pre-training with paired image and text data. The current V+L pre-training techniques particularly focus on the representation learning that characterizes the association between vision and language, and they are largely inspired by self-supervised learning techniques (Devlin et al., 2018; He et al., 2020) in uni-modal learning.
Masked signal modeling is a popular self-supervisory pre-training task (Devlin et al., 2018; Liu et al., 2019; Yang et al., 2019; Bao et al., 2021; Xie et al., 2021; He et al., 2022), which aims at reconstructing the masked signals from the unmasked ones. It has been independently explored in the domains of natural language processing (NLP) and computer vision (Devlin et al., 2018; Liu et al., 2019; Yang et al., 2019; Bao et al., 2021; Xie et al., 2021; He et al., 2022). For example, in the domain of NLP, BERT (Devlin et al., 2018) and several follow-up works (Liu et al., 2019; Yang et al., 2019) utilize masked language modeling (MLM) where the model is expected to predict the masked text tokens using unmasked tokens. They have shown that MLM leads to powerful generalization performance across diverse NLP tasks. In computer vision, as shown in Figure 1 (top-left), the masked image modeling (MIM) is to predict masked pixels or image patches using unmasked portions of the images. MIM has shown to be an effective pre-training task for learning visual representations (Bao et al., 2021; Xie et al., 2021; He et al., 2022).
While MLM and MIM have been actively explored in each domain, existing works do not fully leverage the masked multi-modal signal modeling in the domain of V+L. For example, as shown in Figure 1 (bottom-left), several approaches rely only on MLM with unmasked images and do not model the masked images (Duan et al., 2022; Li et al., 2022; 2021; 2019; Yang et al., 2022). In this case, the distribution of text given image, p(T|I), can be learned, but the distribution of image given text, p(I|T), cannot. This will potentially lead to biased performance in cross-modal retrieval tasks such as image-to-text or text-to-image retrieval, as shown in our experiments. Although there
exist works that use both modality signals masked, they either use a frozen object detector to extract region-based visual features (Chen et al., 2020b; Li et al., 2020a; Lu et al., 2020; Su et al., 2019; Tan & Bansal, 2019) or mask the image tokens from a pre-trained image tokenizer instead of the raw RGB pixels (Dou et al., 2022; Fu et al., 2021; Singh et al., 2022). These frozen object detector and image tokenizer not only require additional data for training but also prevent the V+L interactions from being learned end-to-end.
In this paper, we propose joint masked V+L modeling where the original signal is reconstructed by using its masked input and the corresponding unmasked input from the other modality. As illustrated in Figure 1 (right part), although we exploit random masking, the dog face in the image can be used to predict the masked text token “dog” and the text “green ball” can be used to reconstruct the corresponding masked patches in the image. To ensure that the model uses information from both modalities, we explicitly enforce the model to utilize cross-attention to generate the joint representations. Compared with the aforementioned existing works, our approach models both conditional distributions, p(I|T ) and p(T |I). Also, the model is trained end-to-end, without frozen bottleneck components that disturb learning interactions between V+L. By reconstructing one modality signal from the corresponding the other modality signal (e.g. reconstructing the text “dog” from the visual signals of dog face), the model implicitly learns the alignment between V+L. In addition, we observe that the model trained for the joint masked V+L modeling becomes noticeably effective when the training data is limited. Overall, our contributions are summarized as below:
1. We propose a joint masked V+L modeling task. We show that models pre-trained with the proposed task, along with common V+L alignment tasks such as image-text matching, achieve state-of-the-art performance on a broad range of V+L tasks.
2. We provide a probabilistic interpretation of the proposed method and highlight the difference between ours and existing approaches in terms of the V+L joint distribution estimation.
3. We achieve significantly better performance than other V+L models in the regimes of limited training data, and only ∼40% of data used by the state-of-the-art models is sufficient to match their performance.
2 RELATED WORK
Vision and language representation learning The methods in V+L representation learning can be categorized based on how the information is fused between the modalities to obtain the joint representations. We group the fusion techniques into three categories: 1) transformers with attention across modalities, 2) contrastive learning with a large-scale pre-training data, 3) a hybrid form of learning with cross-attention and a contrastive loss. The attention across modalities has been widely used with image features extracted from off-the-shelf object detectors and text features obtained from transformer encoders (Chen et al., 2020b; Li et al., 2020a; Lu et al., 2020; Tan & Bansal, 2019; Zhang et al., 2021; Li et al., 2020b; Su et al., 2019; Li et al., 2019). While cross-attention effectively aligns V+L representations, it is computationally expensive since all possible pairs of images
and texts need to be processed. On the contrary, the authors in (Jia et al., 2021; Radford et al., 2021; Mokady et al., 2021; Shen et al., 2021; Yuan et al., 2021) show that contrastive learning with uni-modal encoders and millions of image-text pairs can achieve powerful zero-shot performance in diverse V+L tasks. The contrastive learning-based approaches do not rely on computationally expensive cross-attention but require an excessively large amount of training data. Hence, a combination of contrastive loss and cross-attention is utilized by complementing limitations of both approaches in (Li et al., 2021; 2022; Yang et al., 2022; Duan et al., 2022). In particular, only image and text pairs that result in high similarity by uni-modal encoders are processed using the cross-attention layers to reduce the computational burden and improve the alignment.
Masked signal modeling is a commonly used pre-training objective in the aforementioned V+L models. It has been independently explored in each of vision and language domain. In the NLP domain, BERT and its variants (Devlin et al., 2018; Liu et al., 2019) achieve representations that can generalize to a broad range of NLP tasks through MLM. Autoregressive language models (Radford et al., 2018; 2019) which predict masked future tokens have shown to be effective self-supervised learners. The success of the language models leads to several MIM techniques. BeiT (Bao et al., 2021) is trained to recover masked visual tokens which are obtained by a discrete variational autoencoder (dVAE). In SimMIM (Xie et al., 2021) and MAE (He et al., 2022), transformers are trained to recover masked patches in an end-to-end fashion. The authors in (Chen et al., 2020a) propose to autoregressively predict the unknown pixels to learn visual representations. In (Bachmann et al., 2022), multiple vision modality data are masked and reconstructed to learn visual representations. In the domain of V+L learning, (Arici et al., 2021) explores MIM and MLM for catalog data with short text attributes. V+L models with an object detector often aim at recovering only bounding box visual features (Chen et al., 2020b; Li et al., 2020a; Lu et al., 2020; Tan & Bansal, 2019; Su et al., 2019). Several V+L models focus on predicting future text tokens without MIM (Wang et al., 2021; Yu et al., 2022; Alayrac et al., 2022). While both MIM and MLM are explored in (Geng et al., 2022), the trained model is evaluated only on vision tasks. In (Dou et al., 2022; Fu et al., 2021; Singh et al., 2022; Wang et al., 2022), image tokens defined by image tokenizers trained with additional 250 million images (Ramesh et al., 2021) or distillation from the CLIP model (Radford et al., 2021) trained with 400 million image-text pairs (Peng et al., 2022) are reconstructed. In our work, we eliminate these model and data dependencies, by directly recovering RGB pixels and text tokens from masked image patches and masked text tokens. Therefore, MIM and MLM are seamlessly integrated to achieve generalizable V+L representations within a simple training framework.
3 METHOD
Our method has two types of pre-training objectives, which are 1) masked vision and language modeling and 2) multi-modal alignment. We explain each pre-training objective in this section.
3.1 MASKED VISION AND LANGUAGE MODELING
The overall framework of masked vision and language modeling is shown in Figure 2. We use transformer-based encoders (Vaswani et al., 2017) for both image and text streams. Given an imagetext pair (I, T ), an image encoder, fim, is used to extract features, v = {vcls, v1, ..., vN}, from the image input I . N is the number of image patches and vcls is the encoding of the image class token, [CLS]. The text encoder, ftxt, extracts features, w = {wcls, w1, ..., wM}, from the text input, T . M is the number of text tokens and wcls is the encoding of the start token of a sentence, [START]. The image and the text encoder consist of 12 self-attention blocks as shown in Figure 3 (a). The image and the text features are further processed by image and text cross-modality encoders. The cross-modality encoders have 3 cross-attention blocks as illustrated in Figure 3 (b). The image (text) cross-modality encoder uses text (image) features to generate attentions. These cross-modality encoders can enhance the representation of one modality by interacting with another modality (Lu et al., 2020; Tan & Bansal, 2019).
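To make the cross-modality encoder concrete, below is a minimal PyTorch sketch of one cross-attention block of the kind sketched in Figure 3 (b): self-attention on the modality being encoded, cross-attention that queries the other modality, and a feed-forward layer. The dimensions, normalization placement, and naming are illustrative assumptions rather than the exact configuration used in the model.

```python
import torch.nn as nn

class CrossAttentionBlock(nn.Module):
    """One cross-attention block (sketch): self-attention on the query
    modality, cross-attention to the other modality, then an MLP."""
    def __init__(self, dim=768, heads=12, mlp_ratio=4):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)
        self.norm3 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(
            nn.Linear(dim, mlp_ratio * dim), nn.GELU(), nn.Linear(mlp_ratio * dim, dim)
        )

    def forward(self, x, context):
        # x: tokens of the modality being encoded; context: the other modality.
        h = self.norm1(x)
        x = x + self.self_attn(h, h, h)[0]
        x = x + self.cross_attn(self.norm2(x), context, context)[0]
        return x + self.mlp(self.norm3(x))
```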
Image and Text Masking: For text masking, we follow BERT (Devlin et al., 2018) with minor modifications. In BERT, the original tokens are replaced with either the [MASK] token or random tokens. We use only the [MASK] token to replace tokens to be masked (Wettig et al., 2022). For image masking, we follow (He et al., 2022; Xie et al., 2021) and use random masking of raw image patches with a masking patch size of 32× 32. Given that 224× 224 images are divided into 16× 16 patches for the image encoder, the large masking patch prevents the model from simply copying their neighborhood pixels for reconstruction (Xie et al., 2021).
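As a sketch of the masking described above, the snippet below draws a random 32×32-patch image mask at the 60% ratio used for pre-training and replaces 30% of text tokens with the [MASK] id only (no random-token replacement); the function names, tensor layouts, and the handling of special tokens are illustrative assumptions.

```python
import torch

def random_image_mask(img_size=224, mask_patch=32, ratio=0.6):
    """Binary (img_size x img_size) map where 1 marks masked pixels, built
    from 32x32 masking patches at a 60% masking ratio."""
    grid = img_size // mask_patch                       # e.g. a 7x7 masking grid
    n_mask = int(ratio * grid * grid)
    flat = torch.zeros(grid * grid)
    flat[torch.randperm(grid * grid)[:n_mask]] = 1.0
    mask = flat.view(grid, grid)
    # Expand each masking-grid cell back to mask_patch x mask_patch pixels.
    return mask.repeat_interleave(mask_patch, dim=0).repeat_interleave(mask_patch, dim=1)

def mask_text_tokens(token_ids, mask_token_id, special_ids=(), ratio=0.3):
    """Replace ~30% of non-special tokens with the [MASK] id; returns the
    masked ids and a boolean mask of masked positions."""
    ids = token_ids.clone()
    candidates = torch.tensor([t not in special_ids for t in ids.tolist()])
    masked = (torch.rand(ids.shape) < ratio) & candidates
    ids[masked] = mask_token_id
    return ids, masked
```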
Joint Reconstruction: We reconstruct the original signals of one modality from its masked input conditioned on the unmasked input of the other modality. Specifically, an original image, I, and a masked text, Tm, are used to reconstruct an original text, T, and similarly a masked image, Im, and an original text, T, are used to reconstruct the original image, I. For image reconstruction, (Im, T) is first given to the image and the text encoders to obtain masked image features, vm, and unmasked text features, w. Following (Xie et al., 2021), we use both masked and unmasked patches to obtain vm. (vm, w) are further processed by the image cross-modality encoder, gim, where w is used to compute cross-attentions. The output of gim is mapped back to the original RGB image space by an image cross-modality decoder, gim^de, which consists of 3 cross-attention blocks followed by a fully connected layer (FC). Although existing work exploits a light-weight transformer decoder with only self-attention (He et al., 2022) or a simple linear mapping (Xie et al., 2021) for the image decoder, we use joint information between modalities to allow further interactions in decoding. For masked text reconstruction, a token classifier, gtxt^de, which consists of a FC followed by softmax, is applied to the output of the text cross-modality encoder, gtxt, for the token prediction. The masked V+L modeling loss, LMVLM, is defined as
$$\mathcal{L}_{MVLM} = \mathbb{E}_{(I,T)\sim D}\Big[\underbrace{H\big(y_T^M,\, \phi_{txt}^M(I, T_m)\big)}_{\text{MLM}} \;+\; \underbrace{\tfrac{1}{\Omega(I^M)}\,\big\|I^M - \phi_{im}^M(I_m, T)\big\|_1}_{\text{MIM}}\Big], \qquad (1)$$
where φtxt = gtxt^de(gtxt(fim(I), ftxt(Tm))) and φim = gim^de(gim(fim(Im), ftxt(T))). The loss is computed only for masked pixels and text tokens. Hence, the superscript M denotes data or features corresponding to the masked signals. A pair of I and T is sampled from the training dataset D, H denotes cross-entropy, and yT^M is a matrix that contains one-hot row vectors for the ground truth of masked text tokens. Ω(·) is the number of pixels. When minimizing LMVLM, the model is enforced to reconstruct the original signals by attending to the other modality signals. Cross-attending for reconstruction enables the model to learn interaction between V+L modalities.
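As a sketch of how Eq. (1) could be computed in practice, the snippet below assumes the text branch returns token logits and the image branch returns reconstructed pixels, with boolean masks marking the masked positions; the tensor shapes and names are illustrative rather than the exact implementation.

```python
import torch
import torch.nn.functional as F

def mvlm_loss(text_logits, text_targets, text_mask, img_recon, img_target, pixel_mask):
    """Sketch of Eq. (1): MLM cross-entropy on masked text tokens plus an L1
    reconstruction loss averaged over masked pixels only."""
    # MLM: text_logits (B, M, V), text_targets (B, M), text_mask (B, M) bool.
    mlm = F.cross_entropy(text_logits[text_mask], text_targets[text_mask])
    # MIM: img_recon, img_target (B, 3, H, W); pixel_mask (B, 1, H, W) bool.
    masked_px = pixel_mask.expand_as(img_target)
    mim = (img_recon - img_target).abs()[masked_px].mean()
    return mlm + mim
```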
3.2 MULTI-MODAL ALIGNMENT
In addition to the masked signal modeling tasks, we adopt two additional tasks to explicitly learn multi-modality alignment. The first one is an image-text contrastive (ITC) learning (Radford et al., 2021; Jia et al., 2021). For the k-th pair of image and text features out of the image and text encoders, two separate FC layers are used to project the image [CLS] token features and the text [START] token features to the same dimensional feature space with unit norm, zim^k and ztxt^k, respectively. The loss, LITC, is computed as
$$\mathcal{L}_{ITC} = -\frac{1}{N}\sum_{k=1}^{N}\Bigg[\log\frac{\exp(z_{im}^k \cdot z_{txt}^k/\tau)}{\sum_{n=1}^{N}\exp(z_{im}^k \cdot z_{txt}^n/\tau)} + \log\frac{\exp(z_{im}^k \cdot z_{txt}^k/\tau)}{\sum_{n=1}^{N}\exp(z_{im}^n \cdot z_{txt}^k/\tau)}\Bigg], \qquad (2)$$
where N and τ are the batch size and the temperature scaling parameter, respectively. The second task is an image-text matching (ITM) (Chen et al., 2020b; Li et al., 2021; 2020b), predicting whether an image and a text are aligned or not. The [CLS] and [START] token features from the image and text cross-modality encoders are zim^cross and ztxt^cross, respectively. To fuse these two features, we compute the element-wise product of zim^cross and ztxt^cross (zim^cross ∗ ztxt^cross), and a FC layer followed by softmax is applied to obtain the final prediction (Lu et al., 2019). For training, we use yITM = 1 when zim^cross and ztxt^cross are a pair. Otherwise, yITM = 0. The loss, LITM, is defined as
$$\mathcal{L}_{ITM} = \mathbb{E}_{(I,T)\sim D}\big[H\big(y_{ITM},\, g_{itm}(z_{im}^{cross} \ast z_{txt}^{cross})\big)\big]. \qquad (3)$$
Following (Li et al., 2021), we sample in-batch hard negatives based on the distribution of the cosine similarity between zim and ztxt. The overall pre-training loss, L, is defined as L = LMVLM + LITC + LITM. We term our model trained with loss L as MaskVLM (Masked Vision and Language Modeling).
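For illustration, a minimal PyTorch implementation of the symmetric contrastive loss in Eq. (2) is shown below; it assumes zim and ztxt are the unit-normalized projections described above and that matched pairs share the same batch index (the temperature value shown is illustrative).

```python
import torch
import torch.nn.functional as F

def itc_loss(z_im, z_txt, tau=0.07):
    """Sketch of Eq. (2): image-to-text and text-to-image InfoNCE terms over
    an in-batch (N x N) similarity matrix."""
    logits = z_im @ z_txt.t() / tau                    # (N, N) cosine similarities / tau
    targets = torch.arange(z_im.size(0), device=z_im.device)
    loss_i2t = F.cross_entropy(logits, targets)        # image -> text term
    loss_t2i = F.cross_entropy(logits.t(), targets)    # text -> image term
    return loss_i2t + loss_t2i                         # matches the sum in Eq. (2)
```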
3.3 PROBABILISTIC INTERPRETATION
We differentiate MaskVLM from the existing V+L models using masked signal modeling from a perspective of likelihood estimation. The training objective of masked signal modeling on uni-modal signals, X, focuses on learning the data distribution p(X), which is formulated by the law of total probability as $p(X) = \sum_{X_m \in M_X} p(X_m) \cdot p(X|X_m)$, where X_m is an instance of masked signal from the set of all possible masked signals, M_X. MIM or MLM learns the data distribution by maximizing $\sum_{X_m \in M_X} p(X|X_m)$ (Bengio et al., 2013).
In V+L representation learning, the ultimate goal is to learn the joint distribution of multi-modal signals, p(I, T). However, the authors in (Sohn et al., 2014) pointed out that directly maximizing the likelihood for the joint distribution is challenging because of the heterogeneous multi-modal data distributions. Instead, they show that minimizing the variation of information, defined as $-\mathbb{E}_{(I,T)\sim D}(\log p(I|T) + \log p(T|I))$, is sufficient to estimate the joint distribution. From a perspective of variation of information, the limitations in existing works can be better understood. Several existing works attempted to approximate the joint distribution using MLM with unmasked image (Duan et al., 2022; Li et al., 2021; 2019; Yang et al., 2022). In other words, p(T|I, Tm) is maximized to learn the conditional distribution, p(T|I), but p(I|T) is not modeled. In other existing works (Chen et al., 2020b; Li et al., 2020a; Lu et al., 2020; Su et al., 2019; Tan & Bansal, 2019), where both modalities are masked, the visual masking is limited to masking the visual features extracted from a frozen object detector, ψ(·), instead of the raw image pixels. In this case, the distributions p(ψ(I)|T) and p(T|ψ(I)) are modeled instead of p(I|T) and p(T|I). This frozen feature extractor can bottleneck the direct estimation of the underlying data distribution. MaskVLM is trained end-to-end to estimate both conditional distributions, p(I|T) and p(T|I), which directly minimizes the variation of information. We hypothesize this modeling of conditional distributions for both modalities could lead to superior performance in both large-scale and limited data training scenarios, which we empirically demonstrate in Section 4.
4 EXPERIMENTS
4.1 PRE-TRAINING DATASETS AND DOWNSTREAM TASKS
We use the union of four datasets for pre-training so that we can perform a fair comparison with existing state-of-the-art methods (Chen et al., 2020b; Li et al., 2021). These datasets are Conceptual Captions (CC) (Sharma et al., 2018), SBU Captions (Ordonez et al., 2011), Visual Genome (VG) (Krishna et al., 2017), and COCO Captions (Lin et al., 2014). While VG and COCO contain captions annotated by humans, CC and SBU Captions are automatically collected from the web. The total number of unique images and image-text pairs in the four datasets are 4.1M and 5.2M, respectively. We term this pre-training dataset as the 4M dataset. We validate the pre-trained model on the following four downstream tasks:
Image-Text Retrieval: We perform text-to-image and image-to-text retrieval. We use the ITC and ITM losses of Section 3.2 for finetuning and the finetuned models are evaluated on COCO (Lin et al., 2014) and Flickr30k (Plummer et al., 2015). In addition, since COCO is used for pre-training, zero-shot retrieval performance is reported on Flickr30k. In (Li et al., 2021), the model finetuned on COCO is used for the zero-shot evaluation on Flickr30k. Although it may result in better performance, we believe that using the finetuned model does not validate the zero-shot capability of the
pre-trained model. Therefore, we use the pre-trained model directly for zero-shot evaluation. Following (Li et al., 2021), we first retrieve top-k candidates using the similarity scores from the image and the text encoders. The top-k candidates are further processed by the cross-modality encoders to obtain the final retrieval results.
Visual Question Answering (VQA): Here, given an image and a question pair, the model should generate a correct answer. The model is evaluated on VQA v2 (Goyal et al., 2017). We adopt the answer generation framework (Cho et al., 2021) and finetune the base model with a fusion encoder and an answer decoder. The model architectures are visualized in Figure 6 (a) of Appendix. The fusion encoder consists of one cross-attention block shown in Figure 3 (b). The output from the text cross-modality encoder is used as queries, and the image cross-modality encoder output is utilized to create attentions in the fusion encoder. The architecture of the answer decoder is the same as that of the text cross-modality encoder, but it is trained with a language modeling loss to generate the answers. Specifically, the output of the fusion encoder is used for computing attentions and the answer tokens are autoregressively predicted. During inference, [START] token is used as an initial token to generate following answer tokens.
Natural Language for Visual Reasoning (NLVR): This tasks involves a binary classification with a triplet, (text, image1, image2). The goal here is to predict whether the text describes the pair of images. For finetuning, we feedforward (text, image1) and (text, image2) separately to extract the features as shown in Figure 6 (b). The [CLS] token features of image1 and image2 from the image encoder are denoted as v1 and v2, respectively. The [START] token text features from the text encoder is w. These features are processed by the cross-modality encoders. The outputs of the image and text cross-modality encoders are fused by element-wise multiplication. The fused features for both images are concatenated, and a classifier with two linear layers predicts whether the text is aligned with the image pair or not. NLVR2 (Suhr et al., 2018) is used for the evaluation.
Visual Entailment (VE): Given an image text pair, the task is to classify the relationship between the image and the text into one of three categories: entailment, neutral, and contradictory. The element-wise product of the output from the image and the text cross-modality encoders is forwarded to a classifier of two linear layers for prediction. SNLI-VE (Xie et al., 2019) is used for evaluation.
4.2 IMPLEMENTATION DETAILS
We use a Visual Transformer (ViT) (Dosovitskiy et al., 2020) pre-trained on ImageNet (Deng et al., 2009) and a pre-trained RoBERTa from (Liu et al., 2019) to initialize the image and the text encoder, respectively. We pre-train the model for 50 epochs when the 4M dataset is used and 30 epochs for all other experiments. A batch size of 512 is used with 16 NVIDIA Tesla V100 GPUs. All parameters are optimized using AdamW (Loshchilov & Hutter, 2017) with a weight decay of 0.05. Following (Xie et al., 2021), we use the image masking ratio of 60%. While 15% masking ratio is used for text in language models (Devlin et al., 2018; Liu et al., 2019), we use 30% since the paired image can provide additional information for text reconstruction. During pre-training, the learning rate is warmed up to 3×10^−4 in the first 5 epochs and decayed to 3×10^−5 using a cosine scheduler. The learning rates for the image encoder and the text encoder are set to 10^−5, which is less than that of the cross-modality encoders. An image size of 224 × 224 and RandAugment (Cubuk et al.,
2020) are used. During finetuning, the image is resized to 384× 384 and the positional encoding is interpolated following (Dosovitskiy et al., 2020). More details can be found in Appendix.
4.3 EVALUATION ON IMAGE-TEXT RETRIEVAL, VQA, NLVR, AND VE
We compare the finetuned image-text retrieval performance of the proposed MaskVLM with the state-of-the-art methods in Table 1. The second column is the number of unique images used for pretraining and the retrieval performance is evaluated in terms of Recall@k (R@k). We do not directly compare with ALIGN (Jia et al., 2021) since it is trained with more than 300 times of data used for MaskVLM. However, we still highlight the small performance gap between MaskVLM and ALIGN. We achieve the best performance in all Recall@k metrics on both COCO and Flickr30k except for the image retrieval R@10 and text retrieval R@5 on Flickr30k. Compared to ALIGN, we even achieve higher R@1 for image retrieval on COCO and text retrieval on Flickr30k. Table 2 shows the zero-shot retrieval performance of the state-of-the-art methods on Flickr30k. MaskVLM achieves a significant improvement over the second best method, ALBEF (Li et al., 2021), by 6.8 points at R@1 for image retrieval. Given that ALBEF is not trained for MIM, we hypothesize that ALBEF achieves the biased performance for text retrieval and MaskVLM achieves the significant improvement in image retrieval by additional MIM which models p(I|T ). While FLAVA exploits both MLM and MIM with the pre-trained image tokenizer, using 13 times more data than MaskVLM, MaskVLM still outperforms FLAVA by 9.8 and 19.3 points at R@1 for image and text retrieval respectively. Compared with CLIP (Radford et al., 2021) which is trained with at least 76 times more data than MaskVLM, we still achieve higher R@1 for image retrieval by 6.3 points. In general, MaskVLM achieves state-of-the-art performance in both finetuning and zero-shot experiments.
We report the accuracies on VQA, NLVR, and VE in Table 3. Except for SimVLM whose pretraining data is more than 300 times larger than that of MaskVLM, we consistently achieve the best
performances in all these tasks except for the validation split of NLVR2. In particular, MaskVLM is better than the second best method by 0.43, 1.14, and 0.27 on the test splits of VQA, NLVR2, and SNLI-VE, respectively. Compared to the base model of SimVLM, we narrow the accuracy gaps to 2.74% and 3.48% in test-std and test splits of VQA and VE, respectively. MaskVLM achieves higher accuracy than SimVLMbase in the test split of NLVR2 by 0.21%.
4.4 EVALUATION WITH LIMITED PRE-TRAINING DATA
We highlight the performance of MaskVLM in limited data scenarios. In particular, we create three subsets of the 4M pre-training data by sampling 50%, 25%, and 10% of CC and combining them with COCO. The number of image-text pairs in each subset is around 39%, 25%, and 16% of the 4M pre-training data, which contains 5.2M pairs, respectively. We pre-train models with these subsets of the data and analyze the downstream task performance in comparison with state-of-the-art methods. The results are reported in Table 4. Particularly, image and text retrieval R@1 performance on COCO is also visualized in Figure 4. We compare MaskVLM with
the most recent state-of-the-art methods, which are ALBEF (Li et al., 2021) and Codebook (Duan et al., 2022). In Table 4, as the size of pre-training data becomes smaller from CC 50% + COCO to CC 10% + COCO, the performance gap between MaskVLM and ALBEF increases from 6.39 to 8.71 at R@1 in COCO image retrieval (IR), 7.46 to 9.04 at R@1 in COCO text retrieval (TR), 1.17 to 1.79 in VQA and 0.31 to 1.24 in the test set of SNLI-VE. In NLVR2 and VQA, MaskVLM trained with CC 10% + COCO achieves higher accuracy than ALBEF trained with CC 50% + COCO, which contains more than twice the image-text pairs in CC 10% + COCO. In Figure 4, while Codebook shows competitive recall performance compared to the MaskVLM with the 4M dataset (5.2M pairs), the R@1 differences in image and text retrieval, respectively, increase from 1.4 and 1.0 in the 4M dataset to 3.15 and 3.80 in CC 50% + COCO. Our model trained with CC 25% + COCO outperforms Codebook trained with CC 50% + COCO by 1.90 and 2.76 points in terms of image and text retrieval R@1, respectively. Since one of the main differences in MaskVLM compared to ALBEF and Codebook is the additional MIM, we believe that joint modeling of V+L contributes to better performance in limited data scenarios.
4.5 ABLATION STUDY
We perform an ablation study using different combinations of loss components to highlight the contribution of masked V+L modeling. We compare six models with the same architecture but with different loss functions for pre-training. We pre-train all models on the CC 50% + COCO dataset and compare finetuned and zero-shot retrieval performance on Flickr30k in Table 5. We note that zero-shot evaluation of the MLM + MIM model cannot be performed because the FC layers to compute ITM and ITC are not trained during pre-training. ITC and ITM are closely related to the retrieval task since they are used for finetuning and measuring the similarity between images and texts. However, MLM + MIM still achieves significantly better finetuned and zero-shot performance than ITC, which shows that MLM + MIM alone learns meaningful V+L representations. Compared to ITC+ ITM in the finetuned performance, ITC + ITM + MLM achieves slightly improved R@1
by 0.38 in image retrieval and degraded R@1 by 0.3 in text retrieval. When MIM alone is used with ITC + ITM as well, finetuned R@1 is improved by 0.16 and degraded by 0.8 for image and text retrieval, respectively, over ITC + ITM. On the other hand, when ITC + ITM + MLM + MIM is used, the model achieves significant improvement of finetuned performance over ITC + ITM + MLM by 0.92 and 2.10 for R@1 image and text retrieval, respectively. ITC + ITM + MLM + MIM also obtains the best performance in zero-shot retrieval. This result further supports the advantage of joint modeling for masked V+L signals.
4.6 QUALITATIVE RESULTS
We perform a qualitative analysis to show the role of multi-modal information in the reconstruction of masked signals from our model. To be specific, we illustrate the prediction of masked text tokens with and without the corresponding images. This illustration highlights how MaskVLM effectively utilizes information from both modalities to complete the masked signal modeling task. Figure 5 shows the reconstruction of masked texts using original images (“Recon (org)”) and masked images (“Recon (mask)”). In the top example, when the model is given a masked text and a masked image which does not contain the “dog”, the reconstruction is performed by using only available information such as image patches of “green grass”. Thus, the prediction is limited to “a brown of motion” or “the green forest”. However, when the original image is used for reconstruction, both “a brown and white dog” and “white fence” are reconstructed by accurately attending to the image. In the bottom example, the visible patches of the masked image contain a few people, but lack background information. Consequently, the reconstruction with the masked image does not contain any background information, but the background “cave” is reflected in the reconstruction with the original image. These examples confirm that MaskVLM has learned to perform masked signal modeling using both V+L information.
5 CONCLUSION
We propose masked vision and language modeling as a pre-training task for learning V+L representations. We provide a probabilistic interpretation to highlight the contribution of the proposed method and validate its advantages in both large-scale and limited data regimes. We consistently achieve the state-of-the-art performance in a broad range of V+L tasks.
ACKNOWLEDGMENTS
We thank Jiali Duan for providing results of the Codebook (Duan et al., 2022) with limited pretraining data in Figure 4.
A APPENDIX
A.1 DETAILS ON FINETUNING FOR DOWNSTREAM TASKS
We explain implementation details for each of the downstream tasks. For all the downstream tasks, we use AdamW (Loshchilov & Hutter, 2017) with a weight decay of 0.05 and the cosine scheduler. An image size of 384 × 384 with RandAugment (Cubuk et al., 2020) is utilized and the positional encoding is interpolated following (Dosovitskiy et al., 2020). Except for the VQA task, we use the model that achieves the best performance on the validation set to report the performance on the test set. We use the last epoch model for the VQA evaluation.
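For concreteness, a minimal sketch of this shared finetuning optimizer and schedule is given below; the scheduler class, the per-step decay granularity, and the helper name build_finetune_optimizer are illustrative assumptions rather than the authors' exact implementation.

from torch.optim import AdamW
from torch.optim.lr_scheduler import CosineAnnealingLR

def build_finetune_optimizer(backbone_params, head_params, epochs, steps_per_epoch,
                             backbone_lr=1e-5, head_lr=1e-4):
    # AdamW with weight decay 0.05 and a cosine schedule, as described above.
    # Two parameter groups mirror the lower backbone / higher new-head rates
    # used for NLVR and VE; the exact rates vary per task.
    optimizer = AdamW([{"params": backbone_params, "lr": backbone_lr},
                       {"params": head_params, "lr": head_lr}],
                      weight_decay=0.05)
    scheduler = CosineAnnealingLR(optimizer, T_max=epochs * steps_per_epoch)
    return optimizer, scheduler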
Image-Text Retrieval: COCO (Lin et al., 2014) and Flickr30k (Plummer et al., 2015) are used to report the performance. To be specific, we follow data splits proposed in (Karpathy & Fei-Fei, 2015) and an average recall over image and text retrieval is used to find the best model in the validation set. The pre-trained model is finetuned for 15 epochs with a batch size of 256 and a learning rate of 1× 10−5. Visual Question Answering (VQA): For a fair comparison with existing methods (Chen et al., 2020b; Li et al., 2021), we use training and validation sets from VQA v2.0 (Goyal et al., 2017) with a subset of VQA samples from Visual Genome (Krishna et al., 2017) for training. Also, we report performance on both test-dev and test-std splits of VQA v2.0. Following (Li et al., 2021), we weight the loss for each answer based on its occurrence among all the answers. The model is finetuned for 15 epochs with a batch size of 256 . We use a learning rate of 2 × 10−5 for the image and the text cross-modality encoders, the fusion encoder, and the answer decoder. For the image and the text encoders, a learning rate of 1 × 10−5 is used. The fusion encoder and the answer decoder are initialized by the last and all three blocks of the pre-trained text cross-modality encoder, respectively.
Natural Language for Visual Reasoning (NLVR): Data splits proposed in (Suhr et al., 2018) are used for finetuning and evaluation. The model is finetuned for 5 epochs with a batch size of 128. Since the classifier is newly added for finetuning, we use a learning rate of 1 × 10−4 for the classifier and 1 × 10−5 for the remaining parts of the model. Different from (Duan et al., 2022; Li et al., 2021; Yang et al., 2022), where the models require an additional text-assignment pre-training step before finetuning, we directly finetune for simplicity.
Visual Entailment (VE): We follow data splits proposed in SNLI-VE (Xie et al., 2019). We finetune the model with a batch size of 256 for 5 epochs. Similar to the NLVR task, a learning rate of 1× 10−4 is used for the classifier and 1× 10−5 is used for the remaining parts of the model.
A.2 REPRODUCIBILITY
We add more details of MaskVLM for reproducibility. We used the ImageNet pretrained ViT (vit base patch16 224) from (Wightman, 2019) and the pre-trained RoBERTa (roberta-base) from Hugging Face (Wolf et al., 2020). The detailed model architectures are visualized in Figure 7. Following (Dosovitskiy et al., 2020), the image encoder uses layer normalization (Ba et al., 2016) before each multi-head attention block while the text encoder applies layer normalization after each multi-head attention block (post norm). For the image (text) cross-modality encoder, we adopt the post norm and use the outputs of the text (image) encoder as keys and values
to compute cross-attention. To compute MIM and MLM, the self-attention outputs of the masked image features, v_m, are used as queries and the unmasked text features, w, are used as keys and values in the image cross-modality encoder. For the text cross-modality encoder, the masked text features, w_m, are used as queries and the unmasked image features, v, are used as keys and values. To keep the framework simple, we do not use any loss weighting for the individual loss terms or layer decay during finetuning.
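A minimal sketch of this query/key/value wiring is given below; the feature shapes, hidden size, and the use of a single attention layer per stream are placeholder assumptions (the actual cross-modality encoders stack several such blocks).

import torch
import torch.nn as nn

dim = 768
img_cross_attn = nn.MultiheadAttention(embed_dim=dim, num_heads=12, batch_first=True)
txt_cross_attn = nn.MultiheadAttention(embed_dim=dim, num_heads=12, batch_first=True)

v_m = torch.randn(2, 197, dim)   # masked image features (with [CLS])
w = torch.randn(2, 40, dim)      # unmasked text features
w_m = torch.randn(2, 40, dim)    # masked text features
v = torch.randn(2, 197, dim)     # unmasked image features

# Image cross-modality encoder for MIM: queries = v_m, keys/values = w.
img_out, _ = img_cross_attn(query=v_m, key=w, value=w)
# Text cross-modality encoder for MLM: queries = w_m, keys/values = v.
txt_out, _ = txt_cross_attn(query=w_m, key=v, value=v)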
A.3 ABLATION STUDY ON MASKING STRATEGIES
We study different masking strategies in computing MIM and MLM losses. In particular, we compare MaskVLM using one modality masked and the other modality unmasked for reconstruction (MaskVLM (one)) and MaskVLM using both modalities masked at the same time for reconstruction (MaskVLM (both)). We compare these two MaskVLM models with the state-of-the-art method, ALBEF, in Table 6. We follow the experimental setup described in Section 4.1 and report the finetuning performance on image-text retrieval. The performance of masking one modality at a time (MaskVLM (one)) was slightly better than masking both modalities at the same time (MaskVLM (both)). However, we observed that both reconstruction strategies are still effective as they achieve higher R@1 for image and text retrieval in both COCO and Flickr30k compared to ALBEF.
A.4 ABLATION STUDY ON MASKING RATIO
We perform an ablation study using different masking ratios for masked vision and language modeling. In particular, we pre-train MaskVLM with several combinations of image and text masking ratios on the CC 50% + COCO dataset and report the finetuned image-text retrieval performance on Flickr30k in Table 7. We also report an average of R@k for image and text retrieval. When only the image masking ratio is changed in the first three rows of the table, the difference between the maximum and the minimum of the average recall is 0.26 for image retrieval and 0.10 for text retrieval. This shows that MaskVLM achieves stable performance across the tested image masking ratios. From the comparison between the second row and the last row, we observe that increasing the text masking ratio from 0.15 to 0.3 leads to higher recall performance for both image and text retrieval.
A.5 EVALUATION ON IMAGE RECOGNITION
We evaluate the image recognition performance of MaskVLM. Following CLIP (Radford et al., 2021), we perform image classification directly using the pre-trained MaskVLM on various image recognition datasets including UC Merced Land Use (Yang & Newsam, 2010), MIT-67 (Quattoni & Torralba, 2009), CUB-200 (Wah et al., 2011), Oxford Flowers (Nilsback & Zisserman, 2008), Caltech-256 (Griffin et al., 2007), and ImageNet-1K (Deng et al., 2009). We compare the Top1 accuracy of MaskVLM with ALBEF (Li et al., 2021) in Table 8. Both models are pre-trained with the 4M dataset. Since during the pre-training stage both MaskVLM and ALBEF were initialized with ImageNet pre-trained weights, the evaluation on ImageNet is not strictly zero-shot, but the evaluation on other datasets is. We formulate image classification as image-to-text retrieval where the similarity scores between a query image and all the class names are computed to retrieve the top-1 class name. The similarity scores can be obtained using either ITC or ITM scores, and we report them separately. Also, the results of using prompt engineering as in CLIP are reported.
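A minimal sketch of this classification-as-retrieval protocol is shown below; image_embeds, encode_texts, and the prompt template are placeholders standing in for the model's unit-normalized ITC projections, not the authors' exact interfaces.

import torch

@torch.no_grad()
def zero_shot_classify(image_embeds, class_names, encode_texts, prompt="a photo of a {}."):
    # image_embeds: (B, D) unit-norm image features from the ITC projection head.
    texts = [prompt.format(c) for c in class_names]
    class_embeds = encode_texts(texts)            # (C, D) unit-norm text features
    logits = image_embeds @ class_embeds.t()      # ITC (cosine) similarities
    return logits.argmax(dim=-1)                  # top-1 class index per image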
As shown in Table 8, MaskVLM consistently outperforms ALBEF across all the datasets. In particular, prompt engineering improves the average accuracy across all the datasets for MaskVLM but ALBEF achieves lower average accuracy with prompt engineering. This shows that MaskVLM can better align images with variants of text than ALBEF, which results in stronger V+L representations of MaskVLM for the image recognition task.
A.6 ADDITIONAL EXAMPLES FOR THE QUALITATIVE ANALYSIS
We present additional examples for the qualitative analysis of MaskVLM in Figure 8. Similar to Figure 5, masked text tokens are reconstructed with masked images (“Recon (mask)”) and original images (“Recon (org)”). We highlight that MaskVLM utilizes both V+L information to reconstruct the text which corresponds to the given image.
A.7 STATISTICS OF THE PRE-TRAINING DATASET
In Table 9, we report the statistics of the 4M pre-training dataset that MaskVLM is trained on. We note that some data URLs provided in the web datasets can become invalid, which may lead to a slightly different number of image-text pairs depending on when the datasets are downloaded. | 1. What is the focus and contribution of the paper on vision and language learning representations?
2. What are the strengths of the proposed approach, particularly in terms of its novelty and experimental results?
3. What are the weaknesses of the paper regarding its claims and comparisons with other works?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper proposes the use of joint masking for vision and language for learning representations from text and images. The proposed approach is tested on multiple tasks and achieves state-of-the-art results especially in the case of limited training data.
Strengths And Weaknesses
Strengths.
The paper is well written and easy to follow.
An interesting approach based on jointly masking images and text for learning visual and text representations is presented.
Experiments on multiple tasks and datasets are presented. A detailed ablation study is also conducted.
Weaknesses.
On several occasions the authors claim that other methods have been trained on much larger datasets. As a consequence, this makes comparison with these works problematic. Is it possible that these methods are trained on the same amount of data as the one used in this study? Alternatively, would it be possible that the proposed method is trained on much larger datasets?
In order to investigate the impact of each loss term it would be desirable that only one of them is removed at a time. In other words, it would be very informative to also include results with the following losses (in Table 5): MLM + MIM + ITC and MLM + MIM + ITM.
Clarity, Quality, Novelty And Reproducibility
The paper is well written and the proposed method is novel enough. The authors do provide a lot of details about the training process, so it might be possible for the results to be reproduced. |
ICLR | Title
Masked Vision and Language Modeling for Multi-modal Representation Learning
Abstract
In this paper, we study how to use masked signal modeling in vision and language (V+L) representation learning. Instead of developing masked language modeling (MLM) and masked image modeling (MIM) independently, we propose to build joint masked vision and language modeling, where the masked signal of one modality is reconstructed with the help of the other modality. This is motivated by the nature of image-text paired data that both the image and the text convey almost the same information but in different formats. The masked signal reconstruction of one modality conditioned on another modality can also implicitly learn cross-modal alignment between language tokens and image patches. Our experiments on various V+L tasks show that the proposed method, along with common V+L alignment losses, achieves state-of-the-art performance in the regime of millions of pre-training data. Also, we outperform the other competitors by a significant margin in limited data scenarios.
1 INTRODUCTION
Vision and language (V+L) representation learning has gained significant attention due to the transferability of the representations to a diverse set of downstream tasks such as zero- or few-shot visual recognition (Jia et al., 2021; Radford et al., 2021; Tsimpoukelli et al., 2021), object detection (Cai et al., 2022; Kamath et al., 2021), information retrieval (Li et al., 2022; 2021), and multi-modal generation (Ramesh et al., 2022; 2021), etc. This success is mainly driven by large-scale pre-training with paired image and text data. The current V+L pre-training techniques particularly focus on the representation learning that characterizes the association between vision and language, and they are largely inspired by self-supervised learning techniques (Devlin et al., 2018; He et al., 2020) in uni-modal learning.
Masked signal modeling is a popular self-supervisory pre-training task (Devlin et al., 2018; Liu et al., 2019; Yang et al., 2019; Bao et al., 2021; Xie et al., 2021; He et al., 2022), which aims at reconstructing the masked signals from the unmasked ones. It has been independently explored in the domains of natural language processing (NLP) and computer vision (Devlin et al., 2018; Liu et al., 2019; Yang et al., 2019; Bao et al., 2021; Xie et al., 2021; He et al., 2022). For example, in the domain of NLP, BERT (Devlin et al., 2018) and several follow-up works (Liu et al., 2019; Yang et al., 2019) utilize masked language modeling (MLM) where the model is expected to predict the masked text tokens using unmasked tokens. They have shown that MLM leads to powerful generalization performance across diverse NLP tasks. In computer vision, as shown in Figure 1 (top-left), the masked image modeling (MIM) is to predict masked pixels or image patches using unmasked portions of the images. MIM has shown to be an effective pre-training task for learning visual representations (Bao et al., 2021; Xie et al., 2021; He et al., 2022).
While MLM and MIM have been actively explored in each domain, existing works do not fully leverage the masked multi-modal signal modeling in the domain of V+L. For example, as shown in Figure 1 (bottom-left), several approaches rely only on MLM with unmasked images and do not model the masked images (Duan et al., 2022; Li et al., 2022; 2021; 2019; Yang et al., 2022). In this case, the distribution of text given image, p(T |I), can be learned, but the distribution of image given text, P (I|T ), cannot. This will potentially lead to biased performance in cross-modal retrieval tasks such as image-to-text or text-to-image retrieval as shown in our experiments. Although there
exist works that mask both modality signals, they either use a frozen object detector to extract region-based visual features (Chen et al., 2020b; Li et al., 2020a; Lu et al., 2020; Su et al., 2019; Tan & Bansal, 2019) or mask the image tokens from a pre-trained image tokenizer instead of the raw RGB pixels (Dou et al., 2022; Fu et al., 2021; Singh et al., 2022). These frozen object detectors and image tokenizers not only require additional data for training but also prevent the V+L interactions from being learned end-to-end.
In this paper, we propose joint masked V+L modeling where the original signal is reconstructed by using its masked input and the corresponding unmasked input from the other modality. As illustrated in Figure 1 (right part), although we exploit random masking, the dog face in the image can be used to predict the masked text token “dog” and the text “green ball” can be used to reconstruct the corresponding masked patches in the image. To ensure that the model uses information from both modalities, we explicitly enforce the model to utilize cross-attention to generate the joint representations. Compared with the aforementioned existing works, our approach models both conditional distributions, p(I|T) and p(T|I). Also, the model is trained end-to-end, without frozen bottleneck components that disturb learning interactions between V+L. By reconstructing the signal of one modality from the corresponding signal of the other modality (e.g., reconstructing the text “dog” from the visual signals of the dog face), the model implicitly learns the alignment between V+L. In addition, we observe that the model trained for the joint masked V+L modeling becomes noticeably effective when the training data is limited. Overall, our contributions are summarized as follows:
1. We propose a joint masked V+L modeling task. We show that models pre-trained with the proposed task, along with common V+L alignment tasks such as image-text matching, achieve state-of-the-art performance on a broad range of V+L tasks.
2. We provide a probabilistic interpretation of the proposed method and highlight the difference between ours and existing approaches in terms of the V+L joint distribution estimation.
3. We achieve significantly better performance than other V+L models in the regimes of limited training data, and only ∼40% of data used by the state-of-the-art models is sufficient to match their performance.
2 RELATED WORK
Vision and language representation learning The methods in V+L representation learning can be categorized based on how the information is fused between the modalities to obtain the joint representations. We group the fusion techniques into three categories: 1) transformers with attention across modalities, 2) contrastive learning with a large-scale pre-training data, 3) a hybrid form of learning with cross-attention and a contrastive loss. The attention across modalities has been widely used with image features extracted from off-the-shelf object detectors and text features obtained from transformer encoders (Chen et al., 2020b; Li et al., 2020a; Lu et al., 2020; Tan & Bansal, 2019; Zhang et al., 2021; Li et al., 2020b; Su et al., 2019; Li et al., 2019). While cross-attention effectively aligns V+L representations, it is computationally expensive since all possible pairs of images
and texts need to be processed. On the contrary, the authors in (Jia et al., 2021; Radford et al., 2021; Mokady et al., 2021; Shen et al., 2021; Yuan et al., 2021) show that contrastive learning with uni-modal encoders and millions of image-text pairs can achieve powerful zero-shot performance in diverse V+L tasks. The contrastive learning-based approaches do not rely on computationally expensive cross-attention but require an excessively large amount of training data. Hence, a combination of contrastive loss and cross-attention is utilized by complementing limitations of both approaches in (Li et al., 2021; 2022; Yang et al., 2022; Duan et al., 2022). In particular, only image and text pairs that result in high similarity by uni-modal encoders are processed using the cross-attention layers to reduce the computational burden and improve the alignment.
Masked signal modeling is a commonly used pre-training objective in the aforementioned V+L models. It has been independently explored in each of vision and language domain. In the NLP domain, BERT and its variants (Devlin et al., 2018; Liu et al., 2019) achieve representations that can generalize to a broad range of NLP tasks through MLM. Autoregressive language models (Radford et al., 2018; 2019) which predict masked future tokens have shown to be effective self-supervised learners. The success of the language models leads to several MIM techniques. BeiT (Bao et al., 2021) is trained to recover masked visual tokens which are obtained by a discrete variational autoencoder (dVAE). In SimMIM (Xie et al., 2021) and MAE (He et al., 2022), transformers are trained to recover masked patches in an end-to-end fashion. The authors in (Chen et al., 2020a) propose to autoregressively predict the unknown pixels to learn visual representations. In (Bachmann et al., 2022), multiple vision modality data are masked and reconstructed to learn visual representations. In the domain of V+L learning, (Arici et al., 2021) explores MIM and MLM for catalog data with short text attributes. V+L models with an object detector often aim at recovering only bounding box visual features (Chen et al., 2020b; Li et al., 2020a; Lu et al., 2020; Tan & Bansal, 2019; Su et al., 2019). Several V+L models focus on predicting future text tokens without MIM (Wang et al., 2021; Yu et al., 2022; Alayrac et al., 2022). While both MIM and MLM are explored in (Geng et al., 2022), the trained model is evaluated only on vision tasks. In (Dou et al., 2022; Fu et al., 2021; Singh et al., 2022; Wang et al., 2022), image tokens defined by image tokenizers trained with additional 250 million images (Ramesh et al., 2021) or distillation from the CLIP model (Radford et al., 2021) trained with 400 million image-text pairs (Peng et al., 2022) are reconstructed. In our work, we eliminate these model and data dependencies, by directly recovering RGB pixels and text tokens from masked image patches and masked text tokens. Therefore, MIM and MLM are seamlessly integrated to achieve generalizable V+L representations within a simple training framework.
3 METHOD
Our method has two types of pre-training objectives, which are 1) masked vision and language modeling and 2) multi-modal alignment. We explain each pre-training objective in this section.
3.1 MASKED VISION AND LANGUAGE MODELING
The overall framework of masked vision and language modeling is shown in Figure 2. We use transformer-based encoders (Vaswani et al., 2017) for both image and text streams. Given an imagetext pair (I, T ), an image encoder, fim, is used to extract features, v = {vcls, v1, ..., vN}, from the image input I . N is the number of image patches and vcls is the encoding of the image class token, [CLS]. The text encoder, ftxt, extracts features, w = {wcls, w1, ..., wM}, from the text input, T . M is the number of text tokens and wcls is the encoding of the start token of a sentence, [START]. The image and the text encoder consist of 12 self-attention blocks as shown in Figure 3 (a). The image and the text features are further processed by image and text cross-modality encoders. The cross-modality encoders have 3 cross-attention blocks as illustrated in Figure 3 (b). The image (text) cross-modality encoder uses text (image) features to generate attentions. These cross-modality encoders can enhance the representation of one modality by interacting with another modality (Lu et al., 2020; Tan & Bansal, 2019).
Image and Text Masking: For text masking, we follow BERT (Devlin et al., 2018) with minor modifications. In BERT, the original tokens are replaced with either the [MASK] token or random tokens. We use only the [MASK] token to replace tokens to be masked (Wettig et al., 2022). For image masking, we follow (He et al., 2022; Xie et al., 2021) and use random masking of raw image patches with a masking patch size of 32× 32. Given that 224× 224 images are divided into 16× 16 patches for the image encoder, the large masking patch prevents the model from simply copying their neighborhood pixels for reconstruction (Xie et al., 2021).
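The following is a minimal sketch of the two masking operations just described ([MASK]-only text masking and 32×32 block masking of image patches); the ratios, the 14×14 patch grid for a 224×224 input, and the function names are illustrative assumptions.

import torch

def mask_text(token_ids, mask_token_id, ratio=0.30):
    # Replace a random subset of tokens with [MASK] only (no random-token swap).
    mask = torch.rand(token_ids.shape) < ratio
    masked = token_ids.clone()
    masked[mask] = mask_token_id
    return masked, mask            # `mask` marks positions the MLM loss uses

def sample_image_mask(grid=14, block=2, ratio=0.60):
    # 32x32 masking blocks correspond to 2x2 groups of 16x16 patches.
    coarse = grid // block                        # 7x7 grid of masking blocks
    num_mask = int(ratio * coarse * coarse)
    m = torch.zeros(coarse * coarse, dtype=torch.bool)
    m[torch.randperm(coarse * coarse)[:num_mask]] = True
    m = m.view(coarse, coarse)
    return m.repeat_interleave(block, 0).repeat_interleave(block, 1)  # 14x14 patch mask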
Joint Reconstruction: We reconstruct the original signals of one modality from its masked input conditioned on the unmasked input of the other modality. Specifically, an original image, I, and a masked text, T_m, are used to reconstruct the original text, T, and similarly a masked image, I_m, and an original text, T, are used to reconstruct the original image, I. For image reconstruction, (I_m, T) is first given to the image and the text encoders to obtain masked image features, v_m, and unmasked text features, w. Following (Xie et al., 2021), we use both masked and unmasked patches to obtain v_m. (v_m, w) are further processed by the image cross-modality encoder, g_im, where w is used to compute cross-attentions. The output of g_im is mapped back to the original RGB image space by an image cross-modality decoder, g^de_im, which consists of 3 cross-attention blocks followed by a fully connected layer (FC). Although existing work exploits a light-weight transformer decoder with only self-attention (He et al., 2022) or a simple linear mapping (Xie et al., 2021) for the image decoder, we use joint information between modalities to allow further interactions in decoding. For masked text reconstruction, a token classifier, g^de_txt, which consists of an FC followed by softmax, is applied to the output of the text cross-modality encoder, g_txt, for the token prediction. The masked V+L modeling loss, L_MVLM, is defined as
$$\mathcal{L}_{MVLM} = \mathbb{E}_{(I,T)\sim D}\Big[\underbrace{H\big(y^{M}_{T},\ \phi^{M}_{txt}(I, T_m)\big)}_{\mathrm{MLM}} + \underbrace{\tfrac{1}{\Omega(I^{M})}\big\|I^{M} - \phi^{M}_{im}(I_m, T)\big\|_{1}}_{\mathrm{MIM}}\Big], \qquad (1)$$
where ϕ_txt = g^de_txt(g_txt(f_im(I), f_txt(T_m))) and ϕ_im = g^de_im(g_im(f_im(I_m), f_txt(T))). The loss is computed only for masked pixels and text tokens. Hence, the superscript M denotes data or features that correspond to the masked signals. A pair of I and T is sampled from the training dataset D, H denotes cross-entropy, and y^M_T is a matrix that contains one-hot row vectors for the ground truth of masked text tokens. Ω(·) is the number of pixels. When minimizing L_MVLM, the model is enforced to reconstruct the original signals by attending to the other modality signals. Cross-attending for reconstruction enables the model to learn interaction between V+L modalities.
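As a concrete illustration, a minimal sketch of Eq. (1) is given below; the tensor shapes, the per-channel handling of Ω(I^M), and the function name are simplifying assumptions rather than the reference implementation.

import torch
import torch.nn.functional as F

def mvlm_loss(token_logits, target_ids, text_mask, pixel_pred, pixel_target, pixel_mask):
    # MLM term: cross-entropy only on masked text positions.
    # token_logits: (B, M, vocab); text_mask: (B, M) True at masked tokens.
    mlm = F.cross_entropy(token_logits[text_mask], target_ids[text_mask])
    # MIM term: L1 on masked pixels, averaged over the masked region.
    # pixel_pred/target: (B, 3, H, W); pixel_mask: (B, 1, H, W) True at masked pixels.
    pm = pixel_mask.float()
    mim = ((pixel_pred - pixel_target).abs() * pm).sum() / (pm.sum() * pixel_pred.size(1) + 1e-8)
    return mlm + mim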
3.2 MULTI-MODAL ALIGNMENT
In addition to the masked signal modeling tasks, we adopt two additional tasks to explicitly learn multi-modality alignment. The first one is an image-text contrastive (ITC) learning (Radford et al., 2021; Jia et al., 2021). For the k-th pair of image and text features out of the image and text encoders, two separate FC layers are used to project the image [CLS] token features and the text [START] token features to the same dimensional feature space with unit norm, z^k_im and z^k_txt, respectively. The loss, L_ITC, is computed as
$$\mathcal{L}_{ITC} = -\frac{1}{N}\sum_{k=1}^{N}\left[\log\frac{\exp(z^{k}_{im}\cdot z^{k}_{txt}/\tau)}{\sum_{n=1}^{N}\exp(z^{k}_{im}\cdot z^{n}_{txt}/\tau)} + \log\frac{\exp(z^{k}_{im}\cdot z^{k}_{txt}/\tau)}{\sum_{n=1}^{N}\exp(z^{n}_{im}\cdot z^{k}_{txt}/\tau)}\right], \qquad (2)$$
where N and τ are the batch size and the temperature scaling parameter, respectively. The second task is image-text matching (ITM) (Chen et al., 2020b; Li et al., 2021; 2020b), predicting whether an image and a text are aligned or not. The [CLS] and [START] token features from the image and text cross-modality encoders are z^cross_im and z^cross_txt, respectively. To fuse these two features, we compute the element-wise product z^cross_im ∗ z^cross_txt, and an FC layer followed by softmax is applied to obtain the final prediction (Lu et al., 2019). For training, we use y_ITM = 1 when z^cross_im and z^cross_txt are a pair; otherwise, y_ITM = 0. The loss, L_ITM, is defined as
$$\mathcal{L}_{ITM} = \mathbb{E}_{(I,T)\sim D}\big[H\big(y_{ITM},\ g_{itm}(z^{cross}_{im} \ast z^{cross}_{txt})\big)\big]. \qquad (3)$$
Following (Li et al., 2021), we sample in-batch hard negatives based on the distribution of the cosine similarity between z_im and z_txt. The overall pre-training loss is defined as L = L_MVLM + L_ITC + L_ITM. We term our model trained with loss L MaskVLM (Masked Vision and Language Modeling).
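A minimal sketch of the two alignment losses is given below; the in-batch hard-negative sampling for ITM and the exact head architecture are omitted, and itm_head is a placeholder classifier.

import torch
import torch.nn.functional as F

def itc_loss(z_im, z_txt, tau=0.07):
    # z_im, z_txt: (N, D) unit-norm projections of the [CLS] / [START] features.
    logits = z_im @ z_txt.t() / tau
    targets = torch.arange(z_im.size(0), device=z_im.device)
    # Image-to-text and text-to-image terms of Eq. (2).
    return F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets)

def itm_loss(z_im_cross, z_txt_cross, labels, itm_head):
    # Element-wise product fuses the two cross-modality features (Eq. (3)).
    fused = z_im_cross * z_txt_cross
    return F.cross_entropy(itm_head(fused), labels)   # labels: 1 = matched, 0 = not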
3.3 PROBABILISTIC INTERPRETATION
We differentiate MaskVLM from the existing V+L models using masked signal modeling from a perspective of likelihood estimation. The training objective of masked signal modeling on uni-modal signals, X, focuses on learning the data distribution p(X), which is formulated by the law of total probability as p(X) = Σ_{X_m ∈ M_X} p(X_m) · p(X|X_m), where X_m is an instance of the masked signal from the set of all possible masked signals, M_X. MIM or MLM learns the data distribution by maximizing Σ_{X_m ∈ M_X} p(X|X_m) (Bengio et al., 2013).
In V+L representation learning, the ultimate goal is to learn the joint distribution of multi-modal signals, p(I, T). However, the authors in (Sohn et al., 2014) pointed out that directly maximizing the likelihood for the joint distribution is challenging because of the heterogeneous multi-modal data distributions. Instead, they show that minimizing the variation of information, defined as −E_{(I,T)∼D}[log p(I|T) + log p(T|I)], is sufficient to estimate the joint distribution. From the perspective of variation of information, the limitations in existing works can be better understood. Several existing works attempted to approximate the joint distribution using MLM with unmasked images (Duan et al., 2022; Li et al., 2021; 2019; Yang et al., 2022). In other words, p(T|I, T_m) is maximized to learn the conditional distribution, p(T|I), but p(I|T) is not modeled. In other existing works (Chen et al., 2020b; Li et al., 2020a; Lu et al., 2020; Su et al., 2019; Tan & Bansal, 2019), where both modalities are masked, the visual masking is limited to masking the visual features extracted from a frozen object detector, ψ(·), instead of the raw image pixels. In this case, the distributions p(ψ(I)|T) and p(T|ψ(I)) are modeled instead of p(I|T) and p(T|I). This frozen feature extractor can bottleneck the direct estimation of the underlying data distribution. MaskVLM is trained end-to-end to estimate both conditional distributions, p(I|T) and p(T|I), which directly minimizes the variation of information. We hypothesize that this modeling of conditional distributions for both modalities could lead to superior performance in both large-scale and limited data training scenarios, which we empirically demonstrate in Section 4.
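One way to read this connection, assuming model conditionals q_θ(I|T) and q_θ(T|I) (notation introduced here only for illustration), is via the standard cross-entropy/KL decomposition:
\[
\mathrm{VI}(I,T) \;=\; H(I\mid T) + H(T\mid I),
\]
\[
-\mathbb{E}_{(I,T)\sim D}\big[\log q_\theta(I\mid T) + \log q_\theta(T\mid I)\big]
\;=\; \mathrm{VI}(I,T)
\;+\; \mathbb{E}_{T}\,\mathrm{KL}\!\big(p(\cdot\mid T)\,\|\,q_\theta(\cdot\mid T)\big)
\;+\; \mathbb{E}_{I}\,\mathrm{KL}\!\big(p(\cdot\mid I)\,\|\,q_\theta(\cdot\mid I)\big)
\;\ge\; \mathrm{VI}(I,T),
\]
so minimizing the two masked-reconstruction cross-entropies drives an upper bound on the variation of information toward its minimum.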
4 EXPERIMENTS
4.1 PRE-TRAINING DATASETS AND DOWNSTREAM TASKS
We use the union of four datasets for pre-training so that we can perform a fair comparison with existing state-of-the-art methods (Chen et al., 2020b; Li et al., 2021). These datasets are Conceptual Captions (CC) (Sharma et al., 2018), SBU Captions (Ordonez et al., 2011), Visual Genome (VG) (Krishna et al., 2017), and COCO Captions (Lin et al., 2014). While VG and COCO contain captions annotated by humans, CC and SBU Captions are automatically collected from the web. The total number of unique images and image-text pairs in the four datasets are 4.1M and 5.2M, respectively. We term this pre-training dataset as the 4M dataset. We validate the pre-trained model on the following four downstream tasks:
Image-Text Retrieval: We perform text-to-image and image-to-text retrieval. We use the ITC and ITM losses of Section 3.2 for finetuning and the finetuned models are evaluated on COCO (Lin et al., 2014) and Flickr30k (Plummer et al., 2015). In addition, since COCO is used for pre-training, zero-shot retrieval performance is reported on Flickr30k. In (Li et al., 2021), the model finetuned on COCO is used for the zero-shot evaluation on Flickr30k. Although it may result in better performance, we believe that using the finetuned model does not validate the zero-shot capability of the
pre-trained model. Therefore, we use the pre-trained model directly for zero-shot evaluation. Following (Li et al., 2021), we first retrieve top-k candidates using the similarity scores from the image and the text encoders. The top-k candidates are further processed by the cross-modality encoders to obtain the final retrieval results.
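A minimal sketch of this two-stage ranking is given below; itm_score stands in for the cross-modality ITM head (which in practice re-encodes the raw image-text pair rather than the pooled embeddings), and the function name and k are illustrative.

import torch

@torch.no_grad()
def retrieve_texts(z_im, z_txt, itm_score, k=128):
    # Stage 1: rank all candidate texts by the fast ITC similarity.
    # z_im: (1, D) query image embedding; z_txt: (T, D) candidate text embeddings.
    sims = (z_im @ z_txt.t()).squeeze(0)
    topk = sims.topk(k=min(k, z_txt.size(0))).indices
    # Stage 2: re-score only the top-k candidates with the heavier ITM head.
    rerank = torch.tensor([float(itm_score(z_im, z_txt[j:j + 1])) for j in topk])
    return topk[rerank.argsort(descending=True)]   # final ranking of text indices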
Visual Question Answering (VQA): Here, given an image and a question pair, the model should generate a correct answer. The model is evaluated on VQA v2 (Goyal et al., 2017). We adopt the answer generation framework (Cho et al., 2021) and finetune the base model with a fusion encoder and an answer decoder. The model architectures are visualized in Figure 6 (a) of Appendix. The fusion encoder consists of one cross-attention block shown in Figure 3 (b). The output from the text cross-modality encoder is used as queries, and the image cross-modality encoder output is utilized to create attentions in the fusion encoder. The architecture of the answer decoder is the same as that of the text cross-modality encoder, but it is trained with a language modeling loss to generate the answers. Specifically, the output of the fusion encoder is used for computing attentions and the answer tokens are autoregressively predicted. During inference, [START] token is used as an initial token to generate following answer tokens.
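A minimal sketch of the greedy answer generation just described is given below; decoder, the special token ids, and the maximum length are placeholders, and any beam search or candidate-answer ranking is omitted.

import torch

@torch.no_grad()
def generate_answer(decoder, fused_feats, start_id, end_id, max_len=10):
    # Autoregressively emit answer tokens, starting from the [START] token and
    # conditioning on the fused question/image features at every step.
    tokens = [start_id]
    for _ in range(max_len):
        logits = decoder(torch.tensor([tokens]), fused_feats)   # (1, len, vocab)
        next_id = int(logits[0, -1].argmax())
        if next_id == end_id:
            break
        tokens.append(next_id)
    return tokens[1:]   # generated answer token ids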
Natural Language for Visual Reasoning (NLVR): This task involves binary classification with a triplet, (text, image1, image2). The goal here is to predict whether the text describes the pair of images. For finetuning, we feed (text, image1) and (text, image2) forward separately to extract the features as shown in Figure 6 (b). The [CLS] token features of image1 and image2 from the image encoder are denoted as v1 and v2, respectively. The [START] token text feature from the text encoder is w. These features are processed by the cross-modality encoders. The outputs of the image and text cross-modality encoders are fused by element-wise multiplication. The fused features for both images are concatenated, and a classifier with two linear layers predicts whether the text is aligned with the image pair or not. NLVR2 (Suhr et al., 2018) is used for the evaluation.
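A minimal sketch of the NLVR2 fusion and classification head described above is given below; the encoder side is omitted, and the hidden size and two-layer classifier shape are assumptions.

import torch
import torch.nn as nn

class NLVRHead(nn.Module):
    # Fuses each (text, image) pair by element-wise multiplication, concatenates
    # the two fused vectors, and classifies whether the text matches the pair.
    def __init__(self, dim=768):
        super().__init__()
        self.classifier = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, 2))

    def forward(self, z_im1, z_txt1, z_im2, z_txt2):
        fused1 = z_im1 * z_txt1          # (text, image1) cross-modality features
        fused2 = z_im2 * z_txt2          # (text, image2) cross-modality features
        return self.classifier(torch.cat([fused1, fused2], dim=-1))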
Visual Entailment (VE): Given an image text pair, the task is to classify the relationship between the image and the text into one of three categories: entailment, neutral, and contradictory. The element-wise product of the output from the image and the text cross-modality encoders is forwarded to a classifier of two linear layers for prediction. SNLI-VE (Xie et al., 2019) is used for evaluation.
4.2 IMPLEMENTATION DETAILS
We use a Visual Transformer (ViT) (Dosovitskiy et al., 2020) pre-trained on ImageNet (Deng et al., 2009) and a pre-trained RoBERTa from (Liu et al., 2019) to initialize the image and the text encoder, respectively. We pre-train the model for 50 epochs when the 4M dataset is used and 30 epochs for all other experiments. A batch size of 512 is used with 16 NVIDIA Tesla V100 GPUs. All parameters are optimized using AdamW (Loshchilov & Hutter, 2017) with a weight decay of 0.05. Following (Xie et al., 2021), we use the image masking ratio of 60%. While 15% masking ratio is used for text in language models (Devlin et al., 2018; Liu et al., 2019), we use 30% since the paired image can provide additional information for text reconstruction. During pre-training, the learning rate is warmed up to 3×10−4 in the first 5 epochs and decayed to 3×10−5 using a cosine scheduler. The learning rates for the image encoder and the text encoder are set to 10−5, which is less than that of the cross-modality encoders. An image size of 224 × 224 and RandAugment (Cubuk et al.,
2020) are used. During finetuning, the image is resized to 384× 384 and the positional encoding is interpolated following (Dosovitskiy et al., 2020). More details can be found in Appendix.
4.3 EVALUATION ON IMAGE-TEXT RETRIEVAL, VQA, NLVR, AND VE
We compare the finetuned image-text retrieval performance of the proposed MaskVLM with the state-of-the-art methods in Table 1. The second column is the number of unique images used for pretraining and the retrieval performance is evaluated in terms of Recall@k (R@k). We do not directly compare with ALIGN (Jia et al., 2021) since it is trained with more than 300 times of data used for MaskVLM. However, we still highlight the small performance gap between MaskVLM and ALIGN. We achieve the best performance in all Recall@k metrics on both COCO and Flickr30k except for the image retrieval R@10 and text retrieval R@5 on Flickr30k. Compared to ALIGN, we even achieve higher R@1 for image retrieval on COCO and text retrieval on Flickr30k. Table 2 shows the zero-shot retrieval performance of the state-of-the-art methods on Flickr30k. MaskVLM achieves a significant improvement over the second best method, ALBEF (Li et al., 2021), by 6.8 points at R@1 for image retrieval. Given that ALBEF is not trained for MIM, we hypothesize that ALBEF achieves the biased performance for text retrieval and MaskVLM achieves the significant improvement in image retrieval by additional MIM which models p(I|T ). While FLAVA exploits both MLM and MIM with the pre-trained image tokenizer, using 13 times more data than MaskVLM, MaskVLM still outperforms FLAVA by 9.8 and 19.3 points at R@1 for image and text retrieval respectively. Compared with CLIP (Radford et al., 2021) which is trained with at least 76 times more data than MaskVLM, we still achieve higher R@1 for image retrieval by 6.3 points. In general, MaskVLM achieves state-of-the-art performance in both finetuning and zero-shot experiments.
We report the accuracies on VQA, NLVR, and VE in Table 3. Except for SimVLM whose pretraining data is more than 300 times larger than that of MaskVLM, we consistently achieve the best
performances in all these tasks except for the validation split of NLVR2. In particular, MaskVLM is better than the second best method by 0.43, 1.14, and 0.27 on the test splits of VQA, NLVR2, and SNLI-VE, respectively. Compared to the base model of SimVLM, we narrow the accuracy gaps to 2.74% and 3.48% in test-std and test splits of VQA and VE, respectively. MaskVLM achieves higher accuracy than SimVLMbase in the test split of NLVR2 by 0.21%.
4.4 EVALUATION WITH LIMITED PRE-TRAINING DATA
We highlight the performance of MaskVLM in limited data scenarios. In particular, we create three subsets of the 4M pre-training data by sampling 50%, 25%, and 10% of CC and combining them with COCO. The number of image-text pairs in each subset is around 39%, 25%, and 16%, respectively, of the 4M pre-training data, which contains 5.2M pairs. We pre-train models with these subsets of the data and analyze the downstream task performance in comparison with state-of-the-art methods. The results are reported in Table 4. Particularly, image and text retrieval R@1 performance on COCO is also visualized in Figure 4. We compare MaskVLM with
the most recent state-of-the-art methods, which are ALBEF (Li et al., 2021) and Codebook (Duan et al., 2022). In Table 4, as the size of pre-training data becomes smaller from CC 50% + COCO to CC 10% + COCO, the performance gap between MaskVLM and ALBEF increases from 6.39 to 8.71 at R@1 in COCO image retrieval (IR), 7.46 to 9.04 at R@1 in COCO text retrieval (TR), 1.17 to 1.79 in VQA and 0.31 to 1.24 in the test set of SNLI-VE. In NLVR2 and VQA, MaskVLM trained with CC 10% + COCO achieves higher accuracy than ALBEF trained with CC50% + COCO, which contains more than twice of image-text pairs in CC 10% + COCO. In Figure 4, while Codebook shows competitive recall performance compared to the MaskVLM with the 4M dataset ( 5.2M pairs), the R@1 differences in image and text retrieval, respectively, increase from 1.4 and 1.0 in the 4M dataset to 3.15 and 3.80 in CC 50% + COCO. Our model trained with CC25% + COCO outperforms Codebook trained with CC50% + COCO by 1.90 and 2.76 points in terms of image and text retrieval R@1, respectively. Since one of the main differences in MaskVLM compared to ALBEF and Codebook is the additional MIM, we believe that joint modeling of V+L contribute to better performance in limited data scenarios.
4.5 ABLATION STUDY
We perform an ablation study using different combinations of loss components to highlight the contribution of masked V+L modeling. We compare six models with the same architecture but with different loss functions for pre-training. We pre-train all models on the CC 50% + COCO dataset and compare finetuned and zero-shot retrieval performance on Flickr30k in Table 5. We note that zero-shot evaluation of the MLM + MIM model cannot be performed because the FC layers to compute ITM and ITC are not trained during pre-training. ITC and ITM are closely related to the retrieval task since they are used for finetuning and measuring the similarity between images and texts. However, MLM + MIM still achieves significantly better finetuned and zero-shot performance than ITC, which shows that MLM + MIM alone learns meaningful V+L representations. Compared to ITC+ ITM in the finetuned performance, ITC + ITM + MLM achieves slightly improved R@1
by 0.38 in image retrieval and degraded R@1 by 0.3 in text retrieval. When MIM alone is used with ITC + ITM as well, finetuned R@1 is improved by 0.16 and degraded by 0.8 for image and text retrieval, respectively, over ITC + ITM. On the other hand, when ITC + ITM + MLM + MIM is used, the model achieves significant improvement of finetuned performance over ITC + ITM + MLM by 0.92 and 2.10 for R@1 image and text retrieval, respectively. ITC + ITM + MLM + MIM also obtains the best performance in zero-shot retrieval. This result further supports the advantage of joint modeling for masked V+L signals.
4.6 QUALITATIVE RESULTS
We perform a qualitative analysis to show the role of multi-modal information in the reconstruction of masked signals from our model. To be specific, we illustrate the prediction of masked text tokens with and without the corresponding images. This illustration highlights how MaskVLM effectively utilizes both modality information to complete the masked signal modeling task. Figure 5 shows the reconstruction of masked texts using original images (“Recon (org)”) and masked images (“Recon (mask)”). In the first top example, when the model is given a masked text and a masked image which does not contain the “dog”, the reconstruction is performed by using only available information such as image patches of “green grass”. Thus, the prediction is limited to “a brown of motion” or “the green forest”. However, when the original image is used for reconstruction, both “a brown and white dog” and “white fence” are reconstructed by accurately attending to the image. In the bottom example, the visible patches of the masked image contain a few people, but lack background information. Consequently, the reconstruction with the masked image does not contain any background information but the background “cave” is reflected in the reconstruction with the original image. Theses examples confirm that MaskVLM has learned to perform masked signal modeling using both V+L information.
5 CONCLUSION
We propose masked vision and language modeling as a pre-training task for learning V+L representations. We provide a probabilistic interpretation to highlight the contribution of the proposed method and validate its advantages in both large-scale and limited data regimes. We consistently achieve the state-of-the-art performance in a broad range of V+L tasks.
ACKNOWLEDGMENTS
We thank Jiali Duan for providing results of the Codebook (Duan et al., 2022) with limited pretraining data in Figure 4.
A APPENDIX
A.1 DETAILS ON FINETUNING FOR DOWNSTREAM TASKS
We explain implementation details for each of the downstream tasks. For all the downstream tasks, we use AdamW (Loshchilov & Hutter, 2017) with a weight decay of 0.05 and the cosine scheduler. An image size of 384 × 384 with RandAugment (Cubuk et al., 2020) is utilized and the positional encoding is interpolated following (Dosovitskiy et al., 2020). Except for the VQA task, we use the model achieves the best performance in the validation set to report the performance on the test set. We use the last epoch model for the VQA evaluation.
Image-Text Retrieval: COCO (Lin et al., 2014) and Flickr30k (Plummer et al., 2015) are used to report the performance. To be specific, we follow data splits proposed in (Karpathy & Fei-Fei, 2015) and an average recall over image and text retrieval is used to find the best model in the validation set. The pre-trained model is finetuned for 15 epochs with a batch size of 256 and a learning rate of 1× 10−5. Visual Question Answering (VQA): For a fair comparison with existing methods (Chen et al., 2020b; Li et al., 2021), we use training and validation sets from VQA v2.0 (Goyal et al., 2017) with a subset of VQA samples from Visual Genome (Krishna et al., 2017) for training. Also, we report performance on both test-dev and test-std splits of VQA v2.0. Following (Li et al., 2021), we weight the loss for each answer based on its occurrence among all the answers. The model is finetuned for 15 epochs with a batch size of 256 . We use a learning rate of 2 × 10−5 for the image and the text cross-modality encoders, the fusion encoder, and the answer decoder. For the image and the text encoders, a learning rate of 1 × 10−5 is used. The fusion encoder and the answer decoder are initialized by the last and all three blocks of the pre-trained text cross-modality encoder, respectively.
Natural Language for Visual Reasoning (NLVR): Data splits proposed in (Suhr et al., 2018) are used for finetuning and evaluation. The model is finetuned for 5 epochs with a batch size of 128. Since the classifier is newly added after finetuning, we use a learning rate of 1 × 10−4 for the classifier and 1 × 10−5 for the remaining parts of the model. Different from (Duan et al., 2022; Li et al., 2021; Yang et al., 2022), where the models require additional text-assignment pre-training step before finetuning, we directly finetune for simplicity.
Visual Entailment (VE): We follow data splits proposed in SNLI-VE (Xie et al., 2019). We finetune the model with a batch size of 256 for 5 epochs. Similar to the NLVR task, a learning rate of 1× 10−4 is used for the classifier and 1× 10−5 is used for the remaining parts of the model.
A.2 REPRODUCIBILITY
We add more details of MaskVLM for reproducibility. We used the ImageNet pretrained ViT (vit base patch16 224) from (Wightman, 2019) and the pre-trained RoBERTa (roberta-base) from Hugging Face (Wolf et al., 2020). The detailed model architectures are visualized in Figure 7. Following (Dosovitskiy et al., 2020), the image encoder uses layer normalization (Ba et al., 2016) before each multi-head attention block while the text encoder applies layer normalization after each multi-head attention block (post norm). For the image (text) cross-modality encoder, we adopt the post norm and use the outputs of the text (image) encoder as keys and values
to compute cross-attention. To compute MIM and MLM, the self-attention outputs of the masked image features, vm, is used as queries and the unmasked text features, w, are used as keys and values in the image cross-modality encoder. For the text cross-modality encoder, the masked text features, wm, are used as queries and the unmasked image features, v, are used as keys and values. To keep the framework simple, we do not use any loss weighting for each loss term and layer decay during finetuning.
A.3 ABLATION STUDY ON MASKING STRATEGIES
We study different masking strategies in computing MIM and MLM losses. In particular, we compare MaskVLM using one modality masked and the other modality unmasked for reconstruction (MaskVLM (one)) and MaskVLM using both modalities masked at the same time for reconstruction. We compare these two MaskVLM models with the state-of-the-art method, ALBEF in Table 6. We follow the experimental setup described in Section 4.1 and report the finetuning performance on image-text retrieval. The performance of masking one modality at a time (MaskVLM (one)) was slightly better than masking both modalities at the same time (MaskVLM (both)). However, we observed that both reconstruction strategies are still effective as they achieve higher R@1 for image and text retrieval in both COCO and Flickr30k compared to ALBEF.
A.4 ABLATION STUDY ON MASKING RATIO
We perform ablation study using different masking ratios for masked vision and language modeling. In particular, we pre-train MaskVLM with several combinations of image and text masking ratios on the CC 50% + COCO dataset and report the finetuned image-text retrieval performance on Flickr30k in Table 7. We also report an average of R@k for image and text retrieval. When only image masking ratio is changed in the first three rows of the table, the difference between the maximum and the minimum of the average recall is 0.26 for image retrieval and 0.10 for text retrieval. This shows that MaskVLM achieves stable performance across the tested image masking ratios. From
comparison between the second row and the last row, we observe that increasing the text masking ratio from 0.15 to 0.3 leads to higher recall performance for both image and text retrieval.
A.5 EVALUATION ON IMAGE RECOGNITION
We evaluate the image recognition performance of MaskVLM. Following CLIP (Radford et al., 2021), we perform image classification directly using the pre-trained MaskVLM on various image recognition datasets including UC Merced Land Use (Yang & Newsam, 2010), MIT-67 (Quattoni & Torralba, 2009), CUB-200 (Wah et al., 2011), Oxford Flowers (Nilsback & Zisserman, 2008), Caltech-256 (Griffin et al., 2007), and ImageNet-1K (Deng et al., 2009). We compare the Top1 accuracy of MaskVLM with ALBEF (Li et al., 2021) in Table 8. Both models are pre-trained with the 4M dataset. Since during pre-training stage, both MaskVLM and ALBEF were initialized with the ImageNet pre-trained weights, the evaluation on ImageNet is not strictly zero-shot but the evaluation on other datasets is. We formulate image classification as image-to-text retrieval where the similarity scores between a query image and all the class names are computed to retrieve top-1 class name. The similarity scores can be obtained using either ITC and ITM scores, and we report them separately. Also, the results of using prompt engineering as in CLIP are reported.
As shown in Table 8, MaskVLM consistently outperforms ALBEF across all the datasets. In particular, prompt engineering improves the average accuracy across all the datasets for MaskVLM but ALBEF achieves lower average accuracy with prompt engineering. This shows that MaskVLM can better align images with variants of text than ALBEF, which results in stronger V+L representations of MaskVLM for the image recognition task.
A.6 ADDITIONAL EXAMPLES FOR THE QUALITATIVE ANALYSIS
We present additional examples for the qualitative analysis of MaskVLM in Figure 8. Similar to Figure 5, masked text tokens are reconstructed with masked images (“Recon (mask)”) and original images (“Recon (org)”). We highlight that MaskVLM utilizes both V+L information to reconstruct the text which corresponds to the given image.
A.7 STATISTICS OF THE PRE-TRAINING DATASET
In Table 9, we report the statistics of the 4M pre-training dataset that MaskVLM is trained on. We note that some data urls provided in the web datasets can become invalid, which may lead to slightly different number of image-text pairs depending on when the datasets are downloaded. | 1. What is the main contribution of the paper regarding masked language and image modeling?
2. What are the strengths and weaknesses of the proposed approach compared to other recent works in the field?
3. Do you have any concerns regarding the presentation and explanation of the method?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
5. Are there any typos or confusing aspects in the equation presented in the review? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
The authors propose masked language and image modeling to pre-train image and text representations for downstream tasks such as image-text retrieval and various image-language reasoning tasks (VQA, NLVR, visual entailment). The proposed model is trained on paired image-text data, where one modality is masked while the other is kept unmasked. Given these two inputs the goal is to reconstruct the masked modality. In addition, a contrastive text-image loss (like CLIP) and an image-text matching loss are used.
The authors compare against a multitude of baselines on the tasks, the wide majority of which they beat. In addition, they show that their method works better in the low-data regime, probably because of the additional masked image modeling objective. The losses in the model are then ablated on the image-text retrieval task and it is shown that adding MIM and MLM improves the model. Finally, qualitative results are presented showing how additional cross-modal information helps prediction. Ablations on masking strategy and ratios are given in the appendix.
Strengths And Weaknesses
Strengths:
The authors compare against a large number of baselines, and are able to show better performance in almost all instances.
Qualitative results as well as ablation of the objectives for pre-training provide compelling evidence that cross-modal masked modeling can help to produce better representations.
Well designed experiment shows that in the low-data regime, adding a masked image modeling loss can improve performance by making more efficient use of the data.
Good ablation studies of masking ratios and strategies in the appendix.
Weaknesses:
One significant problem with the paper is that many recent and concurrent methods and references are missing. As a result, many of the tables that claim to compare MaskVLM to SOTA results do not actually:
Table 1 lists SOTA methods on MSCOCO and Flickr30k image-text retrieval with finetuning. It is however missing Florence, which beats all methods in the table.
Table 2 lists SOTA methods on zero-shot Flickr30k image-text retrieval. It is also missing Florence, and CoCA.
Table 3 lists SOTA methods on VQA but is missing the above methods and SimVLM and Flamingo.
If the claim is that MaskVLM achieves SOTA with specific qualifications (such as pre-training on a smaller dataset) this should be stated and shown explicitly. If the claim is that MaskVLM achieves SOTA outright, then I don't think this is true.
Some papers that should be cited:
Florence
CoCA and SimVLM
Flamingo
MultiMAE
M3AE (could be optional I think)
BEiT-3 (this came out after ICLR submission deadline, but is relevant)
In addition, claims such as "In the domain of V+L learning, there exist limited works on end-to-end joint modeling of V+L signals" are untrue, in light of the above references.
The method and its presentation are also a tad complicated. It could help significantly if the explanation of the method was cleaned up a bit. Here are some things that I found confusing:
The first term in Eq. 1 appears to show the MLM loss is applied to unmasked tokens as well as masked tokens, as opposed to just masked tokens as in BERT. Is this a typo, or actually the case? And if so, why was this decision made?
In Eq. 1, there is an expectation over D on the second term but not the first. I'm assuming this is a typo?
One more nitpick for eq 1: I think g^de_im in the definition of ϕ_im should also take f_txt(T) as an argument, as the image decoder is also allowed to cross-attend to text information.
Why use an L1 loss for MIM rather than L2 like in MAE?
In order to fuse representations, element-wise multiplication is used (in the downstream task heads, and the ITM loss). This seems non-standard to me, so perhaps a reference to previous work that does this could be helpful.
One small nitpick: MAE is cited as a preprint, but it was published at CVPR 2022.
Clarity, Quality, Novelty And Reproducibility
Clarity: For the most part clear. The method exposition and the math could be tidied up a bit.
Quality: Decent quality.
Novelty: Somewhat novel. MaskVLM is a rearrangement of existing basic building blocks (cross-attention, ITM loss, contrastive loss, MLM/MIM) from existing papers in a novel way.
Reproducibility: Could probably be roughly reimplemented from the details in the paper and the appendix, although a few details are missing (pre or post norm in the transformers, layer decay during fine tuning, weighting on the three losses, etc). |
ICLR | Title
Masked Vision and Language Modeling for Multi-modal Representation Learning
Abstract
In this paper, we study how to use masked signal modeling in vision and language (V+L) representation learning. Instead of developing masked language modeling (MLM) and masked image modeling (MIM) independently, we propose to build joint masked vision and language modeling, where the masked signal of one modality is reconstructed with the help from another modality. This is motivated by the nature of image-text paired data that both of the image and the text convey almost the same information but in different formats. The masked signal reconstruction of one modality conditioned on another modality can also implicitly learn cross-modal alignment between language tokens and image patches. Our experiments on various V+L tasks show that the proposed method, along with common V+L alignment losses, achieves state-of-the-art performance in the regime of millions of pre-training data. Also, we outperforms the other competitors by a significant margin in limited data scenarios.
1 INTRODUCTION
Vision and language (V+L) representation learning has gained significant attention due to the transferablility of the representations to a diverse set of downstream tasks such as zero- or few-shot visual recognition (Jia et al., 2021; Radford et al., 2021; Tsimpoukelli et al., 2021), object detection (Cai et al., 2022; Kamath et al., 2021), information retrieval (Li et al., 2022; 2021), and multi-modal generation (Ramesh et al., 2022; 2021) etc. This success is mainly driven by large-scale pre-training with paired image and text data. The current V+L pre-training techniques particularly focus on the representation learning that characterizes the association between vision and language, and they are largely inspired by self-supervised learning techniques (Devlin et al., 2018; He et al., 2020) in uni-modal learning.
Masked signal modeling is a popular self-supervisory pre-training task (Devlin et al., 2018; Liu et al., 2019; Yang et al., 2019; Bao et al., 2021; Xie et al., 2021; He et al., 2022), which aims at reconstructing the masked signals from the unmasked ones. It has been independently explored in the domains of natural language processing (NLP) and computer vision (Devlin et al., 2018; Liu et al., 2019; Yang et al., 2019; Bao et al., 2021; Xie et al., 2021; He et al., 2022). For example, in the domain of NLP, BERT (Devlin et al., 2018) and several follow-up works (Liu et al., 2019; Yang et al., 2019) utilize masked language modeling (MLM) where the model is expected to predict the masked text tokens using unmasked tokens. They have shown that MLM leads to powerful generalization performance across diverse NLP tasks. In computer vision, as shown in Figure 1 (top-left), the masked image modeling (MIM) is to predict masked pixels or image patches using unmasked portions of the images. MIM has shown to be an effective pre-training task for learning visual representations (Bao et al., 2021; Xie et al., 2021; He et al., 2022).
While MLM and MIM have been actively explored in each domain, existing works do not fully leverage the masked multi-modal signal modeling in the domain of V+L. For example, as shown in Figure 1 (bottom-left), several approaches rely only on MLM with unmasked images and do not model the masked images (Duan et al., 2022; Li et al., 2022; 2021; 2019; Yang et al., 2022). In this case, the distribution of text given image, p(T|I), can be learned, but the distribution of image given text, p(I|T), cannot. This will potentially lead to biased performance in cross-modal retrieval tasks such as image-to-text or text-to-image retrieval as shown in our experiments. Although there
exist works that use both modality signals masked, they either use a frozen object detector to extract region-based visual features (Chen et al., 2020b; Li et al., 2020a; Lu et al., 2020; Su et al., 2019; Tan & Bansal, 2019) or mask the image tokens from a pre-trained image tokenizer instead of the raw RGB pixels (Dou et al., 2022; Fu et al., 2021; Singh et al., 2022). These frozen object detectors and image tokenizers not only require additional data for training but also prevent the V+L interactions from being learned end-to-end.
In this paper, we propose joint masked V+L modeling where the original signal is reconstructed by using its masked input and the corresponding unmasked input from the other modality. As illustrated in Figure 1 (right part), although we exploit random masking, the dog face in the image can be used to predict the masked text token “dog” and the text “green ball” can be used to reconstruct the corresponding masked patches in the image. To ensure that the model uses information from both modalities, we explicitly enforce the model to utilize cross-attention to generate the joint representations. Compared with the aforementioned existing works, our approach models both conditional distributions, p(I|T) and p(T|I). Also, the model is trained end-to-end, without frozen bottleneck components that disturb learning interactions between V+L. By reconstructing the signal of one modality from the corresponding signal of the other modality (e.g. reconstructing the text “dog” from the visual signals of the dog face), the model implicitly learns the alignment between V+L. In addition, we observe that the model trained for the joint masked V+L modeling becomes noticeably effective when the training data is limited. Overall, our contributions are summarized as below:
1. We propose a joint masked V+L modeling task. We show that models pre-trained with the proposed task, along with common V+L alignment tasks such as image-text matching, achieve state-of-the-art performance on a broad range of V+L tasks.
2. We provide a probabilistic interpretation of the proposed method and highlight the difference between ours and existing approaches in terms of the V+L joint distribution estimation.
3. We achieve significantly better performance than other V+L models in the regimes of limited training data, and only ∼40% of the data used by the state-of-the-art models is sufficient to match their performance.
2 RELATED WORK
Vision and language representation learning The methods in V+L representation learning can be categorized based on how the information is fused between the modalities to obtain the joint representations. We group the fusion techniques into three categories: 1) transformers with attention across modalities, 2) contrastive learning with a large-scale pre-training data, 3) a hybrid form of learning with cross-attention and a contrastive loss. The attention across modalities has been widely used with image features extracted from off-the-shelf object detectors and text features obtained from transformer encoders (Chen et al., 2020b; Li et al., 2020a; Lu et al., 2020; Tan & Bansal, 2019; Zhang et al., 2021; Li et al., 2020b; Su et al., 2019; Li et al., 2019). While cross-attention effectively aligns V+L representations, it is computationally expensive since all possible pairs of images
and texts need to be processed. On the contrary, the authors in (Jia et al., 2021; Radford et al., 2021; Mokady et al., 2021; Shen et al., 2021; Yuan et al., 2021) show that contrastive learning with uni-modal encoders and millions of image-text pairs can achieve powerful zero-shot performance in diverse V+L tasks. The contrastive learning-based approaches do not rely on computationally expensive cross-attention but require an excessively large amount of training data. Hence, a combination of contrastive loss and cross-attention is utilized by complementing limitations of both approaches in (Li et al., 2021; 2022; Yang et al., 2022; Duan et al., 2022). In particular, only image and text pairs that result in high similarity by uni-modal encoders are processed using the cross-attention layers to reduce the computational burden and improve the alignment.
Masked signal modeling is a commonly used pre-training objective in the aforementioned V+L models. It has been independently explored in each of vision and language domain. In the NLP domain, BERT and its variants (Devlin et al., 2018; Liu et al., 2019) achieve representations that can generalize to a broad range of NLP tasks through MLM. Autoregressive language models (Radford et al., 2018; 2019) which predict masked future tokens have shown to be effective self-supervised learners. The success of the language models leads to several MIM techniques. BeiT (Bao et al., 2021) is trained to recover masked visual tokens which are obtained by a discrete variational autoencoder (dVAE). In SimMIM (Xie et al., 2021) and MAE (He et al., 2022), transformers are trained to recover masked patches in an end-to-end fashion. The authors in (Chen et al., 2020a) propose to autoregressively predict the unknown pixels to learn visual representations. In (Bachmann et al., 2022), multiple vision modality data are masked and reconstructed to learn visual representations. In the domain of V+L learning, (Arici et al., 2021) explores MIM and MLM for catalog data with short text attributes. V+L models with an object detector often aim at recovering only bounding box visual features (Chen et al., 2020b; Li et al., 2020a; Lu et al., 2020; Tan & Bansal, 2019; Su et al., 2019). Several V+L models focus on predicting future text tokens without MIM (Wang et al., 2021; Yu et al., 2022; Alayrac et al., 2022). While both MIM and MLM are explored in (Geng et al., 2022), the trained model is evaluated only on vision tasks. In (Dou et al., 2022; Fu et al., 2021; Singh et al., 2022; Wang et al., 2022), image tokens defined by image tokenizers trained with additional 250 million images (Ramesh et al., 2021) or distillation from the CLIP model (Radford et al., 2021) trained with 400 million image-text pairs (Peng et al., 2022) are reconstructed. In our work, we eliminate these model and data dependencies, by directly recovering RGB pixels and text tokens from masked image patches and masked text tokens. Therefore, MIM and MLM are seamlessly integrated to achieve generalizable V+L representations within a simple training framework.
3 METHOD
Our method has two types of pre-training objectives, which are 1) masked vision and language modeling and 2) multi-modal alignment. We explain each pre-training objective in this section.
3.1 MASKED VISION AND LANGUAGE MODELING
The overall framework of masked vision and language modeling is shown in Figure 2. We use transformer-based encoders (Vaswani et al., 2017) for both image and text streams. Given an image-text pair $(I, T)$, an image encoder, $f_{im}$, is used to extract features, $v = \{v_{cls}, v_1, ..., v_N\}$, from the image input $I$. $N$ is the number of image patches and $v_{cls}$ is the encoding of the image class token, [CLS]. The text encoder, $f_{txt}$, extracts features, $w = \{w_{cls}, w_1, ..., w_M\}$, from the text input, $T$. $M$ is the number of text tokens and $w_{cls}$ is the encoding of the start token of a sentence, [START]. The image and the text encoder consist of 12 self-attention blocks as shown in Figure 3 (a). The image and the text features are further processed by image and text cross-modality encoders. The cross-modality encoders have 3 cross-attention blocks as illustrated in Figure 3 (b). The image (text) cross-modality encoder uses text (image) features to generate attentions. These cross-modality encoders can enhance the representation of one modality by interacting with another modality (Lu et al., 2020; Tan & Bansal, 2019).
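To make the cross-attention block of Figure 3 (b) concrete, below is a minimal PyTorch sketch of one such block, where queries come from one modality and keys/values come from the other. The hidden size, head count, and norm placement are illustrative assumptions rather than the paper's exact configuration.

```python
import torch.nn as nn

class CrossAttentionBlock(nn.Module):
    """One cross-attention block: queries from modality A, keys/values from modality B (sketch)."""
    def __init__(self, dim=768, num_heads=12, mlp_ratio=4):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(dim, dim * mlp_ratio), nn.GELU(),
                                 nn.Linear(dim * mlp_ratio, dim))
        self.norm1, self.norm2, self.norm3 = nn.LayerNorm(dim), nn.LayerNorm(dim), nn.LayerNorm(dim)

    def forward(self, x, context):
        # self-attention over the block's own modality
        x = self.norm1(x + self.self_attn(x, x, x, need_weights=False)[0])
        # cross-attention: x provides queries, the other modality provides keys and values
        x = self.norm2(x + self.cross_attn(x, context, context, need_weights=False)[0])
        x = self.norm3(x + self.mlp(x))
        return x
```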
Image and Text Masking: For text masking, we follow BERT (Devlin et al., 2018) with minor modifications. In BERT, the original tokens are replaced with either the [MASK] token or random tokens. We use only the [MASK] token to replace tokens to be masked (Wettig et al., 2022). For image masking, we follow (He et al., 2022; Xie et al., 2021) and use random masking of raw image patches with a masking patch size of 32× 32. Given that 224× 224 images are divided into 16× 16 patches for the image encoder, the large masking patch prevents the model from simply copying their neighborhood pixels for reconstruction (Xie et al., 2021).
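The masking step can be sketched as follows; the function names and ratio arguments are placeholders for illustration (the ratios actually used are reported in Section 4.2), and the image-side sampling follows the SimMIM-style random masking described above.

```python
import torch

def random_patch_mask(batch_size, img_size=224, mask_patch=32, mask_ratio=0.6):
    """Sample a per-image boolean mask over 32x32 regions (True = masked). Sketch only."""
    grid = img_size // mask_patch                      # 7x7 grid of mask patches
    num_mask = int(grid * grid * mask_ratio)
    mask = torch.zeros(batch_size, grid * grid, dtype=torch.bool)
    for b in range(batch_size):
        idx = torch.randperm(grid * grid)[:num_mask]
        mask[b, idx] = True
    # each 32x32 mask patch covers 2x2 of the 16x16 encoder patches
    mask = mask.view(batch_size, grid, grid)
    mask = mask.repeat_interleave(2, dim=1).repeat_interleave(2, dim=2)
    return mask.flatten(1)                             # (B, 196) over the 14x14 encoder grid

def mask_text_tokens(input_ids, mask_token_id, mask_ratio=0.3, special_ids=()):
    """Replace a random subset of tokens with [MASK] only (no random-token replacement)."""
    labels = input_ids.clone()
    probs = torch.full(input_ids.shape, mask_ratio)
    for sid in special_ids:                            # never mask special tokens
        probs[input_ids == sid] = 0.0
    masked = torch.bernoulli(probs).bool()
    out_ids = input_ids.clone()
    out_ids[masked] = mask_token_id
    labels[~masked] = -100                             # ignore unmasked positions in the loss
    return out_ids, labels, masked
```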
Joint Reconstruction: We reconstruct the original signals of one modality from its masked input conditioned on the unmasked input of the other modality. Specifically, an original image, $I$, and a masked text, $T_m$, are used to reconstruct an original text, $T$, and similarly a masked image, $I_m$, and an original text, $T$, are used to reconstruct the original image, $I$. For image reconstruction, $(I_m, T)$ is first given to the image and the text encoders to obtain masked image features, $v_m$, and unmasked text features, $w$. Following (Xie et al., 2021), we use both masked and unmasked patches to obtain $v_m$. $(v_m, w)$ are further processed by the image cross-modality encoder, $g_{im}$, where $w$ is used to compute cross-attentions. The output of $g_{im}$ is mapped back to the original RGB image space by an image cross-modality decoder, $g^{de}_{im}$, which consists of 3 cross-attention blocks followed by a fully connected layer (FC). Although existing work exploits a light-weight transformer decoder with only self-attention (He et al., 2022) or a simple linear mapping (Xie et al., 2021) for the image decoder, we use joint information between modalities to allow further interactions in decoding. For masked text reconstruction, a token classifier, $g^{de}_{txt}$, which consists of a FC followed by softmax, is applied to the output of the text cross-modality encoder, $g_{txt}$, for the token prediction. The masked V+L modeling loss, $\mathcal{L}_{MVLM}$, is defined as
$$\mathcal{L}_{MVLM} = \mathbb{E}_{(I,T)\sim D}\Big[\underbrace{H\big(y^M_T,\ \phi^M_{txt}(I, T_m)\big)}_{\text{MLM}} \;+\; \underbrace{\tfrac{1}{\Omega(I^M)}\,\big\|I^M - \phi^M_{im}(I_m, T)\big\|_1}_{\text{MIM}}\Big]\,, \qquad (1)$$
where $\phi_{txt} = g^{de}_{txt}(g_{txt}(f_{im}(I), f_{txt}(T_m)))$ and $\phi_{im} = g^{de}_{im}(g_{im}(f_{im}(I_m), f_{txt}(T)))$. The loss is computed only for masked pixels and text tokens. Hence, the superscript $M$ denotes data or features corresponding to the masked signals. A pair of $I$ and $T$ is sampled from the training dataset $D$, $H$ denotes cross-entropy, and $y^M_T$ is a matrix that contains one-hot row vectors for the ground truth of masked text tokens. $\Omega(\cdot)$ is the number of pixels. When minimizing $\mathcal{L}_{MVLM}$, the model is enforced to reconstruct the original signals by attending to the other modality signals. Cross-attending for reconstruction enables the model to learn interaction between V+L modalities.
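As a hedged illustration, Eq. 1 could be computed along the following lines; the tensor shapes and the use of an ignore index for unmasked tokens are our assumptions, not a transcription of the authors' implementation.

```python
import torch.nn.functional as F

def mvlm_loss(text_logits, text_labels, img_pred, img_target, pixel_mask):
    """Masked V+L modeling loss (Eq. 1), as a sketch.

    text_logits: (B, M, vocab) predictions from the text stream given (I, T_m)
    text_labels: (B, M) ground-truth token ids, -100 at unmasked positions
    img_pred:    (B, 3, H, W) reconstruction from the image stream given (I_m, T)
    img_target:  (B, 3, H, W) original image
    pixel_mask:  (B, 1, H, W) boolean mask, True at masked pixels
    """
    # MLM term: cross-entropy restricted to masked tokens via ignore_index
    mlm = F.cross_entropy(text_logits.transpose(1, 2), text_labels, ignore_index=-100)
    # MIM term: L1 restricted to masked pixels, normalised by the number of masked entries
    mask = pixel_mask.expand_as(img_target).float()
    mim = (mask * (img_pred - img_target).abs()).sum() / mask.sum().clamp(min=1)
    return mlm + mim
```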
3.2 MULTI-MODAL ALIGNMENT
In addition to the masked signal modeling tasks, we adopt two additional tasks to explicitly learn multi-modality alignment. The first one is an image-text contrastive (ITC) learning (Radford et al., 2021; Jia et al., 2021). For the $k$-th pair of image and text features out of the image and text encoders, two separate FC layers are used to project the image [CLS] token features and the text [START] token features to the same dimensional feature space with unit norm, $z^k_{im}$ and $z^k_{txt}$, respectively. The loss, $\mathcal{L}_{ITC}$, is computed as
$$\mathcal{L}_{ITC} = -\frac{1}{N}\sum_{k=1}^{N}\left[\log\frac{\exp(z^k_{im}\cdot z^k_{txt}/\tau)}{\sum_{n=1}^{N}\exp(z^k_{im}\cdot z^n_{txt}/\tau)} + \log\frac{\exp(z^k_{im}\cdot z^k_{txt}/\tau)}{\sum_{n=1}^{N}\exp(z^n_{im}\cdot z^k_{txt}/\tau)}\right], \qquad (2)$$
where $N$ and $\tau$ are the batch size and the temperature scaling parameter, respectively. The second task is an image-text matching (ITM) (Chen et al., 2020b; Li et al., 2021; 2020b), predicting whether an image and a text are aligned or not. The [CLS] and [START] token features from the image and text cross-modality encoders are $z^{cross}_{im}$ and $z^{cross}_{txt}$, respectively. To fuse these two features, we compute the element-wise product of $z^{cross}_{im}$ and $z^{cross}_{txt}$ ($z^{cross}_{im} * z^{cross}_{txt}$), and a FC layer followed by softmax is applied to obtain the final prediction (Lu et al., 2019). For training, we use $y_{ITM} = 1$ when $z^{cross}_{im}$ and $z^{cross}_{txt}$ are a pair. Otherwise, $y_{ITM} = 0$. The loss, $\mathcal{L}_{ITM}$, is defined as
$$\mathcal{L}_{ITM} = \mathbb{E}_{(I,T)\sim D}\big[H\big(y_{ITM},\ g_{itm}(z^{cross}_{im} * z^{cross}_{txt})\big)\big]. \qquad (3)$$
Following (Li et al., 2021), we sample in-batch hard negatives based on the distribution of the cosine similarity between $z_{im}$ and $z_{txt}$. The overall pre-training loss, $\mathcal{L}$, is defined as $\mathcal{L} = \mathcal{L}_{MVLM} + \mathcal{L}_{ITC} + \mathcal{L}_{ITM}$. We term our model trained with loss $\mathcal{L}$ as MaskVLM (Masked Vision and Language Modeling).
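A compact sketch of the two alignment losses (Eqs. 2 and 3) is given below; the temperature value, the two-way classifier shape, and the omission of hard-negative sampling are assumptions made for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def itc_loss(z_im, z_txt, temperature=0.07):
    """Image-text contrastive loss (Eq. 2). z_im, z_txt: (N, d) unit-norm features.

    The temperature value is a placeholder for the tau parameter in the text."""
    logits = z_im @ z_txt.t() / temperature          # (N, N) pairwise similarities
    targets = torch.arange(z_im.size(0), device=z_im.device)
    # cross_entropy averages over the batch, matching the 1/N factor in Eq. 2
    return F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets)

class ITMHead(nn.Module):
    """Image-text matching head (Eq. 3): element-wise product fusion + linear classifier."""
    def __init__(self, dim=768):
        super().__init__()
        self.fc = nn.Linear(dim, 2)                  # match / no-match logits

    def forward(self, z_cross_im, z_cross_txt, labels):
        fused = z_cross_im * z_cross_txt             # element-wise product of [CLS]/[START] features
        return F.cross_entropy(self.fc(fused), labels)
```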
3.3 PROBABILISTIC INTERPRETATION
We differentiate MaskVLM from the existing V+L models using masked signal modeling from a perspective of likelihood estimation. The training objective of masked signal modeling on uni-modal signals, $X$, focuses on learning the data distribution $p(X)$, which is formulated by the law of total probability as $p(X) = \sum_{X_m \in M_X} p(X_m) \cdot p(X|X_m)$, where $X_m$ is an instance of masked signal from the set of all possible masked signals, $M_X$. MIM or MLM learns the data distribution by maximizing $\sum_{X_m \in M_X} p(X|X_m)$ (Bengio et al., 2013).
In V+L representation learning, the ultimate goal is to learn the joint distribution of multi-modal signals, $p(I, T)$. However, the authors in (Sohn et al., 2014) pointed out that directly maximizing the likelihood for the joint distribution is challenging because of the heterogeneous multi-modal data distributions. Instead, they show that minimizing the variation of information, defined as $-\mathbb{E}_{(I,T)\sim D}(\log p(I|T) + \log p(T|I))$, is sufficient to estimate the joint distribution. From the perspective of variation of information, the limitations in existing works can be better understood. Several existing works attempted to approximate the joint distribution using MLM with unmasked images (Duan et al., 2022; Li et al., 2021; 2019; Yang et al., 2022). In other words, $p(T|I, T_m)$ is maximized to learn the conditional distribution, $p(T|I)$, but $p(I|T)$ is not modeled. In other existing works (Chen et al., 2020b; Li et al., 2020a; Lu et al., 2020; Su et al., 2019; Tan & Bansal, 2019), where both modalities are masked, the visual masking is limited to masking the visual features extracted from a frozen object detector, $\psi(\cdot)$, instead of the raw image pixels. In this case, the distributions $p(\psi(I)|T)$ and $p(T|\psi(I))$ are modeled instead of $p(I|T)$ and $p(T|I)$. This frozen feature extractor can bottleneck the direct estimation of the underlying data distribution. MaskVLM is trained end-to-end to estimate both conditional distributions, $p(I|T)$ and $p(T|I)$, which directly minimizes the variation of information. We hypothesize that this modeling of conditional distributions for both modalities could lead to superior performance in both large-scale and limited data training scenarios, which we empirically demonstrate in Section 4.
4 EXPERIMENTS
4.1 PRE-TRAINING DATASETS AND DOWNSTREAM TASKS
We use the union of four datasets for pre-training so that we can perform a fair comparison with existing state-of-the-art methods (Chen et al., 2020b; Li et al., 2021). These datasets are Conceptual Captions (CC) (Sharma et al., 2018), SBU Captions (Ordonez et al., 2011), Visual Genome (VG) (Krishna et al., 2017), and COCO Captions (Lin et al., 2014). While VG and COCO contain captions annotated by humans, CC and SBU Captions are automatically collected from the web. The total number of unique images and image-text pairs in the four datasets are 4.1M and 5.2M, respectively. We term this pre-training dataset as the 4M dataset. We validate the pre-trained model on the following four downstream tasks:
Image-Text Retrieval: We perform text-to-image and image-to-text retrieval. We use the ITC and ITM losses of Section 3.2 for finetuning and the finetuned models are evaluated on COCO (Lin et al., 2014) and Flickr30k (Plummer et al., 2015). In addition, since COCO is used for pre-training, zero-shot retrieval performance is reported on Flickr30k. In (Li et al., 2021), the model finetuned on COCO is used for the zero-shot evaluation on Flickr30k. Although it may result in better performance, we believe that using the finetuned model does not validate the zero-shot capability of the
pre-trained model. Therefore, we use the pre-trained model directly for zero-shot evaluation. Following (Li et al., 2021), we first retrieve top-k candidates using the similarity scores from the image and the text encoders. The top-k candidates are further processed by the cross-modality encoders to obtain the final retrieval results.
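For concreteness, the two-stage retrieval procedure described above might look like the following sketch; `itm_score_fn` is a stand-in for the full cross-modality forward pass and is not part of the paper's notation.

```python
import torch

@torch.no_grad()
def retrieve_texts(query_img_feat, text_feats, itm_score_fn, k=128):
    """Two-stage image-to-text retrieval sketch: shortlist by ITC similarity,
    then re-rank the top-k candidates with the cross-modality ITM score."""
    sims = text_feats @ query_img_feat                 # (num_texts,) unimodal similarities
    _, topk_idx = sims.topk(k)
    # itm_score_fn is assumed to return a scalar tensor for one (image, text) pair
    itm_scores = torch.stack([itm_score_fn(query_img_feat, i) for i in topk_idx])
    order = itm_scores.argsort(descending=True)
    return topk_idx[order]
```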
Visual Question Answering (VQA): Here, given an image and a question pair, the model should generate a correct answer. The model is evaluated on VQA v2 (Goyal et al., 2017). We adopt the answer generation framework (Cho et al., 2021) and finetune the base model with a fusion encoder and an answer decoder. The model architectures are visualized in Figure 6 (a) of Appendix. The fusion encoder consists of one cross-attention block shown in Figure 3 (b). The output from the text cross-modality encoder is used as queries, and the image cross-modality encoder output is utilized to create attentions in the fusion encoder. The architecture of the answer decoder is the same as that of the text cross-modality encoder, but it is trained with a language modeling loss to generate the answers. Specifically, the output of the fusion encoder is used for computing attentions and the answer tokens are autoregressively predicted. During inference, [START] token is used as an initial token to generate following answer tokens.
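The autoregressive answer generation at inference time can be sketched as below; the decoder call signature (`context=`) and the token-id arguments are hypothetical placeholders for the actual answer-decoder interface.

```python
import torch

@torch.no_grad()
def generate_answer(answer_decoder, fused_feats, start_id, end_id, max_len=10):
    """Sketch of greedy answer generation for the VQA head: start from [START]
    and predict answer tokens conditioned on the fusion-encoder output."""
    tokens = torch.tensor([[start_id]])
    for _ in range(max_len):
        logits = answer_decoder(tokens, context=fused_feats)   # (1, t, vocab), placeholder API
        next_tok = logits[:, -1].argmax(dim=-1, keepdim=True)
        tokens = torch.cat([tokens, next_tok], dim=1)
        if next_tok.item() == end_id:
            break
    return tokens
```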
Natural Language for Visual Reasoning (NLVR): This task involves a binary classification with a triplet, (text, image1, image2). The goal here is to predict whether the text describes the pair of images. For finetuning, we feedforward (text, image1) and (text, image2) separately to extract the features as shown in Figure 6 (b). The [CLS] token features of image1 and image2 from the image encoder are denoted as v1 and v2, respectively. The [START] token text features from the text encoder are denoted as w. These features are processed by the cross-modality encoders. The outputs of the image and text cross-modality encoders are fused by element-wise multiplication. The fused features for both images are concatenated, and a classifier with two linear layers predicts whether the text is aligned with the image pair or not. NLVR2 (Suhr et al., 2018) is used for the evaluation.
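A minimal sketch of the NLVR2 head described above; the hidden width and activation of the two-layer classifier are illustrative assumptions.

```python
import torch
import torch.nn as nn

class NLVRHead(nn.Module):
    """NLVR2 head sketch: fuse (text, image1) and (text, image2) features by element-wise
    product, concatenate the two fused vectors, and classify with two linear layers."""
    def __init__(self, dim=768):
        super().__init__()
        self.classifier = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, 2))

    def forward(self, txt1, img1, txt2, img2):
        fused1 = txt1 * img1                  # cross-modality features for (text, image1)
        fused2 = txt2 * img2                  # cross-modality features for (text, image2)
        return self.classifier(torch.cat([fused1, fused2], dim=-1))
```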
Visual Entailment (VE): Given an image text pair, the task is to classify the relationship between the image and the text into one of three categories: entailment, neutral, and contradictory. The element-wise product of the output from the image and the text cross-modality encoders is forwarded to a classifier of two linear layers for prediction. SNLI-VE (Xie et al., 2019) is used for evaluation.
4.2 IMPLEMENTATION DETAILS
We use a Visual Transformer (ViT) (Dosovitskiy et al., 2020) pre-trained on ImageNet (Deng et al., 2009) and a pre-trained RoBERTa from (Liu et al., 2019) to initialize the image and the text encoder, respectively. We pre-train the model for 50 epochs when the 4M dataset is used and 30 epochs for all other experiments. A batch size of 512 is used with 16 NVIDIA Tesla V100 GPUs. All parameters are optimized using AdamW (Loshchilov & Hutter, 2017) with a weight decay of 0.05. Following (Xie et al., 2021), we use the image masking ratio of 60%. While 15% masking ratio is used for text in language models (Devlin et al., 2018; Liu et al., 2019), we use 30% since the paired image can provide additional information for text reconstruction. During pre-training, the learning rate is warmed up to 3×10−4 in the first 5 epochs and decayed to 3×10−5 using a cosine scheduler. The learning rates for the image encoder and the text encoder are set to 10−5, which is less than that of the cross-modality encoders. An image size of 224 × 224 and RandAugment (Cubuk et al.,
2020) are used. During finetuning, the image is resized to 384× 384 and the positional encoding is interpolated following (Dosovitskiy et al., 2020). More details can be found in Appendix.
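The warmup-then-cosine schedule described above can be written as a small helper; whether the step granularity is per iteration or per epoch is an assumption here.

```python
import math

def lr_at_step(step, total_steps, warmup_steps, peak_lr=3e-4, final_lr=3e-5):
    """Warmup-then-cosine learning-rate schedule matching the description above (sketch)."""
    if step < warmup_steps:
        return peak_lr * step / max(1, warmup_steps)
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return final_lr + 0.5 * (peak_lr - final_lr) * (1 + math.cos(math.pi * progress))
```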
4.3 EVALUATION ON IMAGE-TEXT RETRIEVAL, VQA, NLVR, AND VE
We compare the finetuned image-text retrieval performance of the proposed MaskVLM with the state-of-the-art methods in Table 1. The second column is the number of unique images used for pre-training and the retrieval performance is evaluated in terms of Recall@k (R@k). We do not directly compare with ALIGN (Jia et al., 2021) since it is trained with more than 300 times the data used for MaskVLM. However, we still highlight the small performance gap between MaskVLM and ALIGN. We achieve the best performance in all Recall@k metrics on both COCO and Flickr30k except for the image retrieval R@10 and text retrieval R@5 on Flickr30k. Compared to ALIGN, we even achieve higher R@1 for image retrieval on COCO and text retrieval on Flickr30k. Table 2 shows the zero-shot retrieval performance of the state-of-the-art methods on Flickr30k. MaskVLM achieves a significant improvement over the second best method, ALBEF (Li et al., 2021), by 6.8 points at R@1 for image retrieval. Given that ALBEF is not trained for MIM, we hypothesize that ALBEF achieves biased performance toward text retrieval and MaskVLM achieves the significant improvement in image retrieval by the additional MIM which models p(I|T). While FLAVA exploits both MLM and MIM with the pre-trained image tokenizer, using 13 times more data than MaskVLM, MaskVLM still outperforms FLAVA by 9.8 and 19.3 points at R@1 for image and text retrieval, respectively. Compared with CLIP (Radford et al., 2021) which is trained with at least 76 times more data than MaskVLM, we still achieve higher R@1 for image retrieval by 6.3 points. In general, MaskVLM achieves state-of-the-art performance in both finetuning and zero-shot experiments.
We report the accuracies on VQA, NLVR, and VE in Table 3. Except for SimVLM whose pretraining data is more than 300 times larger than that of MaskVLM, we consistently achieve the best
performances in all these tasks except for the validation split of NLVR2. In particular, MaskVLM is better than the second best method by 0.43, 1.14, and 0.27 on the test splits of VQA, NLVR2, and SNLI-VE, respectively. Compared to the base model of SimVLM, we narrow the accuracy gaps to 2.74% and 3.48% in test-std and test splits of VQA and VE, respectively. MaskVLM achieves higher accuracy than SimVLMbase in the test split of NLVR2 by 0.21%.
4.4 EVALUATION WITH LIMITED PRE-TRAINING DATA
We highlight the performance of MaskVLM in limited data scenarios. In particular, we create three subsets of the 4M pre-training data by sampling 50%, 25%, and 10% of CC and combining them with COCO. The number of image-text pairs in each subset is around 39%, 25%, and 16% of the 4M pre-training data which contains 5.2M pairs, respectively. We pre-train models with these subsets of the data and analyze the downstream task performance in comparison with state-of-the-art methods. The results are reported in Table 4. Particularly, image and text retrieval R@1 performance on COCO is also visualized in Figure 4. We compare MaskVLM with
the most recent state-of-the-art methods, which are ALBEF (Li et al., 2021) and Codebook (Duan et al., 2022). In Table 4, as the size of pre-training data becomes smaller from CC 50% + COCO to CC 10% + COCO, the performance gap between MaskVLM and ALBEF increases from 6.39 to 8.71 at R@1 in COCO image retrieval (IR), 7.46 to 9.04 at R@1 in COCO text retrieval (TR), 1.17 to 1.79 in VQA and 0.31 to 1.24 in the test set of SNLI-VE. In NLVR2 and VQA, MaskVLM trained with CC 10% + COCO achieves higher accuracy than ALBEF trained with CC 50% + COCO, which contains more than twice the image-text pairs of CC 10% + COCO. In Figure 4, while Codebook shows competitive recall performance compared to MaskVLM with the 4M dataset (5.2M pairs), the R@1 differences in image and text retrieval, respectively, increase from 1.4 and 1.0 in the 4M dataset to 3.15 and 3.80 in CC 50% + COCO. Our model trained with CC 25% + COCO outperforms Codebook trained with CC 50% + COCO by 1.90 and 2.76 points in terms of image and text retrieval R@1, respectively. Since one of the main differences in MaskVLM compared to ALBEF and Codebook is the additional MIM, we believe that joint modeling of V+L contributes to better performance in limited data scenarios.
4.5 ABLATION STUDY
We perform an ablation study using different combinations of loss components to highlight the contribution of masked V+L modeling. We compare six models with the same architecture but with different loss functions for pre-training. We pre-train all models on the CC 50% + COCO dataset and compare finetuned and zero-shot retrieval performance on Flickr30k in Table 5. We note that zero-shot evaluation of the MLM + MIM model cannot be performed because the FC layers to compute ITM and ITC are not trained during pre-training. ITC and ITM are closely related to the retrieval task since they are used for finetuning and measuring the similarity between images and texts. However, MLM + MIM still achieves significantly better finetuned and zero-shot performance than ITC, which shows that MLM + MIM alone learns meaningful V+L representations. Compared to ITC+ ITM in the finetuned performance, ITC + ITM + MLM achieves slightly improved R@1
by 0.38 in image retrieval and degraded R@1 by 0.3 in text retrieval. When MIM alone is used with ITC + ITM as well, finetuned R@1 is improved by 0.16 and degraded by 0.8 for image and text retrieval, respectively, over ITC + ITM. On the other hand, when ITC + ITM + MLM + MIM is used, the model achieves significant improvement of finetuned performance over ITC + ITM + MLM by 0.92 and 2.10 for R@1 image and text retrieval, respectively. ITC + ITM + MLM + MIM also obtains the best performance in zero-shot retrieval. This result further supports the advantage of joint modeling for masked V+L signals.
4.6 QUALITATIVE RESULTS
We perform a qualitative analysis to show the role of multi-modal information in the reconstruction of masked signals from our model. To be specific, we illustrate the prediction of masked text tokens with and without the corresponding images. This illustration highlights how MaskVLM effectively utilizes both modality information to complete the masked signal modeling task. Figure 5 shows the reconstruction of masked texts using original images (“Recon (org)”) and masked images (“Recon (mask)”). In the top example, when the model is given a masked text and a masked image which does not contain the “dog”, the reconstruction is performed by using only available information such as image patches of “green grass”. Thus, the prediction is limited to “a brown of motion” or “the green forest”. However, when the original image is used for reconstruction, both “a brown and white dog” and “white fence” are reconstructed by accurately attending to the image. In the bottom example, the visible patches of the masked image contain a few people, but lack background information. Consequently, the reconstruction with the masked image does not contain any background information but the background “cave” is reflected in the reconstruction with the original image. These examples confirm that MaskVLM has learned to perform masked signal modeling using both V+L information.
5 CONCLUSION
We propose masked vision and language modeling as a pre-training task for learning V+L representations. We provide a probabilistic interpretation to highlight the contribution of the proposed method and validate its advantages in both large-scale and limited data regimes. We consistently achieve the state-of-the-art performance in a broad range of V+L tasks.
ACKNOWLEDGMENTS
We thank Jiali Duan for providing results of the Codebook (Duan et al., 2022) with limited pretraining data in Figure 4.
A APPENDIX
A.1 DETAILS ON FINETUNING FOR DOWNSTREAM TASKS
We explain implementation details for each of the downstream tasks. For all the downstream tasks, we use AdamW (Loshchilov & Hutter, 2017) with a weight decay of 0.05 and the cosine scheduler. An image size of 384 × 384 with RandAugment (Cubuk et al., 2020) is utilized and the positional encoding is interpolated following (Dosovitskiy et al., 2020). Except for the VQA task, we use the model that achieves the best performance on the validation set to report the performance on the test set. We use the last epoch model for the VQA evaluation.
Image-Text Retrieval: COCO (Lin et al., 2014) and Flickr30k (Plummer et al., 2015) are used to report the performance. To be specific, we follow data splits proposed in (Karpathy & Fei-Fei, 2015) and an average recall over image and text retrieval is used to find the best model in the validation set. The pre-trained model is finetuned for 15 epochs with a batch size of 256 and a learning rate of 1× 10−5. Visual Question Answering (VQA): For a fair comparison with existing methods (Chen et al., 2020b; Li et al., 2021), we use training and validation sets from VQA v2.0 (Goyal et al., 2017) with a subset of VQA samples from Visual Genome (Krishna et al., 2017) for training. Also, we report performance on both test-dev and test-std splits of VQA v2.0. Following (Li et al., 2021), we weight the loss for each answer based on its occurrence among all the answers. The model is finetuned for 15 epochs with a batch size of 256 . We use a learning rate of 2 × 10−5 for the image and the text cross-modality encoders, the fusion encoder, and the answer decoder. For the image and the text encoders, a learning rate of 1 × 10−5 is used. The fusion encoder and the answer decoder are initialized by the last and all three blocks of the pre-trained text cross-modality encoder, respectively.
Natural Language for Visual Reasoning (NLVR): Data splits proposed in (Suhr et al., 2018) are used for finetuning and evaluation. The model is finetuned for 5 epochs with a batch size of 128. Since the classifier is newly added for finetuning, we use a learning rate of 1 × 10−4 for the classifier and 1 × 10−5 for the remaining parts of the model. Different from (Duan et al., 2022; Li et al., 2021; Yang et al., 2022), where the models require an additional text-assignment pre-training step before finetuning, we directly finetune for simplicity.
Visual Entailment (VE): We follow data splits proposed in SNLI-VE (Xie et al., 2019). We finetune the model with a batch size of 256 for 5 epochs. Similar to the NLVR task, a learning rate of 1× 10−4 is used for the classifier and 1× 10−5 is used for the remaining parts of the model.
A.2 REPRODUCIBILITY
We add more details of MaskVLM for reproducibility. We used the ImageNet pre-trained ViT (vit_base_patch16_224) from (Wightman, 2019) and the pre-trained RoBERTa (roberta-base) from Hugging Face (Wolf et al., 2020). The detailed model architectures are visualized in Figure 7. Following (Dosovitskiy et al., 2020), the image encoder uses layer normalization (Ba et al., 2016) before each multi-head attention block while the text encoder applies layer normalization after each multi-head attention block (post norm). For the image (text) cross-modality encoder, we adopt the post norm and use the outputs of the text (image) encoder as keys and values
to compute cross-attention. To compute MIM and MLM, the self-attention outputs of the masked image features, $v_m$, are used as queries and the unmasked text features, $w$, are used as keys and values in the image cross-modality encoder. For the text cross-modality encoder, the masked text features, $w_m$, are used as queries and the unmasked image features, $v$, are used as keys and values. To keep the framework simple, we do not use any loss weighting for the individual loss terms or layer decay during finetuning.
A.3 ABLATION STUDY ON MASKING STRATEGIES
We study different masking strategies in computing MIM and MLM losses. In particular, we compare MaskVLM using one modality masked and the other modality unmasked for reconstruction (MaskVLM (one)) and MaskVLM using both modalities masked at the same time for reconstruction (MaskVLM (both)). We compare these two MaskVLM models with the state-of-the-art method, ALBEF, in Table 6. We follow the experimental setup described in Section 4.1 and report the finetuning performance on image-text retrieval. The performance of masking one modality at a time (MaskVLM (one)) was slightly better than masking both modalities at the same time (MaskVLM (both)). However, we observed that both reconstruction strategies are still effective as they achieve higher R@1 for image and text retrieval in both COCO and Flickr30k compared to ALBEF.
A.4 ABLATION STUDY ON MASKING RATIO
We perform an ablation study using different masking ratios for masked vision and language modeling. In particular, we pre-train MaskVLM with several combinations of image and text masking ratios on the CC 50% + COCO dataset and report the finetuned image-text retrieval performance on Flickr30k in Table 7. We also report an average of R@k for image and text retrieval. When only the image masking ratio is changed in the first three rows of the table, the difference between the maximum and the minimum of the average recall is 0.26 for image retrieval and 0.10 for text retrieval. This shows that MaskVLM achieves stable performance across the tested image masking ratios. From
a comparison between the second row and the last row, we observe that increasing the text masking ratio from 0.15 to 0.3 leads to higher recall performance for both image and text retrieval.
A.5 EVALUATION ON IMAGE RECOGNITION
We evaluate the image recognition performance of MaskVLM. Following CLIP (Radford et al., 2021), we perform image classification directly using the pre-trained MaskVLM on various image recognition datasets including UC Merced Land Use (Yang & Newsam, 2010), MIT-67 (Quattoni & Torralba, 2009), CUB-200 (Wah et al., 2011), Oxford Flowers (Nilsback & Zisserman, 2008), Caltech-256 (Griffin et al., 2007), and ImageNet-1K (Deng et al., 2009). We compare the Top-1 accuracy of MaskVLM with ALBEF (Li et al., 2021) in Table 8. Both models are pre-trained with the 4M dataset. Since, during the pre-training stage, both MaskVLM and ALBEF were initialized with the ImageNet pre-trained weights, the evaluation on ImageNet is not strictly zero-shot but the evaluation on other datasets is. We formulate image classification as image-to-text retrieval where the similarity scores between a query image and all the class names are computed to retrieve the top-1 class name. The similarity scores can be obtained using either ITC or ITM scores, and we report them separately. Also, the results of using prompt engineering as in CLIP are reported.
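The zero-shot classification protocol (ITC-score variant) amounts to image-to-text retrieval over the class names, roughly as sketched below; the prompt template is an illustrative assumption in the spirit of CLIP.

```python
import torch

@torch.no_grad()
def zero_shot_classify(image_feat, class_text_feats):
    """Zero-shot classification as image-to-text retrieval over class names (sketch).

    image_feat: (d,) normalised image [CLS] feature; class_text_feats: (C, d) normalised
    text features for the class names (optionally wrapped in prompts such as "a photo of a {label}").
    """
    sims = class_text_feats @ image_feat      # (C,) cosine similarities
    return sims.argmax().item()               # index of the predicted class
```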
As shown in Table 8, MaskVLM consistently outperforms ALBEF across all the datasets. In particular, prompt engineering improves the average accuracy across all the datasets for MaskVLM but ALBEF achieves lower average accuracy with prompt engineering. This shows that MaskVLM can better align images with variants of text than ALBEF, which results in stronger V+L representations of MaskVLM for the image recognition task.
A.6 ADDITIONAL EXAMPLES FOR THE QUALITATIVE ANALYSIS
We present additional examples for the qualitative analysis of MaskVLM in Figure 8. Similar to Figure 5, masked text tokens are reconstructed with masked images (“Recon (mask)”) and original images (“Recon (org)”). We highlight that MaskVLM utilizes both V+L information to reconstruct the text which corresponds to the given image.
A.7 STATISTICS OF THE PRE-TRAINING DATASET
In Table 9, we report the statistics of the 4M pre-training dataset that MaskVLM is trained on. We note that some data URLs provided in the web datasets can become invalid, which may lead to a slightly different number of image-text pairs depending on when the datasets are downloaded. | 1. What is the main contribution of the paper regarding multimodal vision and language mask models?
2. What are the strengths and weaknesses of the proposed approach compared to other baseline methods?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. Are there any concerns regarding the integration of previous ideas, implementation details, and comparisons with related works?
5. What are some minor questions regarding the performance of the method on image recognition and computing cost? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper proposes a simple multimodal vision and language mask model for multimodal training, called MaskVLM. The main idea is to leverage both masked image patches with full text description and masked language tokens with full image patches for semantic alignment. Additionally, the method employs two pretraining losses: image-text matching and CLIP-style image-text contrastive losses. After pretraining with 4M data, they evaluated the proposed MaskVLM on 4 standard multimodal downstream tasks: I2T, T2I, VQA, and multimodal NLI. Experimental results look promising compared to other baseline methods and include ablation studies.
Strengths And Weaknesses
Strength
The both-modality masking idea is simple and seems effective, even if the idea is not very novel.
Experimental results seem promising with significant margins compared to the baseline methods.
Under limited pretraining data, this method shows promising and competitive performance. This is meaningful for academic or small-scale industry research groups.
Overall, the paper is clear and easy to read.
Weakness
[Major] Even if the main idea is clear and simple, most of its components are from previous work such as MLM, MAE, and MIM. Of course, novelty is not everything. However, for a technical contribution, the combination or integration of the previous ideas needs to be non-trivial and challenging, considering the nature of ICLR. Also, the authors need to describe in detail how they are effectively integrated. Unfortunately, I could not find the details of their proposed method. For example, there is no detailed description of how the joint integration is implemented for the image decoder. Figure 2 does not show the details. Figure 3 has no architecture for the image cross-modality decoder. If the page limitation is an issue, the authors can use the appendix.
[Major] Some important related previous works are missed, such as CoCa [Yu et al. 2022], SimVLM [Wang et al. 2022a], Florence [Yuan et al. 2021], and BEiT-3 [Wang et al. 2022b]. Among them, the authors need to compare their method to SimVLM with discussion, even if the pretraining dataset is not the same. For the rest, the authors need to introduce them in the related work at least.
[Major] For limited dataset experiments, how does the trend look with more data? I wonder about the performance of MaskVLM on larger data, including CC12M. For example, ALBEF presented two versions, 4M and 14M. I think the contribution of MaskVLM can be enhanced with the same setting as ALBEF-14M. That is, it would be meaningful to include the 14M setup in Figure 4.
[Minor] How is the performance on image recognition of this method, such as ImageNet-1k fine-tuning? Of course, image recognition performance is out of scope of this paper, but comparable performance on image recognition could improve the contribution of this paper.
[Minor] If I correctly understand, the training data are doubled due to the proposed mask approach. How is the performance at the same computing cost?
References
[Yu et al. 2022] CoCa: Contrastive Captioners are Image-Text Foundation Models. arXiv:2205.01917
[Wang et al. 2022a] SimVLM: Simple Visual Language Model Pretraining with Weak Supervision. ICLR 2022.
[Yuan et al. 2021] Florence: A New Foundation Model for Computer Vision. arXiv:2111.11432.
[Wang et al. 2022b] Image as a Foreign Language: BEiT Pretraining for All Vision and Vision-Language Tasks. arXiv:2208.10442.
Clarity, Quality, Novelty And Reproducibility
Overall, this paper is well-written and easy to follow. However, there are some points to be further clarified.
Figure 1 and its corresponding description in the Introduction might be misleading. The proposed method relies on the random masking approach without considering an explicit semantic prior. However, the examples depict strongly aligned masking. Therefore, the authors need to clarify the description in the Introduction.
The authors argued that "This will potentially lead to biased performance in cross-modal retrieval tasks such as image-to-text or text-to-image retrieval." This argument needs a reference or preliminary empirical analysis.
The authors need to describe which ViT model is used, for reproducibility. Also, I recommend writing a reproducibility section.
For novelty, please see the weakness. |
ICLR | Title
Masked Vision and Language Modeling for Multi-modal Representation Learning
Abstract
In this paper, we study how to use masked signal modeling in vision and language (V+L) representation learning. Instead of developing masked language modeling (MLM) and masked image modeling (MIM) independently, we propose to build joint masked vision and language modeling, where the masked signal of one modality is reconstructed with the help from another modality. This is motivated by the nature of image-text paired data that both the image and the text convey almost the same information but in different formats. The masked signal reconstruction of one modality conditioned on another modality can also implicitly learn cross-modal alignment between language tokens and image patches. Our experiments on various V+L tasks show that the proposed method, along with common V+L alignment losses, achieves state-of-the-art performance in the regime of millions of pre-training data. Also, we outperform the other competitors by a significant margin in limited data scenarios.
1 INTRODUCTION
Vision and language (V+L) representation learning has gained significant attention due to the transferability of the representations to a diverse set of downstream tasks such as zero- or few-shot visual recognition (Jia et al., 2021; Radford et al., 2021; Tsimpoukelli et al., 2021), object detection (Cai et al., 2022; Kamath et al., 2021), information retrieval (Li et al., 2022; 2021), and multi-modal generation (Ramesh et al., 2022; 2021), etc. This success is mainly driven by large-scale pre-training with paired image and text data. The current V+L pre-training techniques particularly focus on the representation learning that characterizes the association between vision and language, and they are largely inspired by self-supervised learning techniques (Devlin et al., 2018; He et al., 2020) in uni-modal learning.
Masked signal modeling is a popular self-supervisory pre-training task (Devlin et al., 2018; Liu et al., 2019; Yang et al., 2019; Bao et al., 2021; Xie et al., 2021; He et al., 2022), which aims at reconstructing the masked signals from the unmasked ones. It has been independently explored in the domains of natural language processing (NLP) and computer vision (Devlin et al., 2018; Liu et al., 2019; Yang et al., 2019; Bao et al., 2021; Xie et al., 2021; He et al., 2022). For example, in the domain of NLP, BERT (Devlin et al., 2018) and several follow-up works (Liu et al., 2019; Yang et al., 2019) utilize masked language modeling (MLM) where the model is expected to predict the masked text tokens using unmasked tokens. They have shown that MLM leads to powerful generalization performance across diverse NLP tasks. In computer vision, as shown in Figure 1 (top-left), the masked image modeling (MIM) is to predict masked pixels or image patches using unmasked portions of the images. MIM has shown to be an effective pre-training task for learning visual representations (Bao et al., 2021; Xie et al., 2021; He et al., 2022).
While MLM and MIM have been actively explored in each domain, existing works do not fully leverage the masked multi-modal signal modeling in the domain of V+L. For example, as shown in Figure 1 (bottom-left), several approaches rely only on MLM with unmasked images and do not model the masked images (Duan et al., 2022; Li et al., 2022; 2021; 2019; Yang et al., 2022). In this case, the distribution of text given image, p(T |I), can be learned, but the distribution of image given text, P (I|T ), cannot. This will potentially lead to biased performance in cross-modal retrieval tasks such as image-to-text or text-to-image retrieval as shown in our experiments. Although there
exist works that use both modality signals masked, they either use a frozen object detector to extract region-based visual features (Chen et al., 2020b; Li et al., 2020a; Lu et al., 2020; Su et al., 2019; Tan & Bansal, 2019) or mask the image tokens from a pre-trained image tokenizer instead of the raw RGB pixels (Dou et al., 2022; Fu et al., 2021; Singh et al., 2022). These frozen object detector and image tokenizer not only require additional data for training but also prevent the V+L interactions from being learned end-to-end.
In this paper, we propose joint masked V+L modeling where the original signal is reconstructed by using its masked input and the corresponding unmasked input from the other modality. As illustrated in Figure 1 (right part), although we exploit random masking, the dog face in the image can be used to predict the masked text token “dog” and the text “green ball” can be used to reconstruct the corresponding masked patches in the image. To ensure that the model uses information from both modalities, we explicitly enforce the model to utilize cross-attention to generate the joint representations. Compared with the aforementioned existing works, our approach models both conditional distributions, p(I|T) and p(T|I). Also, the model is trained end-to-end, without frozen bottleneck components that disturb learning interactions between V+L. By reconstructing the signal of one modality from the corresponding signal of the other modality (e.g. reconstructing the text “dog” from the visual signals of the dog face), the model implicitly learns the alignment between V+L. In addition, we observe that the model trained for the joint masked V+L modeling becomes noticeably effective when the training data is limited. Overall, our contributions are summarized as below:
1. We propose a joint masked V+L modeling task. We show that models pre-trained with the proposed task, along with common V+L alignment tasks such as image-text matching, achieve state-of-the-art performance on a broad range of V+L tasks.
2. We provide a probabilistic interpretation of the proposed method and highlight the difference between ours and existing approaches in terms of the V+L joint distribution estimation.
3. We achieve significantly better performance than other V+L models in the regimes of limited training data, and only ∼40% of data used by the state-of-the-art models is sufficient to match their performance.
2 RELATED WORK
Vision and language representation learning The methods in V+L representation learning can be categorized based on how the information is fused between the modalities to obtain the joint representations. We group the fusion techniques into three categories: 1) transformers with attention across modalities, 2) contrastive learning with a large-scale pre-training data, 3) a hybrid form of learning with cross-attention and a contrastive loss. The attention across modalities has been widely used with image features extracted from off-the-shelf object detectors and text features obtained from transformer encoders (Chen et al., 2020b; Li et al., 2020a; Lu et al., 2020; Tan & Bansal, 2019; Zhang et al., 2021; Li et al., 2020b; Su et al., 2019; Li et al., 2019). While cross-attention effectively aligns V+L representations, it is computationally expensive since all possible pairs of images
and texts need to be processed. On the contrary, the authors in (Jia et al., 2021; Radford et al., 2021; Mokady et al., 2021; Shen et al., 2021; Yuan et al., 2021) show that contrastive learning with uni-modal encoders and millions of image-text pairs can achieve powerful zero-shot performance in diverse V+L tasks. The contrastive learning-based approaches do not rely on computationally expensive cross-attention but require an excessively large amount of training data. Hence, a combination of contrastive loss and cross-attention is utilized by complementing limitations of both approaches in (Li et al., 2021; 2022; Yang et al., 2022; Duan et al., 2022). In particular, only image and text pairs that result in high similarity by uni-modal encoders are processed using the cross-attention layers to reduce the computational burden and improve the alignment.
Masked signal modeling is a commonly used pre-training objective in the aforementioned V+L models. It has been independently explored in each of vision and language domain. In the NLP domain, BERT and its variants (Devlin et al., 2018; Liu et al., 2019) achieve representations that can generalize to a broad range of NLP tasks through MLM. Autoregressive language models (Radford et al., 2018; 2019) which predict masked future tokens have shown to be effective self-supervised learners. The success of the language models leads to several MIM techniques. BeiT (Bao et al., 2021) is trained to recover masked visual tokens which are obtained by a discrete variational autoencoder (dVAE). In SimMIM (Xie et al., 2021) and MAE (He et al., 2022), transformers are trained to recover masked patches in an end-to-end fashion. The authors in (Chen et al., 2020a) propose to autoregressively predict the unknown pixels to learn visual representations. In (Bachmann et al., 2022), multiple vision modality data are masked and reconstructed to learn visual representations. In the domain of V+L learning, (Arici et al., 2021) explores MIM and MLM for catalog data with short text attributes. V+L models with an object detector often aim at recovering only bounding box visual features (Chen et al., 2020b; Li et al., 2020a; Lu et al., 2020; Tan & Bansal, 2019; Su et al., 2019). Several V+L models focus on predicting future text tokens without MIM (Wang et al., 2021; Yu et al., 2022; Alayrac et al., 2022). While both MIM and MLM are explored in (Geng et al., 2022), the trained model is evaluated only on vision tasks. In (Dou et al., 2022; Fu et al., 2021; Singh et al., 2022; Wang et al., 2022), image tokens defined by image tokenizers trained with additional 250 million images (Ramesh et al., 2021) or distillation from the CLIP model (Radford et al., 2021) trained with 400 million image-text pairs (Peng et al., 2022) are reconstructed. In our work, we eliminate these model and data dependencies, by directly recovering RGB pixels and text tokens from masked image patches and masked text tokens. Therefore, MIM and MLM are seamlessly integrated to achieve generalizable V+L representations within a simple training framework.
3 METHOD
Our method has two types of pre-training objectives, which are 1) masked vision and language modeling and 2) multi-modal alignment. We explain each pre-training objective in this section.
3.1 MASKED VISION AND LANGUAGE MODELING
The overall framework of masked vision and language modeling is shown in Figure 2. We use transformer-based encoders (Vaswani et al., 2017) for both image and text streams. Given an image-text pair $(I, T)$, an image encoder, $f_{im}$, is used to extract features, $v = \{v_{cls}, v_1, ..., v_N\}$, from the image input $I$. $N$ is the number of image patches and $v_{cls}$ is the encoding of the image class token, [CLS]. The text encoder, $f_{txt}$, extracts features, $w = \{w_{cls}, w_1, ..., w_M\}$, from the text input, $T$. $M$ is the number of text tokens and $w_{cls}$ is the encoding of the start token of a sentence, [START]. The image and the text encoder consist of 12 self-attention blocks as shown in Figure 3 (a). The image and the text features are further processed by image and text cross-modality encoders. The cross-modality encoders have 3 cross-attention blocks as illustrated in Figure 3 (b). The image (text) cross-modality encoder uses text (image) features to generate attentions. These cross-modality encoders can enhance the representation of one modality by interacting with another modality (Lu et al., 2020; Tan & Bansal, 2019).
Image and Text Masking: For text masking, we follow BERT (Devlin et al., 2018) with minor modifications. In BERT, the original tokens are replaced with either the [MASK] token or random tokens. We use only the [MASK] token to replace tokens to be masked (Wettig et al., 2022). For image masking, we follow (He et al., 2022; Xie et al., 2021) and use random masking of raw image patches with a masking patch size of 32× 32. Given that 224× 224 images are divided into 16× 16 patches for the image encoder, the large masking patch prevents the model from simply copying their neighborhood pixels for reconstruction (Xie et al., 2021).
Joint Reconstruction: We reconstruct the original signals of one modality from its masked input conditioned on the unmasked input of the other modality. Specifically, an original image, $I$, and a masked text, $T_m$, are used to reconstruct an original text, $T$, and similarly a masked image, $I_m$, and an original text, $T$, are used to reconstruct the original image, $I$. For image reconstruction, $(I_m, T)$ is first given to the image and the text encoders to obtain masked image features, $v_m$, and unmasked text features, $w$. Following (Xie et al., 2021), we use both masked and unmasked patches to obtain $v_m$. $(v_m, w)$ are further processed by the image cross-modality encoder, $g_{im}$, where $w$ is used to compute cross-attentions. The output of $g_{im}$ is mapped back to the original RGB image space by an image cross-modality decoder, $g^{de}_{im}$, which consists of 3 cross-attention blocks followed by a fully connected layer (FC). Although existing work exploits a light-weight transformer decoder with only self-attention (He et al., 2022) or a simple linear mapping (Xie et al., 2021) for the image decoder, we use joint information between modalities to allow further interactions in decoding. For masked text reconstruction, a token classifier, $g^{de}_{txt}$, which consists of a FC followed by softmax, is applied to the output of the text cross-modality encoder, $g_{txt}$, for the token prediction. The masked V+L modeling loss, $\mathcal{L}_{MVLM}$, is defined as
$$\mathcal{L}_{MVLM} = \mathbb{E}_{(I,T)\sim D}\Big[\underbrace{H\big(y^M_T,\ \phi^M_{txt}(I, T_m)\big)}_{\text{MLM}} \;+\; \underbrace{\tfrac{1}{\Omega(I^M)}\,\big\|I^M - \phi^M_{im}(I_m, T)\big\|_1}_{\text{MIM}}\Big]\,, \qquad (1)$$
where $\phi_{txt} = g^{de}_{txt}(g_{txt}(f_{im}(I), f_{txt}(T_m)))$ and $\phi_{im} = g^{de}_{im}(g_{im}(f_{im}(I_m), f_{txt}(T)))$. The loss is computed only for masked pixels and text tokens. Hence, the superscript $M$ denotes data or features corresponding to the masked signals. A pair of $I$ and $T$ is sampled from the training dataset $D$, $H$ denotes cross-entropy, and $y^M_T$ is a matrix that contains one-hot row vectors for the ground truth of masked text tokens. $\Omega(\cdot)$ is the number of pixels. When minimizing $\mathcal{L}_{MVLM}$, the model is enforced to reconstruct the original signals by attending to the other modality signals. Cross-attending for reconstruction enables the model to learn interaction between V+L modalities.
3.2 MULTI-MODAL ALIGNMENT
In addition to the masked signal modeling tasks, we adopt two additional tasks to explicitly learn multi-modality alignment. The first one is an image-text contrastive (ITC) learning (Radford et al., 2021; Jia et al., 2021). For the $k$-th pair of image and text features out of the image and text encoders, two separate FC layers are used to project the image [CLS] token features and the text [START] token features to the same dimensional feature space with unit norm, $z^k_{im}$ and $z^k_{txt}$, respectively. The loss, $\mathcal{L}_{ITC}$, is computed as
$$\mathcal{L}_{ITC} = -\frac{1}{N}\sum_{k=1}^{N}\left[\,\log\frac{\exp(z_{im}^{k}\cdot z_{txt}^{k}/\tau)}{\sum_{n=1}^{N}\exp(z_{im}^{k}\cdot z_{txt}^{n}/\tau)} \;+\; \log\frac{\exp(z_{im}^{k}\cdot z_{txt}^{k}/\tau)}{\sum_{n=1}^{N}\exp(z_{im}^{n}\cdot z_{txt}^{k}/\tau)}\,\right], \qquad (2)$$
where N and τ are the batch size and the temperature scaling parameter, respectively. The second task is image-text matching (ITM) (Chen et al., 2020b; Li et al., 2021; 2020b), which predicts whether an image and a text are aligned or not. The [CLS] and [START] token features from the image and text cross-modality encoders are z^cross_im and z^cross_txt, respectively. To fuse these two features, we compute the element-wise product of z^cross_im and z^cross_txt (z^cross_im * z^cross_txt), and a FC layer followed by softmax is applied to obtain the final prediction (Lu et al., 2019). For training, we use y_ITM = 1 when z^cross_im and z^cross_txt are a pair; otherwise, y_ITM = 0. The loss, L_ITM, is defined as
$$\mathcal{L}_{ITM} = \mathbb{E}_{(I,T)\sim D}\big[H\big(y_{ITM},\, g_{itm}(z_{im}^{cross} * z_{txt}^{cross})\big)\big]. \qquad (3)$$ Following (Li et al., 2021), we sample in-batch hard negatives based on the distribution of the cosine similarity between z_im and z_txt. The overall pre-training loss, L, is defined as L = L_MVLM + L_ITC + L_ITM. We term our model trained with loss L as MaskVLM (Masked Vision and Language Modeling).
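For completeness, a minimal sketch of the ITC and ITM losses of Eqs. (2) and (3) is given below; the two cross-entropy terms in itc_loss correspond to the image-to-text and text-to-image terms of Eq. (2). Names and the temperature value are illustrative, and hard-negative sampling is omitted.

```python
import torch
import torch.nn.functional as F

def itc_loss(z_im, z_txt, tau=0.07):
    """Eq. (2): symmetric contrastive loss over a batch of unit-norm projections.
    z_im, z_txt: (N, d). Row k of `logits` holds similarities of image k to all texts."""
    logits = z_im @ z_txt.t() / tau
    targets = torch.arange(z_im.size(0), device=z_im.device)
    return F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets)

def itm_loss(itm_head, z_im_cross, z_txt_cross, labels):
    """Eq. (3): binary matching prediction from the element-wise product of the
    cross-modality [CLS]/[START] features; itm_head maps d -> 2 logits."""
    logits = itm_head(z_im_cross * z_txt_cross)
    return F.cross_entropy(logits, labels)
```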
3.3 PROBABILISTIC INTERPRETATION
We differentiate MaskVLM from the existing V+L models using masked signal modeling from the perspective of likelihood estimation. The training objective of masked signal modeling on unimodal signals, X, focuses on learning the data distribution p(X), which is formulated by the law of total probability as $p(X) = \sum_{X_m \in \mathcal{M}_X} p(X_m)\, p(X \mid X_m)$, where X_m is an instance of a masked signal from the set of all possible masked signals, $\mathcal{M}_X$. MIM or MLM learns the data distribution by maximizing $\sum_{X_m \in \mathcal{M}_X} p(X \mid X_m)$ (Bengio et al., 2013).
In V+L representation learning, the ultimate goal is to learn the joint distribution of multi-modal signals, p(I, T). However, the authors in (Sohn et al., 2014) pointed out that directly maximizing the likelihood of the joint distribution is challenging because of the heterogeneous multi-modal data distributions. Instead, they show that minimizing the variation of information, defined as $-\mathbb{E}_{(I,T)\sim D}[\log p(I \mid T) + \log p(T \mid I)]$, is sufficient to estimate the joint distribution. From the perspective of variation of information, the limitations in existing works can be better understood. Several existing works attempted to approximate the joint distribution using MLM with unmasked images (Duan et al., 2022; Li et al., 2021; 2019; Yang et al., 2022). In other words, p(T | I, T_m) is maximized to learn the conditional distribution, p(T | I), but p(I | T) is not modeled. In other existing works (Chen et al., 2020b; Li et al., 2020a; Lu et al., 2020; Su et al., 2019; Tan & Bansal, 2019), where both modalities are masked, the visual masking is limited to masking visual features extracted from a frozen object detector, ψ(·), instead of the raw image pixels. In this case, the distributions p(ψ(I) | T) and p(T | ψ(I)) are modeled instead of p(I | T) and p(T | I). This frozen feature extractor can bottleneck the direct estimation of the underlying data distribution. MaskVLM is trained end-to-end to estimate both conditional distributions, p(I | T) and p(T | I), which directly minimizes the variation of information. We hypothesize that this modeling of conditional distributions for both modalities leads to superior performance in both large-scale and limited-data training scenarios, which we empirically demonstrate in Section 4.
4 EXPERIMENTS
4.1 PRE-TRAINING DATASETS AND DOWNSTREAM TASKS
We use the union of four datasets for pre-training so that we can perform a fair comparison with existing state-of-the-art methods (Chen et al., 2020b; Li et al., 2021). These datasets are Conceptual Captions (CC) (Sharma et al., 2018), SBU Captions (Ordonez et al., 2011), Visual Genome (VG) (Krishna et al., 2017), and COCO Captions (Lin et al., 2014). While VG and COCO contain captions annotated by humans, CC and SBU Captions are automatically collected from the web. The total number of unique images and image-text pairs in the four datasets are 4.1M and 5.2M, respectively. We term this pre-training dataset as the 4M dataset. We validate the pre-trained model on the following four downstream tasks:
Image-Text Retrieval: We perform text-to-image and image-to-text retrieval. We use the ITC and ITM losses of Section 3.2 for finetuning and the finetuned models are evaluated on COCO (Lin et al., 2014) and Flickr30k (Plummer et al., 2015). In addition, since COCO is used for pre-training, zero-shot retrieval performance is reported on Flickr30k. In (Li et al., 2021), the model finetuned on COCO is used for the zero-shot evaluation on Flickr30k. Although it may result in better performance, we believe that using the finetuned model does not validate the zero-shot capability of the
pre-trained model. Therefore, we use the pre-trained model directly for zero-shot evaluation. Following (Li et al., 2021), we first retrieve top-k candidates using the similarity scores from the image and the text encoders. The top-k candidates are further processed by the cross-modality encoders to obtain the final retrieval results.
Visual Question Answering (VQA): Here, given an image and a question pair, the model should generate a correct answer. The model is evaluated on VQA v2 (Goyal et al., 2017). We adopt the answer generation framework (Cho et al., 2021) and finetune the base model with a fusion encoder and an answer decoder. The model architectures are visualized in Figure 6 (a) of Appendix. The fusion encoder consists of one cross-attention block shown in Figure 3 (b). The output from the text cross-modality encoder is used as queries, and the image cross-modality encoder output is utilized to create attentions in the fusion encoder. The architecture of the answer decoder is the same as that of the text cross-modality encoder, but it is trained with a language modeling loss to generate the answers. Specifically, the output of the fusion encoder is used for computing attentions and the answer tokens are autoregressively predicted. During inference, [START] token is used as an initial token to generate following answer tokens.
Natural Language for Visual Reasoning (NLVR): This task involves a binary classification with a triplet, (text, image1, image2). The goal here is to predict whether the text describes the pair of images. For finetuning, we feedforward (text, image1) and (text, image2) separately to extract the features as shown in Figure 6 (b). The [CLS] token features of image1 and image2 from the image encoder are denoted as v1 and v2, respectively. The [START] token text features from the text encoder are denoted as w. These features are processed by the cross-modality encoders. The outputs of the image and text cross-modality encoders are fused by element-wise multiplication. The fused features for both images are concatenated, and a classifier with two linear layers predicts whether the text is aligned with the image pair or not. NLVR2 (Suhr et al., 2018) is used for the evaluation.
Visual Entailment (VE): Given an image text pair, the task is to classify the relationship between the image and the text into one of three categories: entailment, neutral, and contradictory. The element-wise product of the output from the image and the text cross-modality encoders is forwarded to a classifier of two linear layers for prediction. SNLI-VE (Xie et al., 2019) is used for evaluation.
4.2 IMPLEMENTATION DETAILS
We use a Vision Transformer (ViT) (Dosovitskiy et al., 2020) pre-trained on ImageNet (Deng et al., 2009) and a pre-trained RoBERTa from (Liu et al., 2019) to initialize the image and the text encoder, respectively. We pre-train the model for 50 epochs when the 4M dataset is used and 30 epochs for all other experiments. A batch size of 512 is used with 16 NVIDIA Tesla V100 GPUs. All parameters are optimized using AdamW (Loshchilov & Hutter, 2017) with a weight decay of 0.05. Following (Xie et al., 2021), we use an image masking ratio of 60%. While a 15% masking ratio is used for text in language models (Devlin et al., 2018; Liu et al., 2019), we use 30% since the paired image can provide additional information for text reconstruction. During pre-training, the learning rate is warmed up to 3 × 10^-4 in the first 5 epochs and decayed to 3 × 10^-5 using a cosine scheduler. The learning rates for the image encoder and the text encoder are set to 10^-5, which is lower than that of the cross-modality encoders. An image size of 224 × 224 and RandAugment (Cubuk et al.,
2020) are used. During finetuning, the image is resized to 384× 384 and the positional encoding is interpolated following (Dosovitskiy et al., 2020). More details can be found in Appendix.
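The warmup-then-cosine schedule described above can be sketched as follows (a per-epoch approximation with the values stated in this section; the exact per-step schedule used for training may differ).

```python
import math

def learning_rate(epoch, total_epochs=50, warmup_epochs=5,
                  peak_lr=3e-4, final_lr=3e-5):
    """Linear warmup to peak_lr over the first epochs, then cosine decay to final_lr."""
    if epoch < warmup_epochs:
        return peak_lr * (epoch + 1) / warmup_epochs
    progress = (epoch - warmup_epochs) / max(1, total_epochs - warmup_epochs)
    return final_lr + 0.5 * (peak_lr - final_lr) * (1.0 + math.cos(math.pi * progress))
```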
4.3 EVALUATION ON IMAGE-TEXT RETRIEVAL, VQA, NLVR, AND VE
We compare the finetuned image-text retrieval performance of the proposed MaskVLM with the state-of-the-art methods in Table 1. The second column is the number of unique images used for pre-training, and the retrieval performance is evaluated in terms of Recall@k (R@k). We do not directly compare with ALIGN (Jia et al., 2021) since it is trained with more than 300 times the data used for MaskVLM. However, we still highlight the small performance gap between MaskVLM and ALIGN. We achieve the best performance in all Recall@k metrics on both COCO and Flickr30k except for the image retrieval R@10 and text retrieval R@5 on Flickr30k. Compared to ALIGN, we even achieve higher R@1 for image retrieval on COCO and text retrieval on Flickr30k. Table 2 shows the zero-shot retrieval performance of the state-of-the-art methods on Flickr30k. MaskVLM achieves a significant improvement over the second best method, ALBEF (Li et al., 2021), by 6.8 points at R@1 for image retrieval. Given that ALBEF is not trained for MIM, we hypothesize that ALBEF achieves performance biased toward text retrieval, while MaskVLM achieves the significant improvement in image retrieval through the additional MIM, which models p(I|T). While FLAVA exploits both MLM and MIM with a pre-trained image tokenizer, using 13 times more data than MaskVLM, MaskVLM still outperforms FLAVA by 9.8 and 19.3 points at R@1 for image and text retrieval, respectively. Compared with CLIP (Radford et al., 2021), which is trained with at least 76 times more data than MaskVLM, we still achieve higher R@1 for image retrieval by 6.3 points. In general, MaskVLM achieves state-of-the-art performance in both finetuning and zero-shot experiments.
We report the accuracies on VQA, NLVR, and VE in Table 3. With the exception of SimVLM, whose pre-training data is more than 300 times larger than that of MaskVLM, we consistently achieve the best
performances in all these tasks except for the validation split of NLVR2. In particular, MaskVLM is better than the second best method by 0.43, 1.14, and 0.27 on the test splits of VQA, NLVR2, and SNLI-VE, respectively. Compared to the base model of SimVLM, we narrow the accuracy gaps to 2.74% and 3.48% in test-std and test splits of VQA and VE, respectively. MaskVLM achieves higher accuracy than SimVLMbase in the test split of NLVR2 by 0.21%.
4.4 EVALUATION WITH LIMITED PRE-TRAINING DATA
We highlight the performance of MaskVLM in limited data scenarios. In particular, we create three subsets of the 4M pre-training data by sampling 50%, 25%, and 10% of CC and combining them with COCO. The number of image-text pairs in each subset is around 39%, 25%, and 16% of the 4M pre-training data, which contains 5.2M pairs, respectively. We pre-train models with these subsets of the data and analyze the downstream task performance in comparison with state-of-the-art methods. The results are reported in Table 4. Particularly, image and text retrieval R@1 performance on COCO is also visualized in Figure 4. We compare MaskVLM with
the most recent state-of-the-art methods, which are ALBEF (Li et al., 2021) and Codebook (Duan et al., 2022). In Table 4, as the size of pre-training data becomes smaller from CC 50% + COCO to CC 10% + COCO, the performance gap between MaskVLM and ALBEF increases from 6.39 to 8.71 at R@1 in COCO image retrieval (IR), 7.46 to 9.04 at R@1 in COCO text retrieval (TR), 1.17 to 1.79 in VQA, and 0.31 to 1.24 in the test set of SNLI-VE. In NLVR2 and VQA, MaskVLM trained with CC 10% + COCO achieves higher accuracy than ALBEF trained with CC 50% + COCO, which contains more than twice the image-text pairs of CC 10% + COCO. In Figure 4, while Codebook shows competitive recall performance compared to MaskVLM on the 4M dataset (5.2M pairs), the R@1 differences in image and text retrieval, respectively, increase from 1.4 and 1.0 on the 4M dataset to 3.15 and 3.80 on CC 50% + COCO. Our model trained with CC 25% + COCO outperforms Codebook trained with CC 50% + COCO by 1.90 and 2.76 points in terms of image and text retrieval R@1, respectively. Since one of the main differences of MaskVLM compared to ALBEF and Codebook is the additional MIM, we believe that joint modeling of V+L contributes to the better performance in limited data scenarios.
4.5 ABLATION STUDY
We perform an ablation study using different combinations of loss components to highlight the contribution of masked V+L modeling. We compare six models with the same architecture but with different loss functions for pre-training. We pre-train all models on the CC 50% + COCO dataset and compare finetuned and zero-shot retrieval performance on Flickr30k in Table 5. We note that zero-shot evaluation of the MLM + MIM model cannot be performed because the FC layers to compute ITM and ITC are not trained during pre-training. ITC and ITM are closely related to the retrieval task since they are used for finetuning and measuring the similarity between images and texts. However, MLM + MIM still achieves significantly better finetuned and zero-shot performance than ITC, which shows that MLM + MIM alone learns meaningful V+L representations. Compared to ITC + ITM in the finetuned performance, ITC + ITM + MLM achieves slightly improved R@1
by 0.38 in image retrieval and degraded R@1 by 0.3 in text retrieval. When MIM alone is used with ITC + ITM as well, finetuned R@1 is improved by 0.16 and degraded by 0.8 for image and text retrieval, respectively, over ITC + ITM. On the other hand, when ITC + ITM + MLM + MIM is used, the model achieves significant improvement of finetuned performance over ITC + ITM + MLM by 0.92 and 2.10 for R@1 image and text retrieval, respectively. ITC + ITM + MLM + MIM also obtains the best performance in zero-shot retrieval. This result further supports the advantage of joint modeling for masked V+L signals.
4.6 QUALITATIVE RESULTS
We perform a qualitative analysis to show the role of multi-modal information in the reconstruction of masked signals from our model. To be specific, we illustrate the prediction of masked text tokens with and without the corresponding images. This illustration highlights how MaskVLM effectively utilizes information from both modalities to complete the masked signal modeling task. Figure 5 shows the reconstruction of masked texts using original images (“Recon (org)”) and masked images (“Recon (mask)”). In the top example, when the model is given a masked text and a masked image which does not contain the “dog”, the reconstruction is performed by using only available information such as image patches of “green grass”. Thus, the prediction is limited to “a brown of motion” or “the green forest”. However, when the original image is used for reconstruction, both “a brown and white dog” and “white fence” are reconstructed by accurately attending to the image. In the bottom example, the visible patches of the masked image contain a few people, but lack background information. Consequently, the reconstruction with the masked image does not contain any background information, but the background “cave” is reflected in the reconstruction with the original image. These examples confirm that MaskVLM has learned to perform masked signal modeling using both V+L information.
5 CONCLUSION
We propose masked vision and language modeling as a pre-training task for learning V+L representations. We provide a probabilistic interpretation to highlight the contribution of the proposed method and validate its advantages in both large-scale and limited data regimes. We consistently achieve the state-of-the-art performance in a broad range of V+L tasks.
ACKNOWLEDGMENTS
We thank Jiali Duan for providing results of the Codebook (Duan et al., 2022) with limited pretraining data in Figure 4.
A APPENDIX
A.1 DETAILS ON FINETUNING FOR DOWNSTREAM TASKS
We explain implementation details for each of the downstream tasks. For all the downstream tasks, we use AdamW (Loshchilov & Hutter, 2017) with a weight decay of 0.05 and the cosine scheduler. An image size of 384 × 384 with RandAugment (Cubuk et al., 2020) is utilized and the positional encoding is interpolated following (Dosovitskiy et al., 2020). Except for the VQA task, we use the model that achieves the best performance on the validation set to report the performance on the test set. We use the last-epoch model for the VQA evaluation.
Image-Text Retrieval: COCO (Lin et al., 2014) and Flickr30k (Plummer et al., 2015) are used to report the performance. To be specific, we follow data splits proposed in (Karpathy & Fei-Fei, 2015) and an average recall over image and text retrieval is used to find the best model in the validation set. The pre-trained model is finetuned for 15 epochs with a batch size of 256 and a learning rate of 1 × 10^-5. Visual Question Answering (VQA): For a fair comparison with existing methods (Chen et al., 2020b; Li et al., 2021), we use training and validation sets from VQA v2.0 (Goyal et al., 2017) with a subset of VQA samples from Visual Genome (Krishna et al., 2017) for training. Also, we report performance on both test-dev and test-std splits of VQA v2.0. Following (Li et al., 2021), we weight the loss for each answer based on its occurrence among all the answers. The model is finetuned for 15 epochs with a batch size of 256. We use a learning rate of 2 × 10^-5 for the image and the text cross-modality encoders, the fusion encoder, and the answer decoder. For the image and the text encoders, a learning rate of 1 × 10^-5 is used. The fusion encoder and the answer decoder are initialized by the last and all three blocks of the pre-trained text cross-modality encoder, respectively.
Natural Language for Visual Reasoning (NLVR): Data splits proposed in (Suhr et al., 2018) are used for finetuning and evaluation. The model is finetuned for 5 epochs with a batch size of 128. Since the classifier is newly added for finetuning, we use a learning rate of 1 × 10^-4 for the classifier and 1 × 10^-5 for the remaining parts of the model. Different from (Duan et al., 2022; Li et al., 2021; Yang et al., 2022), where the models require an additional text-assignment pre-training step before finetuning, we directly finetune for simplicity.
Visual Entailment (VE): We follow data splits proposed in SNLI-VE (Xie et al., 2019). We finetune the model with a batch size of 256 for 5 epochs. Similar to the NLVR task, a learning rate of 1 × 10^-4 is used for the classifier and 1 × 10^-5 is used for the remaining parts of the model.
A.2 REPRODUCIBILITY
We add more details of MaskVLM for reproducibility. We used the ImageNet-pretrained ViT (vit_base_patch16_224) from (Wightman, 2019) and the pre-trained RoBERTa (roberta-base) from Hugging Face (Wolf et al., 2020). The detailed model architectures are visualized in Figure 7. Following (Dosovitskiy et al., 2020), the image encoder uses layer normalization (Ba et al., 2016) before each multi-head attention block, while the text encoder applies layer normalization after each multi-head attention block (post-norm). For the image (text) cross-modality encoder, we adopt the post-norm and use the outputs of the text (image) encoder as keys and values
to compute cross-attention. To compute MIM and MLM, the self-attention outputs of the masked image features, v_m, are used as queries and the unmasked text features, w, are used as keys and values in the image cross-modality encoder. For the text cross-modality encoder, the masked text features, w_m, are used as queries and the unmasked image features, v, are used as keys and values. To keep the framework simple, we do not use any loss weighting for the individual loss terms, nor layer decay during finetuning.
A.3 ABLATION STUDY ON MASKING STRATEGIES
We study different masking strategies in computing MIM and MLM losses. In particular, we compare MaskVLM using one modality masked and the other modality unmasked for reconstruction (MaskVLM (one)) and MaskVLM using both modalities masked at the same time for reconstruction. We compare these two MaskVLM models with the state-of-the-art method, ALBEF in Table 6. We follow the experimental setup described in Section 4.1 and report the finetuning performance on image-text retrieval. The performance of masking one modality at a time (MaskVLM (one)) was slightly better than masking both modalities at the same time (MaskVLM (both)). However, we observed that both reconstruction strategies are still effective as they achieve higher R@1 for image and text retrieval in both COCO and Flickr30k compared to ALBEF.
A.4 ABLATION STUDY ON MASKING RATIO
We perform an ablation study using different masking ratios for masked vision and language modeling. In particular, we pre-train MaskVLM with several combinations of image and text masking ratios on the CC 50% + COCO dataset and report the finetuned image-text retrieval performance on Flickr30k in Table 7. We also report an average of R@k for image and text retrieval. When only the image masking ratio is changed in the first three rows of the table, the difference between the maximum and the minimum of the average recall is 0.26 for image retrieval and 0.10 for text retrieval. This shows that MaskVLM achieves stable performance across the tested image masking ratios. From a
comparison between the second row and the last row, we observe that increasing the text masking ratio from 0.15 to 0.3 leads to higher recall performance for both image and text retrieval.
A.5 EVALUATION ON IMAGE RECOGNITION
We evaluate the image recognition performance of MaskVLM. Following CLIP (Radford et al., 2021), we perform image classification directly using the pre-trained MaskVLM on various image recognition datasets including UC Merced Land Use (Yang & Newsam, 2010), MIT-67 (Quattoni & Torralba, 2009), CUB-200 (Wah et al., 2011), Oxford Flowers (Nilsback & Zisserman, 2008), Caltech-256 (Griffin et al., 2007), and ImageNet-1K (Deng et al., 2009). We compare the Top-1 accuracy of MaskVLM with ALBEF (Li et al., 2021) in Table 8. Both models are pre-trained with the 4M dataset. Since during the pre-training stage both MaskVLM and ALBEF were initialized with the ImageNet pre-trained weights, the evaluation on ImageNet is not strictly zero-shot, but the evaluation on the other datasets is. We formulate image classification as image-to-text retrieval, where the similarity scores between a query image and all the class names are computed to retrieve the top-1 class name. The similarity scores can be obtained using either ITC or ITM scores, and we report them separately. Also, the results of using prompt engineering as in CLIP are reported.
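As a sketch of this protocol, zero-shot classification with ITC scores reduces to image-to-text retrieval over the class names (names below are ours; the ITM variant would instead score each image-class pair with the matching head).

```python
import torch

@torch.no_grad()
def zero_shot_classify(z_image: torch.Tensor, z_class_texts: torch.Tensor) -> int:
    """z_image: (d,) unit-norm image embedding; z_class_texts: (C, d) unit-norm
    embeddings of the class names (optionally wrapped in prompts).
    Returns the index of the top-1 class by cosine similarity."""
    sims = z_class_texts @ z_image
    return int(sims.argmax())
```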
As shown in Table 8, MaskVLM consistently outperforms ALBEF across all the datasets. In particular, prompt engineering improves the average accuracy across all the datasets for MaskVLM but ALBEF achieves lower average accuracy with prompt engineering. This shows that MaskVLM can better align images with variants of text than ALBEF, which results in stronger V+L representations of MaskVLM for the image recognition task.
A.6 ADDITIONAL EXAMPLES FOR THE QUALITATIVE ANALYSIS
We present additional examples for the qualitative analysis of MaskVLM in Figure 8. Similar to Figure 5, masked text tokens are reconstructed with masked images (“Recon (mask)”) and original images (“Recon (org)”). We highlight that MaskVLM utilizes both V+L information to reconstruct the text which corresponds to the given image.
A.7 STATISTICS OF THE PRE-TRAINING DATASET
In Table 9, we report the statistics of the 4M pre-training dataset that MaskVLM is trained on. We note that some data URLs provided in the web datasets can become invalid, which may lead to a slightly different number of image-text pairs depending on when the datasets are downloaded. | 1. What is the main contribution of the paper regarding vision-language models?
2. What are the strengths and weaknesses of the proposed approach, particularly in its simplicity, novelty, and performance?
3. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content?
4. Are there any concerns or suggestions regarding the comparisons with other works and the lack of error bars in the experiments? | Summary Of The Paper
Strengths And Weaknesses
Clarity, Quality, Novelty And Reproducibility | Summary Of The Paper
This paper considers training vision-language models by masked reconstruction. Specifically, the model is trained to reconstruct a masked image conditioned on the paired text, and vice versa. In addition to this task, the standard contrastive and ITM tasks are added during pretraining. The authors use public small-scale VL datasets for pretraining and beat baseline methods on retrieval and VQA/NLVR-type tasks. The authors also show that the proposed method works well in low-data regimes and that all tasks contribute to the final performance.
Strengths And Weaknesses
Strengths:
The proposed method is simple and intuitive.
The paper relies on open-source datasets which significantly improves reproducibility.
The paper shows strong improvements over baseline methods.
Weaknesses:
There are multiple missing references. The novelty of the proposed method should be discussed in light of these.
Given that there are missing references, the baselines in the tables might need to be revised. Thus, it is not clear how well the model performs compared to SOTA methods.
Error bars are missing.
Clarity, Quality, Novelty And Reproducibility
The paper is well-written and clear. The method is simple which I think is a strength, and this additionally makes the exposition clear.
The paper relies on public datasets which aid in reproducibility – I’d like to ask if the authors plan to open-source the code. One potential issue with reproducibility is the lack of error bars in the comparisons. Could error bars be added at least for the fine-tuning tasks? This would ensure that the improvements are not statistical outliers.
A significant issue with the paper is that several closely related references are missing. Here are a few:
https://arxiv.org/abs/2208.10442
https://arxiv.org/abs/2205.14204
https://arxiv.org/abs/2204.01678
The novelty of the proposed method should be discussed in light of these previous papers which also focus on masked training for VL tasks. Some of these methods should also be added to the tables to ensure that the method is compared to the latest work. |
ICLR | Title
Align-RUDDER: Learning From Few Demonstrations by Reward Redistribution
Abstract
Reinforcement Learning algorithms require a large number of samples to solve complex tasks with sparse and delayed rewards. Complex tasks are often hierarchically composed of sub-tasks. Solving a sub-task increases the return expectation and leads to a step in the Q-function. RUDDER identifies these steps and then redistributes reward to them, thus immediately giving reward if sub-tasks are solved. Since the delay of rewards is reduced, learning is considerably sped up. However, for complex tasks, current exploration strategies struggle with discovering episodes with high rewards. Therefore, we assume that episodes with high rewards are given as demonstrations and do not have to be discovered by exploration. Unfortunately, the number of demonstrations is typically small and RUDDER’s LSTM as a deep learning model does not learn well on these few training samples. Hence, we introduce Align-RUDDER, which is RUDDER with two major modifications. First, Align-RUDDER assumes that episodes with high rewards are given as demonstrations, replacing RUDDER’s safe exploration and lessons replay buffer. Second, we substitute RUDDER’s LSTM model by a profile model that is obtained from multiple sequence alignment of demonstrations. Profile models can be constructed from as few as two demonstrations. Align-RUDDER uses reward redistribution to speed up learning by reducing the delay of rewards. Align-RUDDER outperforms competitors on complex artificial tasks with delayed rewards and few demonstrations. On the MineCraft ObtainDiamond task, Align-RUDDER is able to mine a diamond, though not frequently.
1 INTRODUCTION
Reinforcement learning algorithms struggle with learning complex tasks that have sparse and delayed rewards (Sutton & Barto, 2018; Rahmandad et al., 2009; Luoma et al., 2017). For delayed rewards, temporal difference (TD) learning suffers from vanishing information (Arjona-Medina et al., 2019). On the other hand, Monte Carlo (MC) estimation has high variance since it must average over all possible futures (Arjona-Medina et al., 2019). Monte-Carlo Tree Search (MCTS), used for Go and chess, can handle delayed and rare rewards since it has a perfect environment model (Silver et al., 2016; 2017). RUDDER (Arjona-Medina et al., 2019; 2018) has been shown to excel in model-free learning of policies when only sparse and delayed rewards are given. RUDDER requires episodes with high rewards to store them in its lessons replay buffer for learning a reward redistribution model such as an LSTM network. However, for complex tasks, current exploration strategies find episodes with high rewards only after an incommensurately long time. Humans and animals obtain high-reward episodes from teachers, role models, or prototypes. Along this line, we assume that episodes with high rewards are given as demonstrations. Since generating demonstrations is often tedious for humans and time-consuming for exploration strategies, typically only a few demonstrations are available. However, RUDDER's LSTM (Hochreiter, 1991; Hochreiter & Schmidhuber, 1997a) as a deep learning method requires many examples for learning. Therefore, we introduce Align-RUDDER, which replaces RUDDER's LSTM with a profile model obtained from multiple sequence alignment (MSA) of the demonstrations. Profile models are well known in bioinformatics. They are used to score new sequences according to their sequence similarity to the aligned sequences. Like RUDDER, Align-RUDDER also performs reward redistribution (using an alignment model), which considerably speeds up learning even if only a few demonstrations are available.
Our main contributions are:
• We suggest a reinforcement algorithm that works well for sparse and delayed rewards, where standard exploration fails but few demonstrations with high rewards are available.
• We adopt multiple sequence alignment from bioinformatics to construct a reward redistribution technique that works with few demonstrations.
• We propose a method that uses alignment techniques and reward redistribution for identifying sub-goals and sub-tasks which in turn allow for hierarchical reinforcement learning.
2 REVIEW OF RUDDER
Basic insight: Q-functions for complex tasks are step functions. Complex tasks are typically composed of sub-tasks. Therefore, the Q-function of an optimal policy resembles a step function. The Q-function is the expected future return, and it increases (i.e., makes a step) when a sub-task is completed. Identifying large steps in the Q-function speeds up learning since it allows (i) to increase the return by performing actions that cause the step and (ii) to sample episodes with a larger return for learning.
An approximation to the Q-function must predict the expected future return for every state-action pair. However, a Q-function that resembles a step-function is mostly constant. Therefore predictions are only necessary at the steps. We have to identify the relevant state-actions that cause the steps and then predict the size of the steps. An LSTM network (Hochreiter, 1991; Hochreiter & Schmidhuber, 1995; 1997a;b) can identify relevant state-actions that open the input gate to store the size of the steps in the memory cells. Consequently, LSTM only updates its states and changes its return prediction when a new relevant state-action pair is observed. Therefore, both the change of the prediction and opening input gates indicate Q-function steps through an LSTM network that predicts the return of an episode.
Reward Redistribution. We consider episodic Markov decision processes (MDPs), i.e., the reward is only given once at the end of the sequence. The Q-function is assumed to be a step function, that is, the task can be decomposed into sub-tasks (see previous paragraph). Reward redistribution aims at giving the differences in the Q-function of an optimal policy as a new immediate reward. Since the Q-function of an optimal policy is not known, we approximate it by predicting the expected return by an LSTM network or by an alignment model in this work. The differences in predictions determine the reward redistribution. The prediction model will first identify the largest steps in the Q-function as they decrease the prediction error most. Fortunately, just identifying the largest steps even with poor predictions speeds up learning considerably. See Figure 1 for a description of the reward redistribution.
Learning methods based on reward redistribution. The redistributed reward serves as reward for a subsequent learning method: (A) The Q-values can be directly estimated (Arjona-Medina et al., 2019), which is used in the experiments for the artificial tasks and BC pre-training for MineCraft. (B) Redistributed rewards can serve for learning with policy gradients like Proximal Policy Optimization (PPO) (Schulman et al., 2018), which is used in the MineCraft experiments. (C) Redistributed rewards can serve for temporal difference learning like Q-learning (Watkins, 1989).
LSTM models for reward redistribution. RUDDER uses an LSTM model for predicting the future return. The reward redistribution is the difference between two subsequent predictions. If a state-action pair increases the prediction of the return, then it is immediately rewarded. Using state-action sub-sequences (s, a)_{0:t} = (s_0, a_0, ..., s_t, a_t), the redistributed reward is R_{t+1} = g((s, a)_{0:t}) − g((s, a)_{0:t−1}), where g is an LSTM model that predicts the return of the episode. The LSTM model learns at first to approximate the largest steps of the Q-function since they reduce the prediction error the most.
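This prediction-difference scheme can be sketched in a few lines (our illustration; the treatment of the first step, where no previous prediction exists, is a convention and is set to zero here).

```python
import torch

def redistribute_from_predictions(return_predictions: torch.Tensor) -> torch.Tensor:
    """Redistributed rewards R_{t+1} = g((s,a)_{0:t}) - g((s,a)_{0:t-1}).
    return_predictions: (T,) tensor with g((s,a)_{0:t}) for t = 0..T-1."""
    previous = torch.cat([torch.zeros(1, dtype=return_predictions.dtype),
                          return_predictions[:-1]])
    return return_predictions - previous
```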
3 ALIGN-RUDDER: RUDDER WITH FEW DEMONSTRATIONS
In bioinformatics, sequence alignment identifies similarities between biological sequences to determine their evolutionary relationship (Needleman & Wunsch, 1970; Smith & Waterman, 1981). The result of the alignment of multiple sequences is a profile model. The profile model is a consensus sequence, a frequency matrix, or a Position-Specific Scoring Matrix (PSSM) (Stormo et al., 1982). New sequences can be aligned to a profile model and receive an alignment score that indicates how well the new sequences agree to the profile model.
Align-RUDDER uses such alignment techniques to align two or more high-return demonstrations. For the alignment, we assume that the demonstrations follow the same underlying strategy; therefore they are similar to each other, analogous to being evolutionarily related. If the agent generates a state-action sequence (s, a)_{0:t−1}, then this sequence is aligned to the profile model g, giving a score g((s, a)_{0:t−1}). The next action of the agent extends the state-action sequence by one state-action pair (s_t, a_t). The extended sequence (s, a)_{0:t} is also aligned to the profile model g, giving another score g((s, a)_{0:t}). The redistributed reward R_{t+1} is the difference of these scores: R_{t+1} = g((s, a)_{0:t}) − g((s, a)_{0:t−1}) (see Eq. (1)). This difference indicates how much of the return is gained or lost by adding another sequence element.
Align-RUDDER scores how close an agent follows an underlying strategy, which has been extracted by the profile model. Similar to the LSTM model, we identify the largest steps in the Q-function via relevant events determined by the profile model. Therefore, redistributing the reward by sequence alignment fits into the RUDDER framework with all its theoretical guarantees. RUDDER’s theory for reward redistribution is valid for LSTM, other recurrent networks, attention mechanisms, or sequence and profile models.
Advantages of alignment compared to LSTM. Learning an LSTM model is severely limited when very few demonstrations are available. First, LSTM is known to require a large number of samples to generalize to new sequences. In contrast, sequence alignment requires only two examples to generalize well as known from bioinformatics. Second, expert demonstrations have high rewards.
Therefore random demonstrations with very low rewards have to be generated. LSTM does not generalize well when only these extreme reward cases can be observed in the training set. In contrast, sequence alignment only uses examples that are closely related; that is, they belong to the same category (expert demonstrations).
Reward Redistribution by Sequence Alignment. The new reward redistribution approach consists of five steps, see Fig. 3: (I) Define events to turn episodes of state-action sequences into sequences of events. (II) Determine an alignment scoring scheme, so that relevant events are aligned to each other. (III) Perform a multiple sequence alignment (MSA) of the demonstrations. (IV) Compute the profile model, e.g. a PSSM. (V) Redistribute the reward: each sub-sequence τ_t of a new episode τ is aligned to the profile. The redistributed reward R_{t+1} is proportional to the difference of scores S based on the PSSM given in step (IV), i.e. R_{t+1} ∝ S(τ_t) − S(τ_{t−1}). In the following, the five steps of Align-RUDDER's reward redistribution are outlined. For the interested reader, each step is detailed in Sec. A.3 in the appendix. Finally, in Sec. A.7.3 in the appendix, we illustrate these five steps on the example of Minecraft.
(I) Defining Events. Instead of states, we consider differences of consecutive states to detect a change caused by an important event like achieving a sub-goal. An event is defined as a cluster of state differences. We use similarity-based clustering like affinity propagation (AP) (Frey & Dueck, 2007). If states are only enumerated, we suggest to use the “successor representation” (Dayan, 1993) or “successor features” (Barreto et al., 2017). We use the demonstrations combined with state-action sequences generated by a random policy to construct the successor representation.
A sequence of events is obtained from a state-action sequence by mapping states s to its cluster identifier e (the event) and ignoring the actions. Alignment techniques from bioinformatics assume sequences composed of a few events, e.g. 20 events. If there are too many events, good fitting alignments cannot be distinguished from random alignments. This effect is known in bioinformatics as “Inconsistency of Maximum Parsimony” (Felsenstein, 1978).
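A minimal sketch of step (I) is given below; it clusters raw state differences with affinity propagation, whereas the paper may additionally use successor representations or successor features as state descriptors.

```python
import numpy as np
from sklearn.cluster import AffinityPropagation

def define_events(state_sequences):
    """Cluster differences of consecutive states and map each transition to its
    cluster id (the event). state_sequences: list of (T_i, d) arrays."""
    diffs = np.concatenate([np.diff(seq, axis=0) for seq in state_sequences], axis=0)
    clustering = AffinityPropagation(random_state=0).fit(diffs)
    event_sequences, start = [], 0
    for seq in state_sequences:
        n = len(seq) - 1
        event_sequences.append(clustering.labels_[start:start + n].tolist())
        start += n
    return event_sequences, clustering
```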
(II) Determining the Alignment Scoring System. A scoring matrix S with entries s_{i,j} determines the score for aligning event i with j. A priori, we only know that a relevant event should be aligned to itself but not to other events. Therefore, we set s_{i,j} = 1/p_i for i = j and s_{i,j} = α for i ≠ j. Here, p_i is the relative frequency of event i in the demonstrations. α is a hyper-parameter, which is typically a small negative number. This scoring scheme encourages alignment of rare events, for which p_i is small. For more details see Appendix Sec. A.3.
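Step (II) can be sketched as follows (our illustration; p_i is estimated from the event frequencies in the demonstrations and alpha is the off-diagonal hyper-parameter).

```python
import numpy as np

def scoring_matrix(event_sequences, num_events, alpha=-1.0):
    """Diagonal scores 1/p_i from relative event frequencies, off-diagonal scores alpha."""
    counts = np.zeros(num_events)
    for seq in event_sequences:
        for e in seq:
            counts[e] += 1
    p = counts / counts.sum()
    S = np.full((num_events, num_events), alpha)
    np.fill_diagonal(S, 1.0 / np.maximum(p, 1e-8))   # rare events get high self-scores
    return S
```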
(III) Multiple sequence alignment (MSA). An MSA algorithm maximizes the sum of all pairwise scores $S_{MSA} = \sum_{i,j,\,i<j} \sum_{t=0}^{L} s_{i,j,t_i,t_j,t}$ in an alignment, where $s_{i,j,t_i,t_j,t}$ is the score at alignment column t for aligning the event at position $t_i$ in sequence i to the event at position $t_j$ in sequence j. $L \geq T$ is the alignment length, since gaps make the alignment longer than the length of each sequence. We use ClustalW (Thompson et al., 1994) for MSA. MSA constructs a guiding tree by agglomerative hierarchical clustering of pairwise alignments between all demonstrations. This guiding tree allows identifying multiple strategies. For more details see Appendix Sec. A.3.
(IV) Position-Specific Scoring Matrix (PSSM) and MSA profile model. From the alignment, we construct a profile model as a) column-wise event probabilities and b) a PSSM (Stormo et al., 1982). The PSSM is a column-wise scoring matrix to align new sequences to the profile model. More details are given in Appendix Sec. A.3.
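One common way to build such a profile from the MSA columns is sketched below; the log-odds construction against background frequencies is a standard PSSM recipe and may differ from the exact scoring used in the paper.

```python
import numpy as np

def profile_and_pssm(alignment, num_events, pseudocount=1e-3):
    """alignment: (num_seqs, L) integer array of aligned events, with the value
    num_events used as the gap symbol. Returns column-wise event probabilities
    and a column-wise log-odds scoring matrix (PSSM)."""
    num_seqs, L = alignment.shape
    profile = np.zeros((L, num_events))
    for col in range(L):
        for e in alignment[:, col]:
            if e < num_events:                        # ignore gap symbols
                profile[col, e] += 1
    profile = (profile + pseudocount) / (profile.sum(axis=1, keepdims=True)
                                         + num_events * pseudocount)
    background = profile.mean(axis=0)
    pssm = np.log(profile / background)               # log-odds scores per column
    return profile, pssm
```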
(V) Reward Redistribution. The reward redistribution is based on the profile model. A sequence $\tau = e_{0:T}$ ($e_t$ is the event at position t) is aligned to the profile, which gives the score $S(\tau) = \sum_{l=0}^{L} s_{l,t_l}$. Here, $s_{l,t_l}$ is the alignment score for event $e_{t_l}$ at position l in the alignment. Alignment gaps are columns to which no event was aligned, which have $t_l = T+1$ with gap penalty $s_{l,T+1}$. If $\tau_t = e_{0:t}$ is the prefix sequence of $\tau$ of length t+1, then the reward redistribution $R_{t+1}$ for $0 \leq t \leq T$ is
$$R_{t+1} = \big(S(\tau_t) - S(\tau_{t-1})\big)\, C \;=\; g((s,a)_{0:t}) - g((s,a)_{0:t-1}), \qquad R_{T+2} = \tilde{G}_0 - \sum_{t=0}^{T} R_{t+1}, \qquad (1)$$
where $C = \mathbb{E}_{\text{demo}}\big[\tilde{G}_0\big] / \mathbb{E}_{\text{demo}}\big[\sum_{t=0}^{T} \big(S(\tau_t) - S(\tau_{t-1})\big)\big]$ with $S(\tau_{-1}) = 0$. The original return of the sequence $\tau$ is $\tilde{G}_0 = \sum_{t=0}^{T} \tilde{R}_{t+1}$ and the expectation of the return over demonstrations is $\mathbb{E}_{\text{demo}}$. The constant C scales $R_{t+1}$ to the range of $\tilde{G}_0$. $R_{T+2}$ is the correction of the redistributed reward (Arjona-Medina et al., 2019), with zero expectation for demonstrations: $\mathbb{E}_{\text{demo}}[R_{T+2}] = 0$. Since $\tau_t = e_{0:t}$ and $e_t = f(s_t, a_t)$, we can set $g((s,a)_{0:t}) = S(\tau_t)\, C$. We ensure strict return equivalence (Arjona-Medina et al., 2019) by $G_0 = \sum_{t=0}^{T+1} R_{t+1} = \tilde{G}_0$. The redistributed reward depends only on the past: $R_{t+1} = h((s,a)_{0:t})$.
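Eq. (1) can be sketched as follows, given the profile alignment scores of all prefixes of an episode; the constant C is estimated from demonstrations as in Eq. (1), and the last entry is the correction term $R_{T+2}$.

```python
import numpy as np

def align_rudder_redistribution(prefix_scores, episode_return, C):
    """prefix_scores: [S(tau_0), ..., S(tau_T)] from aligning each prefix to the
    profile model; episode_return: original return; C: scaling constant."""
    S = np.asarray(prefix_scores, dtype=float)
    diffs = np.diff(np.concatenate(([0.0], S)))   # S(tau_t) - S(tau_{t-1}), with S(tau_{-1}) = 0
    rewards = C * diffs                           # R_1, ..., R_{T+1}
    correction = episode_return - rewards.sum()   # R_{T+2}, ensures return equivalence
    return np.concatenate((rewards, [correction]))
```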
Sub-tasks. The reward redistribution identifies sub-tasks as alignment positions with high redistributed rewards. These sub-tasks are indicated by high scores s in the PSSM. Reward redistribution also determines the terminal states of sub-tasks since it assigns rewards for solving the sub-tasks. However, reward redistribution and Align-RUDDER cannot guarantee that the redistributed reward is Markov. For redistributed Markov reward, options (Sutton et al., 1999), MAXQ (Dietterich, 2000), or recursive option composition (Silver & Ciosek, 2012) can be used.
Higher Order Markov Reward Redistributions. Align-RUDDER may lead to higher-order Markov redistribution. Corollary 1 in the appendix states that the optimality criterion from Theorem 2 in Arjona-Medina et al. (2019) also holds for higher-order Markov reward redistributions: if the expected redistributed higher-order Markov reward is the difference of Q-values, then the redistribution is optimal and there is no delayed reward. Furthermore, the optimal policies are the same as for the original problem. This corollary is the motivation for redistributing the reward to the steps in the Q-function. In the Appendix, Corollary 2 states that, under a condition, an optimal higher-order reward redistribution can be expressed as the difference of Q-values.
4 EXPERIMENTS
Align-RUDDER is compared on three artificial tasks with sparse & delayed rewards and few demonstrations to Behavioral Cloning with Q-learning (BC+Q), Soft Q Imitation Learning (SQIL) (Reddy et al., 2020), RUDDER (LSTM), and Deep Q-learning from Demonstrations (DQfD) (Hester et al., 2018). GAIL (Ho & Ermon, 2016) failed to solve the two artificial tasks, as reported previously for similar tasks (Reddy et al., 2020). Then, we test Align-RUDDER on the complex MineCraft ObtainDiamond task with episodic rewards (Guss et al., 2019b). All experiments use finite time MDPs with γ = 1 and episodic reward. More details are in Appendix Sec. A.6.
Alignment vs LSTM in 1D key-chest environment. We use a 1D key-chest environment to show the effectiveness of sequence alignment in a low data regime compared to an LSTM model. The agent has to collect the key and then open the chest to get a positive reward at the last timestep. See Appendix Fig. A.9 for a schematic representation of the environment. As the key-events (important state-action pairs) in this environment are known, we can compute the key-event detection rate of
a reward redistribution model. A key event is detected if the redistributed reward of an important state-action pair is larger than the average redistributed reward in the sequence. We train the reward redistribution models with 2, 5, and 10 training episodes and test on 1000 test episodes, averaged over ten trials. Align-RUDDER significantly outperforms LSTM (RUDDER) for detecting these key events in all cases, with an average key-event detection rate of 0.96 for sequence alignment vs. 0.46 for the LSTM models over all dataset sizes. See Appendix Fig. A.10 for the detailed results.
Artificial tasks (I) and (II). They are variations of the gridworld rooms example (Sutton et al., 1999), where cells are the MDP states. In our setting, the states do not have to be time-aware for ensuring stationary optimal policies, but the unobserved used-up time introduces a random effect. The grid is divided into rooms. The agent’s goal is to reach a target from an initial state with the fewest steps. It has to cross different rooms, which are connected by doors, except for the first room, which is only connected to the second room by a teleportation portal. The portal is introduced to avoid BC initialization alone, solving the task. It enforces that going to the portal entry cells is learned when they are at positions not observed in demonstrations. At every location, the agent can move up, down, left, right. The state transitions are stochastic. An episode ends after T = 200 time steps. Suppose the agent arrives at the target. In that case, it goes into an absorbing state where it stays until T = 200 without receiving further rewards. The reward is only given at the end of the episode. Demonstrations are generated by an optimal policy with a 0.2 exploration rate.
The five steps of Align-RUDDER’s reward redistribution are: (1) Events are clusters of states obtained by Affinity Propagation using as similarity the successor representation based on demonstrations. (2) The scoring matrix is obtained according to (II), using = 0 and setting all off-diagonal values of the scoring matrix to −1. (3) ClustalW is used for the MSA of the demonstrations with zero gap penalties and no biological options. (4) The MSA supplies a profile model and a PSSM as in (IV). (5) Sequences generated by the agent are mapped to sequences of events according to (I). The reward is redistributed via differences of profile alignment scores of consecutive sub-sequences according to Eq. (1) using the PSSM. The reward redistribution determines sub-tasks like doors or portal arrival. The sub-tasks partition the Q-table into sub-tables that represent a sub-agent. However, we optimize a single Q-table in these experiments. Defining sub-tasks has no effect on learning in the tabular case.
All compared methods learn a Q-table and use an -greedy policy with = 0.2. The Q-table is initialized by behavioral cloning (BC). The state-action pairs which are not initialized since they are not visited in the demonstrations get an initialization by drawing a sample from a normal distribution. Align-RUDDER learns the Q-table via RUDDER’s Q-value estimation (learning method (A) from
Sec. 2). For BC+Q, RUDDER (LSTM), SQIL, and DQfD a Q-table is learned by Q-learning. Hyperparameters are selected via grid search using the same amount of time for each method. For different numbers of demonstrations, performance is measured by the number of episodes to achieve 80% of the average return of the demonstrations. A Wilcoxon rank-sum test determines the significance of performance differences between Align-RUDDER and the other methods.
Task (I) environment is a 12×12 gridworld with four rooms. The target is in room #4, and the start is in room #1 with 20 portal entry locations. The state contains the portal entry for each episode. Fig. 5 shows the number of episodes required for achieving 80% of the average reward of the demonstrations for different numbers of demonstrations. Results are averaged over 100 trials. Align-RUDDER significantly outperforms all other methods for ≤ 10 demonstrations (p-values < 10^-10). Task (II) is a 12×24 gridworld with eight rooms: the target is in room #8, and the start is in room #1 with 20 portal entry locations. Fig. 5 shows the results with settings as in Task (I). Align-RUDDER significantly outperforms all other methods for ≤ 10 demonstrations (p-values < 10^-19). We also conduct an ablation study of the performance of Align-RUDDER while changing various parameters, such as environment stochasticity (see Sec. A.6.4) and the number of clusters (see Sec. A.6.5).
MineCraft. We further test Align-RUDDER on MineCraft ObtainDiamond task from the MineRL dataset (Guss et al., 2019b). We do not use intermediate rewards given by achieving subgoals from the challenge, since Align-RUDDER is supposed to discover such sub-goals automatically via reward redistribution. We only give a reward for mining the diamond. This requires resource gathering and tool building in a hierarchical way. To the best of our knowledge, no pure learning method (sub-goals are also learned) has mined a diamond yet (Scheller et al., 2020). The dataset contains demonstrations which are insufficient to directly learn a single policy (117 demonstrations, 67 mined a diamond).
Implementation: (1) A state consists of visual input and an inventory (incl. equip state). Both inputs are normalized to the same information, that is, the same number of components and the same variance. We cluster the differences of consecutive states (Arjona-Medina et al., 2019). Very large clusters are removed and small ones merged, giving 19 clusters corresponding to events, which are characterized by inventory changes. Finally, demonstrations are mapped to sequences of events. (2) The scoring matrix is computed according to (II). (3) The ten shortest demonstrations that obtained a diamond are aligned by ClustalW with zero gap penalties and no biological options. (4) The multiple alignment gives a profile model and a PSSM. (5) The reward is redistributed via differences of profile alignment scores of consecutive sub-sequences according to Eq. (1) using the PSSM. Based on the reward redistribution, we define sub-goals. Sub-goals are identified as profile model positions that obtain an average redistributed reward above a threshold for the demonstrations. Demonstration sub-sequences between sub-goals are considered as demonstrations for the sub-tasks. New sub-sequences generated by the agent are aligned to the profile model to determine whether a sub-goal is achieved. The redistributed reward between two sub-goals is given at the end of the sub-sequence; therefore, the sub-tasks also have an episodic reward. Fig. 4 shows how sub-goals are identified. Sub-agents are pre-trained on the demonstrations for the sub-tasks using BC, and further trained in the environment using Proximal Policy Optimization (PPO) (Schulman et al., 2018). BC pre-training corresponds to RUDDER's Q-value estimation (learning method (A) from above), while PPO corresponds to RUDDER's PPO training (learning method (B) from above).
Our main agent can perform all actions but additionally can execute sub-agents, and it learns via the redistributed reward. The main agent corresponds to and is treated like a Manager module (Vezhnevets et al., 2017). The main agent is initialized by executing sub-agents according to the alignment but can deviate from this strategy. When a sub-agent successfully completes its task, the main agent executes the next sub-agent according to the alignment. More details can be found in Appendix Sec. A.7.1. Using only ten demonstrations, Align-RUDDER is able to learn to mine a diamond. A diamond is obtained in 0.1% of the cases. With 0.5 success probability for each of the 31 extracted sub-tasks (skilled agents, not random agents), the resulting success rate for mining the diamond would be 4.66 × 10^-10. Tab. 1 shows a comparison of methods on the MineCraft MineRL dataset by the maximum item score (Milani et al., 2020). Results are taken from (Milani et al., 2020), in particular from Figure 2, and completed by (Skrynnik et al., 2019; Kanervisto et al., 2020; Scheller et al., 2020). Align-RUDDER was not evaluated during the MineCraft MineRL challenge, but it follows the timestep limit (8 million) imposed by the challenge. Align-RUDDER did not receive the intermediate rewards provided by the challenge that hint at sub-tasks, and thus tries to solve a more difficult task. Recently, ForgER++ (Skrynnik et al., 2020) was able to mine a diamond in 0.0667% of the cases. We do not include it in Table 1 as it did not have any limitation on the number of timesteps. Also, ForgER++ generates sub-goals for MineCraft using a heuristic, while Align-RUDDER uses the redistributed reward to automatically obtain sub-goals.
Analysis of MineCraft Agent Behaviour. For each agent and its sub-task, we estimate the success rate and its improvement during fine-tuning by averaging over return of multiple runs (see Fig. 6). For earlier sub-tasks, the agent has a relatively higher sub-task success rate. This also corresponds to the agent having access to much more data for earlier sub-tasks. During learning from demonstrations, much less data is available for training for later sub-tasks, as not all expert demonstrations achieve the later tasks. During online training using reinforcement learning, an agent has to successfully complete all earlier sub-tasks to generate trajectories for later sub-tasks. This is exponentially difficult. Lack of demonstrations and difficulty of the learned agent to generate data for later sub-tasks leads to degradation of the success rate in MineCraft.
5 RELATED WORK
Learning from demonstrations has been widely studied over the last 50 years (Billard et al., 2008). An example is imitation learning, which uses supervised techniques when the number of demonstrations is large enough (Michie et al., 1990; Pomerleau, 1991; Michie & Camacho, 1994; Schaal, 1996; Kakade & Langford, 2002). However, policies trained with imitation learning tend to drift away from demonstration trajectories due to a distribution shift (Ross & Bagnell, 2010). This effect can be mitigated (Daumé III et al., 2009; Ross & Bagnell, 2010; Ross et al., 2011; Judah et al., 2014; Sun et al., 2017;
2018). Many approaches use demonstrations for initialization, e.g. of policy networks (Taylor et al., 2011; Silver et al., 2016), value function networks (Hester et al., 2017; 2018), both networks (Zhang & Ma, 2018; Nair et al., 2018), or an experience replay buffer (Hosu & Rebedea, 2016). Beyond initialization, demonstrations are used to define constraints (Kim et al., 2013), generate sub-goals (Eysenbach et al., 2019), enforce regularization (Reddy et al., 2020), guide exploration (Subramanian et al., 2016; Jing et al., 2019), or shape rewards (Judah et al., 2014; Brys et al., 2015; Suay et al., 2016). Demonstrations may serve for inverse reinforcement learning (Ng & Russell, 2000; Abbeel & Ng, 2004; Ho & Ermon, 2016), which aims at learning a (non-sparse) reward function that best explains the demonstrations. Learning reward functions requires a large number of demonstrations (Syed & Schapire, 2007; Ziebart et al., 2008; Silva et al., 2019). Some approaches rely on few-shot or/and meta learning (Duan et al., 2017; Finn et al., 2017; Zhou et al., 2020). However, few-shot and meta learning demand a large set of auxiliary tasks or prerecorded data. Concluding, most methods that learn from demonstrations rely on the availability of many demonstrations (Khardon, 1999; Lopes et al., 2009), in particular, if using deep learning methods (Bengio & Lecun, 2007; Lakshminarayanan et al., 2016). Some methods can learn on few demonstrations like Soft Q Imitation Learning (SQIL) (Reddy et al., 2020), Generative Adversarial Imitation Learning (GAIL) (Ho & Ermon, 2016), and Deep Q-learning from Demonstrations (DQfD) (Hester et al., 2018).
6 DISCUSSION AND CONCLUSION
Discussion. Firstly, reward redistributions do not change the optimal policies (see Theorem 1 in the Appendix). Thus, suboptimal reward redistributions due to alignment errors or due to choosing events that are non-essential for reaching the goal might not speed up learning, but they also do not change the optimal policies. Secondly, while Align-RUDDER can speed up learning even in complex environments, the resulting performance depends on the quality of the alignment model. A low-quality alignment model can arise from multiple factors, one of which is having a large number of distinct events (alignment techniques work best with few events, e.g. about 20). Clustering can be used to reduce the number of events, which could also lead to a low-quality alignment model if too many relevant events are clustered together. While the optimal policy is not changed by poor demonstration alignment, the benefit of employing reward redistribution based on it diminishes. Thirdly, the alignment could fail if the demonstrations have different underlying strategies, i.e. no events are common across the demonstrations. We assume that the demonstrations follow the same underlying strategy, therefore they are similar to each other and can be aligned. However, if no underlying strategy exists, then identifying those relevant events via alignment, which should receive high redistributed rewards, may fail. In this case, reward is given at the sequence end, where the redistributed reward is corrected, which leads to an episodic reward without reducing the delay of the rewards or speeding up learning.
Conclusions. We have introduced Align-RUDDER to solve highly complex tasks with delayed and sparse reward from few demonstrations. We have shown experimentally that Align-RUDDER outperforms state of the art methods designed for learning from demonstrations in the regime of few demonstrations. On the MineCraft ObtainDiamond task, Align-RUDDER is, to the best of our knowledge, the first pure learning method to mine a diamond.
ETHICS STATEMENT
Impact on ML and related scientific fields. Our research has the potential to positively impact a wide variety of fields of life due to its general applicability. Most importantly, it has the potential to reduce the cost for training and deploying agents in real world applications and therefore enable systems that have not been possible until now.
However, any new development in machine learning can be applied for good or for bad. Our system can be used for medical applications where it can save life but it could be used for malevolent systems. It is the society that decides how new technology is employed. However, we as scientist have to inform the society and the decision makers about our technologies. We have to show the limits of our technology, to give ideas of possible applications, to point out possible misuse or erroneous operation of our new technology.
Impact on society. A big danger is that users rely too much on our new approach and use it without reflecting on the outcomes. For example, in medical treatment decisions, doctors may rely on the technical system and push the responsibility toward the machine: “The machine suggested this treatment, therefore it is not my fault”. Another example is self-driving cars, where we see that drivers become more careless even if they are supposed to pay attention and keep their hands on the steering wheel. They trust too much in the technology, even if the technology does not justify this trust or is not mature.
Finally, our method can be deployed in companies for job automation. Therefore, there is the danger that some people lose their jobs, particularly those whose work is to perform predictable and repetitive tasks. An often-used example is the taxi driver who would lose their job because of self-driving cars. The same holds for many jobs in the production industry where automation can replace jobs. However, all industrialization has led to the loss of some jobs while new jobs have been created.
Consequences of failures of the method. Depending on the application area, a failure of this method might be of lesser concern, such as a failed execution of a computer program. If our method is employed within a larger automation system, a failure can result in damages such as a car accident. However, this holds for almost all reinforcement learning methods, and usage and testing falls within the responsibility of the application area. We note that in this work, the method was only used in computer game environments.
Leveraging of biases in the data and potential discrimination. Our proposed method relies on human demonstrations and thereby human decisions, which are usually strongly biased. Like almost all machine learning methods trained on human-influenced data, our method could learn to use and exploit those biases and make similar decisions (Solaiman et al., 2019). Therefore, the responsible use of our method depends on a careful selection of the training data and awareness of the potential biases within those.
REPRODUCIBILITY STATEMENT
Code for experiments on the FourRooms and EightRooms environment is included as supplementary material. The README contains step-by-step instructions to set up an environment and run the experiments. We have specified all the training details, e.g. hyperparameters and how they were chosen, in the Appendix (see Section A.6). We trained 100 replicates for each datapoint of the first set of experiments; the results are shown in Fig. 5. Using the code in the supplementary material, it is quite easy to reproduce our results for these experiments.
We also include code for the experiments done for MineCraft in the supplementary materials. All the preprocessing steps, hyperparameters and other implementation details are given in the Appendix (See Section A.7).
We also provide a deeper overview of the RUDDER (Arjona-Medina et al., 2019) theory in the Appendix (See Section A.2) as it is important for many design choices in Align-RUDDER.
Finally, a video showcasing the MineCraft agent is also provided as supplementary material.
A APPENDIX
CONTENTS OF THE APPENDIX
A.1 Introduction to the Appendix . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
A.2 Review Reward Redistribution . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
A.3 The Five Steps of Align-RUDDER’s Reward Redistribution . . . . . . . . . . . . . 25
A.4 Sequence Alignment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
A.5 Extended Related Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
A.6 Artificial Task Experiments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
A.6.1 Hyperparameter Selection . . . . . . . . . . . . . . . . . . . . . . . . . . 29
A.6.2 Figures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
A.6.3 Artificial Task p-values . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
A.6.4 Stochastic Environments . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
A.6.5 Changing number of Clusters . . . . . . . . . . . . . . . . . . . . . . . . 32
A.6.6 Key-Event Detection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
A.7 Minecraft Experiments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
A.7.1 MineCraft . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
A.7.2 Related Work and Steps Towards a General Agent . . . . . . . . . . . . . 34
A.7.3 The Five Steps of Align-RUDDER Demonstrated on Minecraft . . . . . . 36
A.7.4 Implementation of our Algorithm for Minecraft . . . . . . . . . . . . . . . 38
A.7.5 Policy and Value Network Architecture . . . . . . . . . . . . . . . . . . . 39
A.7.6 Imitation Learning of Sub-Task Agents . . . . . . . . . . . . . . . . . . . 40
A.7.7 Reinforcement Learning on Sub-Task Agents . . . . . . . . . . . . . . . . 41
A.8 Reproducing the Artificial Task Results . . . . . . . . . . . . . . . . . . . . . . . 41
A.9 Software Libraries . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
A.10 Compute . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
LIST OF FIGURES
A.2 Clusters formed in the FourRooms and EightRooms environment . . . . . . . . . . 29
A.3 Clusters formed in the FourRooms and EightRooms environment . . . . . . . . . . 30
A.4 Clusters formed in the FourRooms and EightRooms environment . . . . . . . . . . 30
A.5 FourRooms and EightRooms environments . . . . . . . . . . . . . . . . . . . . . 31
A.6 Reward redistribution for the FourRooms and EightRooms environments . . . . . . 31
A.11 Step (I): Define events and map demonstrations into sequences of events. First, we extract the sequence of states from human demonstrations, transform images into feature vectors using a pre-trained network and transform them into a sequence of consecutive state deltas (concatenating image feature vectors and inventory states). We cluster the resulting state deltas and remove clusters with a large number of members and merge smaller clusters. In the case of demonstrations for the ObtainDiamond task in Minecraft the resulting clusters correspond to obtaining specific resources and items required to solve the task. Then we map the demonstrations to sequences of events. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
A.12 Step (II): Construct a scoring matrix using event probabilities from demonstrations for diagonal elements and setting off-diagonal to a constant value. The scores in the diagonal position are proportional to the inverse of the event frequencies. Thus, aligning rare events has higher score. Darker colors signify higher score values. . . 36
A.13 Step (III) Perform multiple sequence alignment (MSA) of the demonstrations. The MSA algorithm maximizes the pairwise sum of scores of all alignments. The score of an alignment at each position is given by the scoring matrix. As the off-diagonal entries are negative, the algorithm will always try to align an event to itself, while giving preference to events which give higher scores. . . . . . . . . . . . . . . . . 37
A.14 Step (IV) Compute a position-specific scoring matrix (PSSM). This matrix can be computed using the MSA (Step (III)) and the scoring matrix (Step (II)). Every column entry is for a position from the MSA. The score at a position (column) and for an event (row) depends on the frequency of that event at that position in the MSA. For example, the event in the last position is present in all the sequences, and thus gets a high score at the last position. But it is absent in the remaining position, and thus gets a score of zero elsewhere. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
A.15 Step (V) A new sequence is aligned step by step to the profile model using the PSSM, resulting in an alignment score for each sub-sequence. The redistributed reward is then proportional to the difference of scores of subsequent alignments. . . . . . . . 37
A.16 Conceptual overview of our MineRL agent . . . . . . . . . . . . . . . . . . . . . . 38
A.17 Conceptual architecture of Align-RUDDER MineRL policy and value networks . . 39
A.18 Discretization and interpolation of camera angles . . . . . . . . . . . . . . . . . . 40
A.19 Mapping of clusters to letters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
A.20 Trajectory replay given by an exemplary consensus . . . . . . . . . . . . . . . . . 41
A.1 INTRODUCTION TO THE APPENDIX
This is the appendix to the paper “Align-RUDDER: Learning from few Demonstrations by Reward Redistribution”. The appendix aims at supporting the main document and provides more detailed information about the implementation of our method for different tasks. The content of this document is summarized as follows:
• Section A.3 describes the five steps of Align-RUDDER’s reward redistribution in more detail. In particular, the scoring systems are described in more detail.
• Section A.4 provides a brief overview of sequence alignment methods and the hyperparameters used in our experiments.
• Section A.6 provides figures and tables to support the results of the experiments in Artificial Tasks (I) and (II).
• Section A.7 explains in detail the experiments conducted in the Minecraft ObtainDiamond task.
A.2 REVIEW REWARD REDISTRIBUTION
Reward redistribution and return decomposition are concepts introduced in RUDDER but also apply to Align-RUDDER as it is a variant of RUDDER. Reward redistribution based on return decomposition eliminates – or at least mitigates – delays of rewards while preserving the same optimal policies. Align-RUDDER is justified by the theory of return decomposition and reward redistribution when using multiple sequence alignment for constructing a reward redistribution model. In this section, we review the concepts of return decomposition and reward redistribution.
Preliminaries. We consider a finite MDP defined by the 5-tuple P = (S, A, R, p, γ), where the state space S and the action space A are sets of finite states s and actions a, and R is the set of bounded rewards r. For a given time step t, the corresponding random variables are S_t, A_t and R_{t+1}. Furthermore, P has transition-reward distributions p(S_{t+1} = s', R_{t+1} = r | S_t = s, A_t = a), and a discount factor γ ∈ (0, 1], which we keep at γ = 1. A Markov policy π(a | s) is a probability of an action a given a state s. We consider MDPs with finite time horizon or with an absorbing state. The discounted return of a sequence of length T at time t is G_t = \sum_{k=0}^{T-t} γ^k R_{t+k+1}. As usual, the Q-function for a given policy π is q^π(s, a) = E_π[G_t | S_t = s, A_t = a]. E_π[x | s, a] is the expectation of x, where the random variable is a sequence of states, actions, and rewards that is generated with transition-reward distribution p, policy π, and starting at (s, a). The goal is to find an optimal policy π* = argmax_π E_π[G_0] maximizing the expected return at t = 0. We assume that the states s are time-aware (time t can be extracted from each state) in order to assure stationary optimal policies. According to Proposition 4.4.3 in Puterman (2005), a deterministic optimal policy π* exists.
Definitions. A sequence-Markov decision process (SDP) is defined as a decision process that has Markov transition probabilities but a reward probability that is not required to be Markov. Two SDPs P̃ and P with different reward probabilities are return-equivalent if they have the same expected return at t = 0 for each policy π, and strictly return-equivalent if they additionally have the same expected return for every episode. Since for every π the expected return at t = 0 is the same, return-equivalent SDPs have the same optimal policies. A reward redistribution is a procedure that —for a given sequence of a delayed reward SDP P̃— redistributes the realization or expectation of its return G̃_0 along the sequence. This yields a new SDP P with R as random variable for the redistributed reward and the same optimal policies as P̃:

Theorem 1 (Arjona-Medina et al. (2019)). Both the SDP P̃ with delayed reward R̃_{t+1} and the SDP P with redistributed reward R_{t+1} have the same optimal policies.
Proof. The proof can be found in (Arjona-Medina et al., 2019).
The delay of rewards is captured by the expected future rewards κ(m, t−1) at time (t−1). κ is defined as κ(m, t−1) := E_π[\sum_{τ=0}^{m} R_{t+1+τ} | s_{t−1}, a_{t−1}], that is, at time (t−1) the expected sum of future rewards from R_{t+1} to R_{t+1+m} but not the immediate reward R_t. A reward redistribution is defined to be optimal if κ(T−t−1, t) = 0 for 0 ≤ t ≤ T−1, which is equivalent to E_π[R_{t+1} | s_{t−1}, a_{t−1}, s_t, a_t] = q̃^π(s_t, a_t) − q̃^π(s_{t−1}, a_{t−1}):

Theorem 2 (Arjona-Medina et al. (2019)). We assume a delayed reward MDP P̃, with episodic reward. A new SDP P is obtained by a second order Markov reward redistribution, which ensures that P is return-equivalent to P̃. For a specific π, the following two statements are equivalent:

(I) κ(T−t−1, t) = 0, i.e. the reward redistribution is optimal,

(II) E_π[R_{t+1} | s_{t−1}, a_{t−1}, s_t, a_t] = q̃^π(s_t, a_t) − q̃^π(s_{t−1}, a_{t−1})   (2)

An optimal reward redistribution fulfills for 1 ≤ t ≤ T and 0 ≤ m ≤ T−t: κ(m, t−1) = 0.
Proof. The proof can be found in (Arjona-Medina et al., 2019).
This theorem shows that an optimal reward redistribution relies on steps q̃π(st, at)− q̃π(st−1, at−1) of the Q-function. Identifying the largest steps in the Q-function detects the largest rewards that have to be redistributed, which makes the largest progress towards obtaining an optimal reward redistribution.
Corollary 1 (Higher order Markov reward redistribution optimality conditions). We assume a delayed reward MDP P̃ , with episodic reward. A new SDP P is obtained by a higher order Markov reward redistribution. The reward redistribution ensures that P is return-equivalent to P̃ . If for a specific π
Eπ [Rt+1 | st−1, at−1, st, at] = q̃π(st, at) − q̃π(st−1, at−1) (3)
holds, then the higher order reward redistribution Rt+1 is optimal, that is, κ(T − t− 1, t) = 0.
Proof. The proof is just PART (II) of the proof of Theorem 2 in (Arjona-Medina et al., 2019). We repeat it here for completeness.
We assume that
Eπ [Rt+1 | st−1, at−1, st, at] = ht = q̃π(st, at) − q̃π(st−1, at−1) , (4)
where we abbreviate the expected Rt+1 by ht:
Eπ [Rt+1 | st−1, at−1, st, at] = ht . (5)
The expectations Eπ [. | st−1, at−1] like Eπ [ R̃T+1 | st−1, at−1 ] are expectations over all episodes
that contain the state-action pair (st−1, at−1) at time t−1. The expectations Eπ [. | st−1, at−1, st, at] like Eπ [ R̃T+1 | st−1, at−1, st, at ] are expectations over all episodes that contain the state-action
pairs (st−1, at−1) at time t− 1 and (st, at) at time t. The Q-values are defined as
q̃^π(s_t, a_t) = E_π[\sum_{k=0}^{T−t} R̃_{t+k+1} | s_t, a_t] = E_π[R̃_{T+1} | s_t, a_t] ,   (6)

q^π(s_t, a_t) = E_π[\sum_{k=0}^{T−t} R_{t+k+1} | s_t, a_t] ,   (7)
which are expectations over all trajectories that contain (st, at) at time t. Since P̃ is Markov, for q̃π only the suffix trajectories beginning at (st, at) enter the expectation.
The definition of κ(m, t−1) for 1 ≤ t ≤ T and 0 ≤ m ≤ T−t was κ(m, t−1) = E_π[\sum_{τ=0}^{m} R_{t+1+τ} | s_{t−1}, a_{t−1}]. We have to prove κ(T−t−1, t) = 0.

First, we consider m = 0 and 1 ≤ t ≤ T, therefore κ(0, t−1) = E_π[R_{t+1} | s_{t−1}, a_{t−1}]. Since the original MDP P̃ has episodic reward, we have r̃(s_{t−1}, a_{t−1}) = E[R̃_t | s_{t−1}, a_{t−1}] = 0 for 1 ≤ t ≤ T. Therefore, we obtain:

q̃^π(s_{t−1}, a_{t−1}) = r̃(s_{t−1}, a_{t−1}) + \sum_{s_t, a_t} p(s_t, a_t | s_{t−1}, a_{t−1}) q̃^π(s_t, a_t)   (8)
= \sum_{s_t, a_t} p(s_t, a_t | s_{t−1}, a_{t−1}) q̃^π(s_t, a_t) .

Using this equation we obtain for 1 ≤ t ≤ T:

κ(0, t−1) = E_π[R_{t+1} | s_{t−1}, a_{t−1}]   (9)
= E_{s_t, a_t}[q̃^π(s_t, a_t) − q̃^π(s_{t−1}, a_{t−1}) | s_{t−1}, a_{t−1}]
= \sum_{s_t, a_t} p(s_t, a_t | s_{t−1}, a_{t−1}) (q̃^π(s_t, a_t) − q̃^π(s_{t−1}, a_{t−1}))
= q̃^π(s_{t−1}, a_{t−1}) − \sum_{s_t, a_t} p(s_t, a_t | s_{t−1}, a_{t−1}) q̃^π(s_{t−1}, a_{t−1})
= q̃^π(s_{t−1}, a_{t−1}) − q̃^π(s_{t−1}, a_{t−1}) = 0 .
Next, we consider the expectation of \sum_{τ=0}^{m} R_{t+1+τ} for 1 ≤ t ≤ T and 1 ≤ m ≤ T−t (for m > 0):

κ(m, t−1) = E_π[\sum_{τ=0}^{m} R_{t+1+τ} | s_{t−1}, a_{t−1}]   (10)
= E_π[\sum_{τ=0}^{m} (q̃^π(s_{τ+t}, a_{τ+t}) − q̃^π(s_{τ+t−1}, a_{τ+t−1})) | s_{t−1}, a_{t−1}]
= E_π[q̃^π(s_{t+m}, a_{t+m}) − q̃^π(s_{t−1}, a_{t−1}) | s_{t−1}, a_{t−1}]
= E_π[ E_π[\sum_{τ=t+m}^{T} R̃_{τ+1} | s_{t+m}, a_{t+m}] | s_{t−1}, a_{t−1}] − E_π[ E_π[\sum_{τ=t−1}^{T} R̃_{τ+1} | s_{t−1}, a_{t−1}] | s_{t−1}, a_{t−1}]
= E_π[R̃_{T+1} | s_{t−1}, a_{t−1}] − E_π[R̃_{T+1} | s_{t−1}, a_{t−1}] = 0 .
We used that R̃t+1 = 0 for t < T .
For the particular cases t = τ + 1 and m = T − t = T − τ − 1 we have

κ(T − τ − 1, τ) = 0 .   (11)

That is exactly what we wanted to prove.
Corollary 1 explicitly states that the optimality criterion ensures an optimal reward redistribution even if the reward redistribution is higher order Markov. For Align-RUDDER we may obtain a higher order Markov reward redistribution due to the profile alignment of the sub-sequences.
Corollary 2 (Higher order Markov reward redistribution optimality representation). We assume a delayed reward MDP P̃, with episodic reward, and that a new SDP P is obtained by a higher order Markov reward redistribution. The reward redistribution ensures that P is strictly return-equivalent to P̃. We assume that the reward redistribution is optimal, that is, κ(T − t − 1, t) = 0. If the condition

E_π[\sum_{τ=0}^{T−t−1} R_{t+2+τ} | s_t, a_t] = E_π[\sum_{τ=0}^{T−t−1} R_{t+2+τ} | s_0, a_0, . . . , s_t, a_t]   (12)

holds, then

E_π[R_{t+1} | s_{t−1}, a_{t−1}, s_t, a_t] = q̃^π(s_t, a_t) − q̃^π(s_{t−1}, a_{t−1}) .   (13)
Proof. By and large, the proof is PART (I) of the proof of Theorem 2 in (Arjona-Medina et al., 2019). We repeat it here for completeness.
We assume that the reward redistribution is optimal, that is,
κ(T − t− 1, t) = 0 . (14)
We abbreviate the expected Rt+1 by ht:
Eπ [Rt+1 | st−1, at−1, st, at] = ht . (15)
In (Arjona-Medina et al., 2019) Lemma A4 is as follows.
Lemma 1. Two strictly return-equivalent SDPs P̃ and P have the same expected return for each start state-action sub-sequence (s_0, a_0, . . . , s_t, a_t), 0 ≤ t ≤ T:

E_π[G̃_0 | s_0, a_0, . . . , s_t, a_t] = E_π[G_0 | s_0, a_0, . . . , s_t, a_t] .   (16)

The assumptions of Lemma 1 hold for the delayed reward MDP P̃ and the redistributed reward SDP P, since a reward redistribution ensures strictly return-equivalent SDPs. Therefore, for a given state-action sub-sequence (s_0, a_0, . . . , s_t, a_t), 0 ≤ t ≤ T:
E_π[G̃_0 | s_0, a_0, . . . , s_t, a_t] = E_π[G_0 | s_0, a_0, . . . , s_t, a_t]   (17)

with G_0 = \sum_{τ=0}^{T} R_{τ+1} and G̃_0 = R̃_{T+1}. The Markov property of the MDP P̃ ensures that the future reward from t+1 on is independent of the past sub-sequence s_0, a_0, . . . , s_{t−1}, a_{t−1}:

E_π[\sum_{τ=0}^{T−t} R̃_{t+1+τ} | s_t, a_t] = E_π[\sum_{τ=0}^{T−t} R̃_{t+1+τ} | s_0, a_0, . . . , s_t, a_t] .   (18)

According to Eq. (12), the future reward from t+2 on is independent of the past sub-sequence s_0, a_0, . . . , s_{t−1}, a_{t−1}:

E_π[\sum_{τ=0}^{T−t−1} R_{t+2+τ} | s_t, a_t] = E_π[\sum_{τ=0}^{T−t−1} R_{t+2+τ} | s_0, a_0, . . . , s_t, a_t] .   (19)
Using these properties we obtain
q̃^π(s_t, a_t) = E_π[\sum_{τ=0}^{T−t} R̃_{t+1+τ} | s_t, a_t]   (20)
= E_π[\sum_{τ=0}^{T−t} R̃_{t+1+τ} | s_0, a_0, . . . , s_t, a_t]
= E_π[R̃_{T+1} | s_0, a_0, . . . , s_t, a_t]
= E_π[\sum_{τ=0}^{T} R̃_{τ+1} | s_0, a_0, . . . , s_t, a_t]
= E_π[G̃_0 | s_0, a_0, . . . , s_t, a_t]
= E_π[G_0 | s_0, a_0, . . . , s_t, a_t]
= E_π[\sum_{τ=0}^{T} R_{τ+1} | s_0, a_0, . . . , s_t, a_t]
= E_π[\sum_{τ=0}^{T−t−1} R_{t+2+τ} | s_0, a_0, . . . , s_t, a_t] + \sum_{τ=0}^{t} h_τ
= E_π[\sum_{τ=0}^{T−t−1} R_{t+2+τ} | s_t, a_t] + \sum_{τ=0}^{t} h_τ
= κ(T − t − 1, t) + \sum_{τ=0}^{t} h_τ
= \sum_{τ=0}^{t} h_τ .
We used the optimality condition
κ(T − t − 1, t) = E_π[\sum_{τ=0}^{T−t−1} R_{t+2+τ} | s_t, a_t] = 0 .   (21)
It follows that
Eπ [Rt+1 | st−1, at−1, st, at] = ht = q̃π(st, at) − q̃π(st−1, at−1) . (22)
This is exactly what we wanted to prove.
This corollary shows that optimal reward redistributions can be expressed as difference of Q-values if Eq. (12) holds. Eq. (12) states that the past can be averaged out. However, there may exist optimal reward redistributions for which Eq. (12) does not hold.
If the reward redistribution is optimal, the Q-values of P are given by q^π(s_t, a_t) = q̃^π(s_t, a_t) − ψ^π(s_t) and therefore P̃ and P have the same advantage function:

Theorem 3 (Arjona-Medina et al. (2019)). If the reward redistribution is optimal, then the Q-values of the SDP P are q^π(s_t, a_t) = r(s_t, a_t) and

q^π(s_t, a_t) = q̃^π(s_t, a_t) − E_{s_{t−1}, a_{t−1}}[q̃^π(s_{t−1}, a_{t−1}) | s_t] = q̃^π(s_t, a_t) − ψ^π(s_t) .   (23)
The SDP P and the original MDP P̃ have the same advantage function.
Proof. The proof can be found in (Arjona-Medina et al., 2019).
For an optimal reward redistribution only the expectation of the immediate reward r(s_t, a_t) = E_π[R_{t+1} | s_t, a_t] must be estimated. This considerably simplifies learning.

Learning methods according to Arjona-Medina et al. (2019). The redistributed reward serves as reward for a subsequent learning method, which can be Type A, B, or C as described in Arjona-Medina et al. (2019). Type A methods estimate the Q-values. They can be estimated directly according to Eq. (23) assuming an optimal redistribution (Type A variant i). Q-values can be corrected for a non-optimal reward redistribution by additionally estimating κ (Type A variant ii). Q-value estimation can use eligibility traces (Type A variant iii). Type B methods use the redistributed rewards for policy gradients like Proximal Policy Optimization (PPO) (Schulman et al., 2018). Type C methods use TD learning like Q-learning (Watkins, 1989), where immediate and future reward must be drawn together as typically done. For all these learning methods, demonstrations can be used for initialization (e.g. experience replay buffer) or pre-training (e.g. policy network with behavioral cloning). Recently, the convergence of RUDDER learning methods has been proven under commonly used assumptions (Holzleitner et al., 2020).
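As an illustration of Type A variant (i), the following minimal tabular sketch (illustrative, not the authors' code; all names are hypothetical) estimates Q-values directly as running means of the redistributed immediate reward, which is justified when the redistribution is close to optimal:

```python
# Type A, variant (i): with an (approximately) optimal reward redistribution,
# q^pi(s_t, a_t) = E[R_{t+1} | s_t, a_t], so Q-values can be estimated as
# running means of the redistributed immediate reward.
from collections import defaultdict

class DirectQEstimator:
    def __init__(self):
        self.q = defaultdict(float)        # estimated Q-values
        self.counts = defaultdict(int)     # visit counts per (state, action)

    def update(self, state_actions, redistributed_rewards):
        # state_actions: list of (s, a); redistributed_rewards: list of R_{t+1} of one episode
        for (s, a), r in zip(state_actions, redistributed_rewards):
            self.counts[(s, a)] += 1
            self.q[(s, a)] += (r - self.q[(s, a)]) / self.counts[(s, a)]   # incremental mean

    def greedy_action(self, state, actions):
        return max(actions, key=lambda a: self.q[(state, a)])
```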
Non-optimal reward redistribution and Align-RUDDER. According to Theorem 1, non-optimal reward redistributions do not change the optimal policies. The value κ(T − t− 1, t) measures the remaining delayed reward. The smaller κ is, the faster is the learning process. For Monte Carlo (MC) estimates, smaller κ reduces the variance of the future rewards, and, therefore the variance of the estimation. For temporal difference (TD) estimates, smaller κ reduces the amount of information that has to flow back. Align-RUDDER dramatically reduces the amount of delayed rewards by identifying key events via multiple sequence alignment, to which reward is redistributed. For an episodic MDP, a reward that is redistributed to time t reduces all κ(m, τ) with t 6 τ < T by the expectation of the reward. Therefore, in most cases Align-RUDDER makes κ-values much smaller.
A.3 THE FIVE STEPS OF ALIGN-RUDDER’S REWARD REDISTRIBUTION
The new reward redistribution approach consists of five steps, see Fig. A.1: (I) Define events to turn episodes of state-action sequences into sequences of events. (II) Determine an alignment scoring scheme, so that relevant events are aligned to each other. (III) Perform a multiple sequence alignment (MSA) of the demonstrations. (IV) Compute the profile model and the PSSM. (V) Redistribute the reward: Each sub-sequence τt of a new episode τ is aligned to the profile. The redistributed reward Rt+1 is proportional to the difference of scores S based on the PSSM given in step (IV), i.e. Rt+1 ∝ S(τt)− S(τt−1).
(I) Defining Events. Alignment techniques assume that sequences consist of few symbols, e.g. about 20 symbols, the events. It is crucial to keep the number of events small in order to increase the difference between a random alignment and an alignment of demonstrations. If there are many events, then two demonstrations might have few events that can be matched, which cannot be well distinguished from random alignments. This effect is known in bioinformatics as “Inconsistency of Maximum Parsimony” (Felsenstein, 1978). The events can be the original state-action pairs, clusters thereof, or other representations of state-action pairs, e.g. indicating changes of inventory, health, energy, skills etc. In general, we define events as a cluster of states or state-actions. A sequence of events is obtained from a state-action sequence by substituting states or state-actions by their cluster identifier. In order to cluster states, a similarity measure between them is required. We suggest to use the “successor representation” (Dayan, 1993) of the states, which gives a similarity matrix based on how connected two states are given a policy. Successor representations have been used before (Machado et al., 2017; Ramesh et al., 2019) to obtain important events for option learning. For computing the successor representation, we use the demonstrations combined with state-action sequences generated by a random policy. For high dimensional state spaces “successor features” (Barreto et al., 2017) can be used. We use similarity-based clustering methods like affinity propagation (AP) (Frey & Dueck, 2007). For AP the similarity matrix does not have to be symmetric and the number of clusters need not be known. State-action pairs (s, a) are mapped to events e.
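An illustrative sketch of step (I) for a small, enumerable state space is given below; it is not the paper's exact pipeline. It estimates the successor representation from observed transitions and clusters its rows with affinity propagation (the paper feeds the successor representation to AP as a similarity; this sketch uses its rows as feature vectors, and the discount used for the SR is an assumption):

```python
import numpy as np
from sklearn.cluster import AffinityPropagation

def successor_representation(transitions, n_states, gamma=0.99):
    # transitions: list of (state, next_state) pairs from demonstrations plus a random policy
    counts = np.zeros((n_states, n_states))
    for s, s_next in transitions:
        counts[s, s_next] += 1
    row_sums = np.maximum(counts.sum(axis=1, keepdims=True), 1.0)
    P = counts / row_sums                                  # empirical transition matrix
    return np.linalg.inv(np.eye(n_states) - gamma * P)     # SR = (I - gamma * P)^(-1)

def states_to_events(transitions, n_states):
    sr = successor_representation(transitions, n_states)
    # affinity propagation does not require the number of clusters in advance
    labels = AffinityPropagation(random_state=0).fit(sr).labels_
    return labels                                          # event identifier for every state
```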
(II) Determining the Alignment Scoring System. Alignment algorithms distinguish similar sequences from dissimilar sequences using a scoring system. A scoring matrix S has entries s_{i,j} that give the score for aligning event i with j. The MSA score S_MSA of a multiple sequence alignment is the sum of all pairwise scores: S_MSA = \sum_{i,j,\,i<j} \sum_{t=0}^{L} s_{x_{i,t}, x_{j,t}}, where x_{i,t} means that event x_{i,t} is at position t for sequence τ_i = e_{i,0:T} in the alignment, analogously for x_{j,t} and the sequence τ_j = e_{j,0:T}, and L is the alignment length. Note that L ≥ T and x_{i,t} ≠ e_{i,t}, since gaps are present in the alignment. In the alignment, events should have the same probability of being aligned as they would have if we knew the strategy and aligned demonstrations accordingly. The theory of high scoring segments gives a scoring scheme with these alignment probabilities (Karlin & Altschul, 1990; Karlin et al., 1990; Altschul et al., 1990). Event i is observed with probability p_i in the demonstrations, therefore a random alignment aligns event i with j with probability p_i p_j. An alignment algorithm maximizes the MSA score S_MSA and, thereby, aligns events i and j with probability q_{ij} for demonstrations. High values of q_{ij} mean that the MSA often aligns events i and j in the demonstrations using the scoring matrix S with entries s_{i,j}. According to Theorem 2 and Equation [3] in Karlin & Altschul (1990), asymptotically with the sequence length, we have s_{i,j} = ln(q_{ij}/(p_i p_j))/λ*, where λ* is the unique positive root of \sum_{i=1}^{n} \sum_{j=1}^{n} p_i p_j exp(λ s_{i,j}) = 1 (Equation [4] in Karlin & Altschul (1990)). We can now choose a desired probability q_{ij} and then compute the scoring matrix S with entries s_{i,j}. High values of q_{ij} should indicate relevant events for the strategy. A priori, we only know that a relevant event should be aligned to itself, while we do not know which events are relevant. Therefore we set q_{ij} to large values for every i = j and to low values for i ≠ j. Concretely, we set q_{ij} = p_i − ε for i = j and q_{ij} = ε/(n−1) for i ≠ j, where ε is a small positive constant and n is the number of different possible events. Events with smaller p_i receive a higher score s_{i,i} when aligned to themselves since this self-match is less often observed when randomly matching events (p_i p_i is the probability of a random self-match). Any prior knowledge about events should be incorporated into q_{ij}.
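The following sketch turns this construction into code under the simplifying choice λ* = 1, which already satisfies the constraint because the chosen q_{ij} sum to one; ε must be smaller than the smallest event frequency, and all names are illustrative:

```python
import numpy as np

def scoring_matrix(p, eps=1e-3):
    # p: event frequencies p_i observed in the demonstrations (a vector summing to 1)
    p = np.asarray(p, dtype=float)
    n = len(p)
    q = np.full((n, n), eps / (n - 1))       # q_ij for i != j
    np.fill_diagonal(q, p - eps)             # q_ii = p_i - eps
    return np.log(q / np.outer(p, p))        # s_ij = ln(q_ij / (p_i p_j)) with lambda* = 1
```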
(III) Multiple sequence alignment (MSA). MSA first produces pairwise alignments between all demonstrations. Then, a guiding tree (agglomerative hierarchical clustering) is produced via hierarchical clustering sequences, according to their pairwise alignment scores. Demonstrations which follow the same strategy appear in the same cluster in the guiding tree. Each cluster is aligned separately via MSA to address different strategies. However, if there is not a cluster of demonstrations, then the alignment will fail. MSA methods like ClustalW (Thompson et al., 1994) or MUSCLE (Edgar, 2004) can be used.
(IV) Position-Specific Scoring Matrix (PSSM) and Profile. From the final alignment, we construct a) an MSA profile (column-wise event frequencies q_{i,t}) and b) a PSSM (Stormo et al., 1982), which is used for aligning new sequences to the profile of the MSA. To compute the PSSM (column-wise scores s_{i,t}), we apply Theorem 2 and Equation [3] in Karlin & Altschul (1990). Event i is observed with probability p_i in the data. For each position t in the alignment, we compute q_{i,t}, which indicates the frequency of event i at position t. The PSSM is s_{i,t} = ln(q_{i,t}/p_i)/λ*_t, where λ*_t is the single unique positive root of \sum_{i=1}^{n} p_i exp(λ s_{i,t}) = 1 (Equation [1] in Karlin & Altschul (1990)). If we align a new sequence that follows the underlying strategy (a new demonstration) to the profile model, we would see that event i is aligned to position t in the profile with probability q_{i,t}.
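A sketch of step (IV) under the same λ*_t = 1 simplification (valid because each normalized profile column of q sums to one); the pseudo-count and the gap handling are assumptions of this sketch:

```python
import numpy as np

def pssm_from_msa(msa, p, pseudo=1e-6):
    # msa: (num_demos, L) array of event ids per alignment column (gaps may use an extra id)
    # p:   background frequency p_i of every event id, shape (n_events,)
    p = np.asarray(p, dtype=float)
    n_events, L = len(p), msa.shape[1]
    q = np.zeros((n_events, L))
    for t in range(L):
        for e in range(n_events):
            q[e, t] = np.mean(msa[:, t] == e)                        # frequency of event e at column t
    q = (q + pseudo) / (q + pseudo).sum(axis=0, keepdims=True)       # avoid log(0), renormalize
    return q, np.log(q / p[:, None])                                 # profile q_{i,t} and PSSM s_{i,t}
```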
(V) Reward Redistribution. The reward redistribution is based on the profile model. A sequence τ = e_{0:T} (e_t is the event at position t) is aligned to the profile, which gives the score S(τ) = \sum_{t=0}^{L} s_{x_t,t}. Here, s_{i,t} is the alignment score for event i at position t, and x_t is the event of τ at position t in the alignment. L is the profile length, where L ≥ T and x_t ≠ e_t, because of gaps in the alignment. If τ_t = e_{0:t} is the prefix sequence of τ of length t+1, then the reward redistribution R_{t+1} for 0 ≤ t ≤ T is

R_{t+1} = (S(τ_t) − S(τ_{t−1})) C = g((s, a)_{0:t}) − g((s, a)_{0:t−1}),   R_{T+2} = G̃_0 − \sum_{t=0}^{T} R_{t+1},   (24)

where C = E_demo[G̃_0] / E_demo[\sum_{t=0}^{T} S(τ_t) − S(τ_{t−1})], G̃_0 = \sum_{t=0}^{T} R̃_{t+1} is the original return of the sequence τ, and S(τ_{−1}) = 0. E_demo is the expectation over demonstrations, and C scales R_{t+1} to the range of G̃_0. R_{T+2} is the correction of the redistributed reward (Arjona-Medina et al., 2019), with zero expectation for demonstrations: E_demo[R_{T+2}] = 0. Since τ_t = e_{0:t} and e_t = f(s_t, a_t), we can set g((s, a)_{0:t}) = S(τ_t) C. We ensure strict return equivalence, since G_0 = \sum_{t=0}^{T+1} R_{t+1} = G̃_0. The redistributed reward depends only on the past, that is, R_{t+1} = h((s, a)_{0:t}). For computational efficiency, the alignment of τ_{t−1} can be extended to one for τ_t, like exact matches are extended to high-scoring sequence pairs with the BLAST algorithm (Altschul et al., 1990; 1997).
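A compact sketch of step (V); the profile-alignment routine and the constant C are assumed to be supplied by steps (III) and (IV), and the function names are illustrative:

```python
import numpy as np

def redistribute_reward(event_seq, align_to_profile, C, episodic_return):
    # event_seq: events e_0..e_T of one episode; align_to_profile: PSSM-based score S(tau_t) of a prefix
    scores = np.array([align_to_profile(event_seq[:t + 1]) for t in range(len(event_seq))])
    prev = np.concatenate(([0.0], scores[:-1]))      # S(tau_{-1}) = 0
    rewards = (scores - prev) * C                    # R_{t+1} = (S(tau_t) - S(tau_{t-1})) * C
    correction = episodic_return - rewards.sum()     # R_{T+2}, zero in expectation on demonstrations
    return rewards, correction
```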
Sub-tasks. The reward redistribution identifies sub-tasks, which are alignment positions with high redistributed reward. It also determines the terminal states and automatically assigns reward for solving the sub-tasks. However, reward redistribution and Align-RUDDER cannot guarantee that the reward is Markov. For redistributed reward that is Markov, the option framework (Sutton et al., 1999), the MAXQ framework (Dietterich, 2000), or recursive composition of option models (Silver & Ciosek, 2012) can be used as subsequent approaches to hierarchical reinforcement learning.
A.4 SEQUENCE ALIGNMENT
In bioinformatics, sequence alignment identifies regions of significant similarity among different biological sequences to establish evolutionary relationships between those sequences. In 1970, Needleman and Wunsch proposed a global alignment method based on dynamic programming (Needleman & Wunsch, 1970). This approach ensures the best possible alignment given a substitution matrix, such as PAM (Dayhoff, 1978) or BLOSUM (Henikoff & Henikoff, 1992), and other parameters to penalize gaps in the alignment. The method of Needleman and Wunsch has O(mn) complexity both in memory and time, which can be prohibitive for long sequences like genomes. An optimization of this method by Hirschberg (1975) reduces memory to O(m+n), but still requires O(mn) time.
Later, Smith and Waterman developed a local alignment method for sequences (Smith & Waterman, 1981). It is a variation of Needleman and Wunsch’s method, keeping the substitution matrix and the gap-scoring scheme but setting cells in the similarity matrix with negative scores to zero. The complexity of this algorithm is O(n²m). Osamu Gotoh published an optimization of this method, running in O(mn) time (Gotoh, 1982).
The main difference between both methods is the following:
• The global alignment method by Needleman and Wunsch aligns the sequences fixing the first and the last position of both sequences. It attempts to align every symbol in the sequence, allowing some gaps, but the main purpose is to get a global alignment. This is especially useful when the two sequences are highly similar. For instance:
ATCGGATCGACTGGCTAGATCATCGCTGG CGAGCATC-ACTGTCT-GATCGACCTTAG * *** **** ** **** * * *
• As an alternative to global methods, the local method of Smith and Waterman aligns the sequences with a higher degree of freedom, allowing the alignment to start or end with gaps.
This is extremely useful when the two sequences are substantially dissimilar in general but suspected of having a highly related sub region.
ATCAAGGAGATCATCGCTGGACTGAGTGGCT----ACGTGGTATGT ATC----CGATCATCGCTGG-CTGATCGACCTTCTACGT------*** ************ **** * * ****
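To make the dynamic programs described above concrete, here is a minimal Needleman-Wunsch-style global alignment sketch with a linear gap penalty (score only; a traceback would recover the alignment itself). The `score` argument plays the role of the substitution matrix:

```python
def needleman_wunsch(x, y, score, gap=-1.0):
    n, m = len(x), len(y)
    F = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        F[i][0] = i * gap
    for j in range(1, m + 1):
        F[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            F[i][j] = max(F[i - 1][j - 1] + score(x[i - 1], y[j - 1]),  # match / mismatch
                          F[i - 1][j] + gap,                            # gap in y
                          F[i][j - 1] + gap)                            # gap in x
    return F[n][m]   # optimal global alignment score (O(mn) time and memory)
```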
A.4.0.1 Multiple Sequence Alignment algorithms. The sequence alignment algorithms by Needleman and Wunsch and Smith and Waterman are limited to aligning two sequences. The approaches for generalizing these algorithms to multiple sequences can be classified into four categories:
• Exact methods (Wang & Jiang, 1994).
• Progressive methods: ClustalW (Thompson et al., 1994), Clustal Omega (Sievers et al., 2014), T-Coffee (Notredame et al., 2000).
• Iterative and search algorithms: DIALIGN (Morgenstern, 2004), MultiAlign (Corpet, 1988).
• Local methods: eMOTIF (Mccammon & Wolynes, 1998), PROSITE (Bairoch & Bucher, 1994).
For more details, visit Sequence Comparison: Theory and methods (Chao & Zhang, 2009).
In our experiments, we use ClustalW from Biopython (Cock et al., 2009) with the following parameters:
clustalw2 -ALIGN -CLUSTERING=UPGMA -NEGATIVE -INFILE={infile} -OUTFILE={outfile} -PWMATRIX={scores} -PWGAPOPEN=0 -PWGAPEXT=0 -MATRIX={scores} -GAPOPEN=0 -GAPEXT=0 -CASE=UPPER -NOPGAP -NOHGAP -MAXDIV=0 -ENDGAPS -NOVGAP -NEWTREE={outputtree} -TYPE=PROTEIN -OUTPUT=GDE
where the PWMATRIX and MATRIX are computed according to step (II) in Sec. 3 of the main paper.
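A simple way to invoke this command from Python is via subprocess (a sketch; it assumes the clustalw2 binary is on the path and that the FASTA input file and the scoring-matrix file already exist):

```python
import subprocess

def run_clustalw(infile, outfile, scores, outputtree):
    cmd = (
        f"clustalw2 -ALIGN -CLUSTERING=UPGMA -NEGATIVE "
        f"-INFILE={infile} -OUTFILE={outfile} "
        f"-PWMATRIX={scores} -PWGAPOPEN=0 -PWGAPEXT=0 "
        f"-MATRIX={scores} -GAPOPEN=0 -GAPEXT=0 -CASE=UPPER "
        f"-NOPGAP -NOHGAP -MAXDIV=0 -ENDGAPS -NOVGAP "
        f"-NEWTREE={outputtree} -TYPE=PROTEIN -OUTPUT=GDE"
    )
    subprocess.run(cmd.split(), check=True)   # raises if clustalw2 exits with an error
```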
A.5 EXTENDED RELATED WORK
Align-RUDDER allows to identify sub-goals and sub-tasks, therefore it is related to hierarchical reinforcement learning (HRL) approaches like the option framework (Sutton et al., 1999),
| 1. What is the focus of the paper regarding RUDDER-style algorithms?
2. What are the strengths of the proposed approach, particularly in the experimental evaluation?
3. What are the weaknesses of the paper, especially in terms of writing and experimentation?
4. How does the reviewer assess the clarity, quality, novelty, and reproducibility of the paper's content? | Summary Of The Paper
Review | Summary Of The Paper
The paper considers the challenge of improving sample efficiency of RUDDER-style algorithms in sparse MDPs. Building on prior work by Arjona-Medina et al [1], the authors incorporate demonstrations of optimal trajectories from an expert into the training pipeline. Additionally, to improve the sample efficiency and stability, the authors replace the LSTM model of RUDDER with an alignment-based profile model.
The approach is evaluated on two synthetic grid-world based environments and a MineCraft based environment. On both benchmarks the proposed algorithm works better than baseline RUDDER.
Review
Strengths:
The empirical evaluation is well laid out and the experiments are described with necessary details. The authors perform careful evaluation on multiple (synthetic) environments to build intuition for and motivate the changes in the training algorithm.
Weakness:
The paper's writing could be improved to better explain the prior work and to better segregate the core contributions of the work (e.g., reward redistribution builds on RUDDER).
The experiments are restricted to navigation based grid-world style environments. It would be useful to have more comparison on other benchmark tasks like locomotion. |
ICLR | Title
Align-RUDDER: Learning From Few Demonstrations by Reward Redistribution
Abstract
Reinforcement Learning algorithms require a large number of samples to solve complex tasks with sparse and delayed rewards. Complex tasks are often hierarchically composed of sub-tasks. Solving a sub-task increases the return expectation and leads to a step in the Q-function. RUDDER identifies these steps and then redistributes reward to them, thus immediately giving reward if sub-tasks are solved. Since the delay of rewards is reduced, learning is considerably sped up. However, for complex tasks, current exploration strategies struggle with discovering episodes with high rewards. Therefore, we assume that episodes with high rewards are given as demonstrations and do not have to be discovered by exploration. Unfortunately, the number of demonstrations is typically small and RUDDER’s LSTM as a deep learning model does not learn well on these few training samples. Hence, we introduce Align-RUDDER, which is RUDDER with two major modifications. First, Align-RUDDER assumes that episodes with high rewards are given as demonstrations, replacing RUDDER’s safe exploration and lessons replay buffer. Second, we substitute RUDDER’s LSTM model by a profile model that is obtained from multiple sequence alignment of demonstrations. Profile models can be constructed from as few as two demonstrations. Align-RUDDER uses reward redistribution to speed up learning by reducing the delay of rewards. Align-RUDDER outperforms competitors on complex artificial tasks with delayed rewards and few demonstrations. On the MineCraft ObtainDiamond task, Align-RUDDER is able to mine a diamond, though not frequently.
1 INTRODUCTION
Reinforcement learning algorithms struggle with learning complex tasks that have sparse and delayed rewards (Sutton & Barto, 2018; Rahmandad et al., 2009; Luoma et al., 2017). For delayed rewards, temporal difference (TD) suffers from vanishing information (Arjona-Medina et al., 2019). On the other hand, Monte Carlo (MC) has high variance since it must average over all possible futures (Arjona-Medina et al., 2019). Monte-Carlo Tree Search (MCTS), used for Go and chess, can handle delayed and rare rewards since it has a perfect environment model (Silver et al., 2016; 2017). RUDDER (Arjona-Medina et al., 2019; 2018) has been shown to excel in model-free learning of policies when only sparse and delayed rewards are given. RUDDER requires episodes with high rewards to store them in its lessons replay buffer for learning a reward redistribution model like an LSTM network. However, for complex tasks, current exploration strategies find episodes with high rewards only after an incommensurately long time. Humans and animals obtain high reward episodes from teachers, role models, or prototypes. Along this line, we assume that episodes with high rewards are given as demonstrations. Since generating demonstrations is often tedious for humans and time-consuming for exploration strategies, typically, only a few demonstrations are available. However, RUDDER’s LSTM (Hochreiter, 1991; Hochreiter & Schmidhuber, 1997a) as a deep learning method requires many examples for learning. Therefore, we introduce Align-RUDDER, which replaces RUDDER’s LSTM with a profile model obtained from multiple sequence alignment (MSA) of the demonstrations. Profile models are well known in bioinformatics. They are used to score new sequences according to their sequence similarity to the aligned sequences. Like RUDDER, Align-RUDDER performs reward redistribution —using an alignment model— which considerably speeds up learning even if only a few demonstrations are available.
Our main contributions are:
• We suggest a reinforcement algorithm that works well for sparse and delayed rewards, where standard exploration fails but few demonstrations with high rewards are available.
• We adopt multiple sequence alignment from bioinformatics to construct a reward redistribution technique that works with few demonstrations.
• We propose a method that uses alignment techniques and reward redistribution for identifying sub-goals and sub-tasks which in turn allow for hierarchical reinforcement learning.
2 REVIEW OF RUDDER
Basic insight: Q-functions for complex tasks are step functions. Complex tasks are typically composed of sub-tasks. Therefore the Q-function of an optimal policy resembles a step function. The Q-function is the expected future return and it increases (i.e, makes a step) when a sub-task is completed. Identifying large steps in the Q-function speeds up learning since it allows (i) to increase the return by performing actions that cause the step and (ii) to sample episodes with a larger return for learning.
An approximation to the Q-function must predict the expected future return for every state-action pair. However, a Q-function that resembles a step-function is mostly constant. Therefore predictions are only necessary at the steps. We have to identify the relevant state-actions that cause the steps and then predict the size of the steps. An LSTM network (Hochreiter, 1991; Hochreiter & Schmidhuber, 1995; 1997a;b) can identify relevant state-actions that open the input gate to store the size of the steps in the memory cells. Consequently, LSTM only updates its states and changes its return prediction when a new relevant state-action pair is observed. Therefore, both the change of the prediction and opening input gates indicate Q-function steps through an LSTM network that predicts the return of an episode.
Reward Redistribution. We consider episodic Markov decision processes (MDPs), i.e., the reward is only given once at the end of the sequence. The Q-function is assumed to be a step function, that is, the task can be decomposed into sub-tasks (see previous paragraph). Reward redistribution aims at giving the differences in the Q-function of an optimal policy as a new immediate reward. Since the Q-function of an optimal policy is not known, we approximate it by predicting the expected return by an LSTM network or by an alignment model in this work. The differences in predictions determine the reward redistribution. The prediction model will first identify the largest steps in the Q-function as they decrease the prediction error most. Fortunately, just identifying the largest steps even with poor predictions speeds up learning considerably. See Figure 1 for a description of the reward redistribution.
Learning methods based on reward redistribution. The redistributed reward serves as reward for a subsequent learning method: (A) The Q-values can be directly estimated (Arjona-Medina et al., 2019), which is used in the experiments for the artificial tasks and BC pre-training for MineCraft. (B) Redistributed rewards can serve for learning with policy gradients like Proximal Policy Optimization (PPO) (Schulman et al., 2018), which is used in the MineCraft experiments. (C) Redistributed rewards can serve for temporal difference learning like Q-learning (Watkins, 1989).
LSTM models for reward redistribution. RUDDER uses an LSTM model for predicting the future return. The reward redistribution is the difference between two subsequent predictions. If a stateaction pair increases the prediction of the return, then it is immediately rewarded. Using state-action sub-sequences (s, a)0:t = (s0, a0, . . . , st, at), the redistributed reward is Rt+1 = g((s, a)0:t) − g((s, a)0:t−1), where g is an LSTM model that predicts the return of the episode. The LSTM model learns at first to approximate the largest steps of the Q-function since they reduce the prediction error the most.
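A minimal PyTorch sketch of such a return model g and the induced redistribution is given below; the feature encoding of state-action pairs and the training loop (regressing the per-step predictions onto the episodic return) are omitted, and all names are illustrative:

```python
import torch
import torch.nn as nn

class ReturnModel(nn.Module):
    def __init__(self, feature_dim, hidden_dim=64):
        super().__init__()
        self.lstm = nn.LSTM(feature_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 1)

    def forward(self, x):                    # x: (batch, T, feature_dim) state-action features
        h, _ = self.lstm(x)
        return self.head(h).squeeze(-1)      # prediction g((s, a)_{0:t}) for every t

def redistribute(model, episode_features):   # episode_features: (T, feature_dim)
    with torch.no_grad():
        g = model(episode_features.unsqueeze(0))[0]      # (T,) return predictions
    g_prev = torch.cat([g.new_zeros(1), g[:-1]])
    return g - g_prev                        # R_{t+1} = g((s,a)_{0:t}) - g((s,a)_{0:t-1})
```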
3 ALIGN-RUDDER: RUDDER WITH FEW DEMONSTRATIONS
In bioinformatics, sequence alignment identifies similarities between biological sequences to determine their evolutionary relationship (Needleman & Wunsch, 1970; Smith & Waterman, 1981). The result of the alignment of multiple sequences is a profile model. The profile model is a consensus sequence, a frequency matrix, or a Position-Specific Scoring Matrix (PSSM) (Stormo et al., 1982). New sequences can be aligned to a profile model and receive an alignment score that indicates how well the new sequences agree to the profile model.
Align-RUDDER uses such alignment techniques to align two or more high return demonstrations. For the alignment, we assume that the demonstrations follow the same underlying strategy, therefore they are similar to each other, analogous to being evolutionarily related. If the agent generates a state-action sequence (s, a)_{0:t−1}, then this sequence is aligned to the profile model g giving a score g((s, a)_{0:t−1}). The next action of the agent extends the state-action sequence by one state-action pair (s_t, a_t). The extended sequence (s, a)_{0:t} is also aligned to the profile model g giving another score g((s, a)_{0:t}). The redistributed reward R_{t+1} is the difference of these scores: R_{t+1} = g((s, a)_{0:t}) − g((s, a)_{0:t−1}) (see Eq. (1)). This difference indicates how much of the return is gained or lost by adding another sequence element.
Align-RUDDER scores how close an agent follows an underlying strategy, which has been extracted by the profile model. Similar to the LSTM model, we identify the largest steps in the Q-function via relevant events determined by the profile model. Therefore, redistributing the reward by sequence alignment fits into the RUDDER framework with all its theoretical guarantees. RUDDER’s theory for reward redistribution is valid for LSTM, other recurrent networks, attention mechanisms, or sequence and profile models.
Advantages of alignment compared to LSTM. Learning an LSTM model is severely limited when very few demonstrations are available. First, LSTM is known to require a large number of samples to generalize to new sequences. In contrast, sequence alignment requires only two examples to generalize well, as known from bioinformatics. Second, expert demonstrations have high rewards.
Therefore, random demonstrations with very low rewards have to be generated to provide contrast for training the LSTM. LSTM does not generalize well when only these extreme reward cases can be observed in the training set. In contrast, sequence alignment only uses examples that are closely related; that is, they belong to the same category (expert demonstrations).
Reward Redistribution by Sequence Alignment. The new reward redistribution approach consists of five steps, see Fig. 3: (I) Define events to turn episodes of state-action sequences into sequences of events. (II) Determine an alignment scoring scheme, so that relevant events are aligned to each other. (III) Perform a multiple sequence alignment (MSA) of the demonstrations. (IV) Compute the profile model like a PSSM. (V) Redistribute the reward: Each sub-sequence τt of a new episode τ is aligned to the profile. The redistributed reward Rt+1 is proportional to the difference of scores S based on the PSSM given in step (IV), i.e. Rt+1 ∝ S(τt)− S(τt−1). In the following, the five steps of Align-RUDDER’s reward redistribution are outlined. For the interested reader, each step is detailed in Sec. A.3 in the appendix. Finally, in Sec. A.7.3 in the appendix, we illustrate these five steps on the example of Minecraft.
(I) Defining Events. Instead of states, we consider differences of consecutive states to detect a change caused by an important event like achieving a sub-goal. An event is defined as a cluster of state differences. We use similarity-based clustering like affinity propagation (AP) (Frey & Dueck, 2007). If states are only enumerated, we suggest to use the “successor representation” (Dayan, 1993) or “successor features” (Barreto et al., 2017). We use the demonstrations combined with state-action sequences generated by a random policy to construct the successor representation.
A sequence of events is obtained from a state-action sequence by mapping states s to its cluster identifier e (the event) and ignoring the actions. Alignment techniques from bioinformatics assume sequences composed of a few events, e.g. 20 events. If there are too many events, good fitting alignments cannot be distinguished from random alignments. This effect is known in bioinformatics as “Inconsistency of Maximum Parsimony” (Felsenstein, 1978).
(II) Determining the Alignment Scoring System. A scoring matrix S with entries s_{i,j} determines the score for aligning event i with j. A priori, we only know that a relevant event should be aligned to itself but not to other events. Therefore, we set s_{i,j} = 1/p_i for i = j and s_{i,j} = α for i ≠ j. Here, p_i is the relative frequency of event i in the demonstrations. α is a hyper-parameter, which is typically a small negative number. This scoring scheme encourages alignment of rare events, for which p_i is small. For more details see Appendix Sec. A.3.
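As a minimal sketch (names are illustrative), this simplified scoring scheme can be written as:

```python
import numpy as np

def simple_scoring_matrix(p, alpha=-1.0):
    # s_ii = 1 / p_i for aligning an event with itself, s_ij = alpha (small negative) otherwise
    p = np.asarray(p, dtype=float)
    S = np.full((len(p), len(p)), alpha)
    np.fill_diagonal(S, 1.0 / p)
    return S
```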
(III) Multiple sequence alignment (MSA). An MSA algorithm maximizes the sum of all pairwise scores S_MSA = \sum_{i,j,\,i<j} \sum_{t=0}^{L} s_{i,j,t_i,t_j,t} in an alignment, where s_{i,j,t_i,t_j,t} is the score at alignment column t for aligning the event at position t_i in sequence i to the event at position t_j in sequence j. L ≥ T is the alignment length, since gaps make the alignment longer than the length of each sequence. We use ClustalW (Thompson et al., 1994) for MSA. MSA constructs a guiding tree by agglomerative hierarchical clustering of pairwise alignments between all demonstrations. This guiding tree allows identifying multiple strategies. For more details see Appendix Sec. A.3.
(IV) Position-Specific Scoring Matrix (PSSM) and MSA profile model. From the alignment, we construct a profile model as a) column-wise event probabilities and b) a PSSM (Stormo et al., 1982). The PSSM is a column-wise scoring matrix to align new sequences to the profile model. More details are given in Appendix Sec. A.3.
(V) Reward Redistribution. The reward redistribution is based on the profile model. A sequence τ = e_{0:T} (e_t is the event at position t) is aligned to the profile, which gives the score S(τ) = \sum_{l=0}^{L} s_{l,t_l}. Here, s_{l,t_l} is the alignment score for event e_{t_l} at position l in the alignment. Alignment gaps are columns to which no event was aligned, which have t_l = T+1 with gap penalty s_{l,T+1}. If τ_t = e_{0:t} is the prefix sequence of τ of length t+1, then the reward redistribution R_{t+1} for 0 ≤ t ≤ T is

R_{t+1} = (S(τ_t) − S(τ_{t−1})) C = g((s, a)_{0:t}) − g((s, a)_{0:t−1}),   R_{T+2} = G̃_0 − \sum_{t=0}^{T} R_{t+1},   (1)

where C = E_demo[G̃_0] / E_demo[\sum_{t=0}^{T} S(τ_t) − S(τ_{t−1})] with S(τ_{−1}) = 0. The original return of the sequence τ is G̃_0 = \sum_{t=0}^{T} R̃_{t+1} and the expectation of the return over demonstrations is E_demo. The constant C scales R_{t+1} to the range of G̃_0. R_{T+2} is the correction of the redistributed reward (Arjona-Medina et al., 2019), with zero expectation for demonstrations: E_demo[R_{T+2}] = 0. Since τ_t = e_{0:t} and e_t = f(s_t, a_t), we can set g((s, a)_{0:t}) = S(τ_t) C. We ensure strict return equivalence (Arjona-Medina et al., 2019) by G_0 = \sum_{t=0}^{T+1} R_{t+1} = G̃_0. The redistributed reward depends only on the past: R_{t+1} = h((s, a)_{0:t}).
Sub-tasks. The reward redistribution identifies sub-tasks as alignment positions with high redistributed rewards. These sub-tasks are indicated by high scores s in the PSSM. Reward redistribution also determines the terminal states of sub-tasks since it assigns rewards for solving the sub-tasks. However, reward redistribution and Align-RUDDER cannot guarantee that the redistributed reward is Markov. For redistributed Markov reward, options (Sutton et al., 1999), MAXQ (Dietterich, 2000), or recursive option composition (Silver & Ciosek, 2012) can be used.
Higher Order Markov Reward Redistributions. Align-RUDDER may lead to higher-order Markov redistribution. Corollary 1 in the appendix states that the optimality criterion from Theorem 2 in Arjona-Medina et al. (2019) also holds for higher-order Markov reward redistributions: if the expected redistributed higher-order Markov reward is the difference of Q-values, then the redistribution is optimal and there is no delayed reward. Furthermore, the optimal policies are the same as for the original problem. This corollary is the motivation for redistributing the reward to the steps in the Q-function. In the Appendix, Corollary 2 states that, under a condition, an optimal higher-order reward redistribution can be expressed as the difference of Q-values.
4 EXPERIMENTS
Align-RUDDER is compared on three artificial tasks with sparse & delayed rewards and few demonstrations to Behavioral Cloning with Q-learning (BC+Q), Soft Q Imitation Learning (SQIL) (Reddy et al., 2020), RUDDER (LSTM), and Deep Q-learning from Demonstrations (DQfD) (Hester et al., 2018). GAIL (Ho & Ermon, 2016) failed to solve the two artificial tasks, as reported previously for similar tasks (Reddy et al., 2020). Then, we test Align-RUDDER on the complex MineCraft ObtainDiamond task with episodic rewards (Guss et al., 2019b). All experiments use finite time MDPs with γ = 1 and episodic reward. More details are in Appendix Sec. A.6.
Alignment vs LSTM in 1D key-chest environment. We use a 1D key-chest environment to show the effectiveness of sequence alignment in a low data regime compared to an LSTM model. The agent has to collect the key and then open the chest to get a positive reward at the last timestep. See Appendix Fig. A.9 for a schematic representation of the environment. As the key-events (important state-action pairs) in this environment are known, we can compute the key-event detection rate of
a reward redistribution model. A key event is detected if the redistributed reward of an important state-action pair is larger than the average redistributed reward in the sequence. We train the reward redistribution models with 2, 5, and 10 training episodes and test on 1000 test episodes, averaged over ten trials. Align-RUDDER significantly outperforms LSTM (RUDDER) at detecting these key events in all cases, with an average key-event detection rate of 0.96 for sequence alignment vs. 0.46 for the LSTM models over all dataset sizes. See Appendix Fig. A.10 for the detailed results.
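A minimal sketch of this detection criterion; the variable names are illustrative and not taken from the released code:

```python
import numpy as np

def key_event_detection_rate(redistributed_rewards, key_indices):
    """A key event counts as detected if its redistributed reward exceeds the sequence average."""
    r = np.asarray(redistributed_rewards, dtype=float)
    threshold = r.mean()
    detected = sum(r[t] > threshold for t in key_indices)
    return detected / len(key_indices)
```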
Artificial tasks (I) and (II). They are variations of the gridworld rooms example (Sutton et al., 1999), where cells are the MDP states. In our setting, the states do not have to be time-aware for ensuring stationary optimal policies, but the unobserved used-up time introduces a random effect. The grid is divided into rooms. The agent's goal is to reach a target from an initial state with the fewest steps. It has to cross different rooms, which are connected by doors, except for the first room, which is only connected to the second room by a teleportation portal. The portal is introduced to avoid that BC initialization alone solves the task. It enforces that the agent learns to go to portal entry cells even when they are at positions not observed in the demonstrations. At every location, the agent can move up, down, left, or right. The state transitions are stochastic. An episode ends after T = 200 time steps. If the agent arrives at the target, it goes into an absorbing state where it stays until T = 200 without receiving further rewards. The reward is only given at the end of the episode. Demonstrations are generated by an optimal policy with a 0.2 exploration rate.
The five steps of Align-RUDDER's reward redistribution are: (1) Events are clusters of states obtained by Affinity Propagation using as similarity the successor representation based on demonstrations. (2) The scoring matrix is obtained according to (II), using ε = 0 and setting all off-diagonal values of the scoring matrix to −1. (3) ClustalW is used for the MSA of the demonstrations with zero gap penalties and no biological options. (4) The MSA supplies a profile model and a PSSM as in (IV). (5) Sequences generated by the agent are mapped to sequences of events according to (I). The reward is redistributed via differences of profile alignment scores of consecutive sub-sequences according to Eq. (1) using the PSSM. The reward redistribution determines sub-tasks like doors or portal arrival. The sub-tasks partition the Q-table into sub-tables, each representing a sub-agent. However, we optimize a single Q-table in these experiments; defining sub-tasks has no effect on learning in the tabular case.
All compared methods learn a Q-table and use an ε-greedy policy with ε = 0.2. The Q-table is initialized by behavioral cloning (BC). State-action pairs that are not visited in the demonstrations, and hence not initialized by BC, are initialized by drawing a sample from a normal distribution. Align-RUDDER learns the Q-table via RUDDER's Q-value estimation (learning method (A) from
Sec. 2). For BC+Q, RUDDER (LSTM), SQIL, and DQfD, a Q-table is learned by Q-learning. Hyperparameters are selected via grid search using the same amount of time for each method. For different numbers of demonstrations, performance is measured by the number of episodes needed to achieve 80% of the average return of the demonstrations. A Wilcoxon rank-sum test determines the significance of performance differences between Align-RUDDER and the other methods.
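A minimal sketch of this tabular setup, assuming the redistributed rewards have already been computed as in Eq. (1). The BC values and the update rule are simplified assumptions, not the exact published procedure; under an optimal redistribution, Eq. (23) in the appendix gives q(s, a) as the expected immediate redistributed reward, which motivates the update below:

```python
import numpy as np

def init_q_table(n_states, n_actions, bc_values, noise_std=0.1, seed=0):
    """Initialize the Q-table by behavioral cloning; unvisited pairs get Gaussian noise."""
    rng = np.random.default_rng(seed)
    Q = rng.normal(0.0, noise_std, size=(n_states, n_actions))
    for (s, a), v in bc_values.items():               # values estimated from the demonstrations
        Q[s, a] = v
    return Q

def epsilon_greedy(Q, s, rng, eps=0.2):
    return int(rng.integers(Q.shape[1])) if rng.random() < eps else int(np.argmax(Q[s]))

def update_from_episode(Q, states, actions, redistributed_rewards, lr=0.1):
    """Move Q(s_t, a_t) toward the redistributed reward R_{t+1} (RUDDER learning method (A))."""
    for s, a, r in zip(states, actions, redistributed_rewards):
        Q[s, a] += lr * (r - Q[s, a])
    return Q
```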
Task (I) environment is a 12×12 gridworld with four rooms. The target is in room #4, and the start is in room #1 with 20 portal entry locations. The state contains the portal entry for each episode. Fig. 5 shows the number of episodes required for achieving 80% of the average reward of the demonstrations for different numbers of demonstrations. Results are averaged over 100 trials. Align-RUDDER significantly outperforms all other methods for ≤ 10 demonstrations (p-values < 10⁻¹⁰). Task (II) is a 12×24 gridworld with eight rooms: the target is in room #8, and the start is in room #1 with 20 portal entry locations. Fig. 5 shows the results with settings as in Task (I). Align-RUDDER significantly outperforms all other methods for ≤ 10 demonstrations (p-values < 10⁻¹⁹). We also conduct ablation studies on the performance of Align-RUDDER while changing various parameters, such as environment stochasticity (see Sec. A.6.4) and the number of clusters (see Sec. A.6.5).
MineCraft. We further test Align-RUDDER on the MineCraft ObtainDiamond task from the MineRL dataset (Guss et al., 2019b). We do not use the intermediate rewards given for achieving sub-goals from the challenge, since Align-RUDDER is supposed to discover such sub-goals automatically via reward redistribution. We only give a reward for mining the diamond. This requires resource gathering and tool building in a hierarchical way. To the best of our knowledge, no pure learning method (where sub-goals are also learned) has mined a diamond yet (Scheller et al., 2020). The dataset contains demonstrations that are insufficient to directly learn a single policy (117 demonstrations, of which 67 mined a diamond).
Implementation: (1) A state consists of visual input and an inventory (incl. equip state). Both inputs are normalized to the same information, that is, the same number of components and the same variance. We cluster the differences of consecutive states (Arjona-Medina et al., 2019). Very large clusters are removed and small ones are merged, giving 19 clusters corresponding to events, which are characterized by inventory changes. Finally, demonstrations are mapped to sequences of events. (2) The scoring matrix is computed according to (II). (3) The ten shortest demonstrations that obtained a diamond are aligned by ClustalW with zero gap penalties and no biological options. (4) The multiple sequence alignment gives a profile model and a PSSM. (5) The reward is redistributed via differences of profile alignment scores of consecutive sub-sequences according to Eq. (1) using the PSSM. Based on the reward redistribution, we define sub-goals. Sub-goals are identified as profile model positions that obtain an average redistributed reward above a threshold for the demonstrations. Demonstration sub-sequences between sub-goals are considered as demonstrations for the sub-tasks. New sub-sequences generated by the agent are aligned to the profile model to determine whether a sub-goal is achieved. The redistributed reward between two sub-goals is given at the end of the sub-sequence; therefore, the sub-tasks also have an episodic reward. Fig. 4 shows how sub-goals are identified. Sub-agents are pre-trained on the demonstrations for the sub-tasks using BC, and further trained in the environment using Proximal Policy Optimization (PPO) (Schulman et al., 2018). BC pre-training corresponds to RUDDER's Q-value estimation (learning method (A) from above), while PPO corresponds to RUDDER's PPO training (learning method (B) from above).
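A minimal sketch of the sub-goal extraction described above; the threshold and the splitting convention are assumptions for illustration:

```python
import numpy as np

def identify_subgoals(demo_redistributed, threshold):
    """Sub-goals are profile positions whose average redistributed reward over the
    demonstrations exceeds a threshold.

    demo_redistributed: array of shape (n_demos, L), redistributed reward per profile position.
    """
    avg_per_position = demo_redistributed.mean(axis=0)
    return np.where(avg_per_position > threshold)[0]

def split_into_subtask_demos(demo, subgoal_times):
    """Cut a demonstration at the time steps where sub-goals are achieved; each piece
    serves as a demonstration for the corresponding sub-task."""
    cuts = [0] + sorted(subgoal_times) + [len(demo)]
    return [demo[a:b] for a, b in zip(cuts[:-1], cuts[1:])]
```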
Our main agent can perform all actions but additionally can execute sub-agents and learns via the redistributed reward. The main agent corresponds to and is treated like a Manager module (Vezhnevets et al., 2017). The main agent is initialized by executing sub-agents according to the alignment but can deviate from this strategy. When a sub-agent successfully completes its task, the main agent executes the next sub-agent according to the alignment. More details can be found in Appendix Sec. A.7.1. Using only ten demonstrations, Align-RUDDER is able to learn to mine a diamond. A diamond is obtained in 0.1% of the cases. With 0.5 success probability for each of the 31 extracted sub-tasks (skilled agents not random agents), the resulting success rate for mining the diamond would be 4.66× 10−10. Tab. 1 shows a comparison of methods on the MineCraft MineRL dataset by the maximum item score (Milani et al., 2020). Results are taken from (Milani et al., 2020), in particular from Figure 2, and completed by (Skrynnik et al., 2019; Kanervisto et al., 2020; Scheller et al., 2020). Align-RUDDER was not evaluated during the MineCraft MineRL challenge, but it follows the timesteps limit (8 million) imposed by the challenge. Align-RUDDER did not receive the intermediate rewards provided by the challenge that hint at sub-tasks, thus tries to solve a more difficult task. Recently, ForgER++ (Skrynnik et al., 2020) was able to mine a diamond in 0.0667% of the cases. We do not include it in Table 1 as it did not have any limitations on the number of timesteps. Also, ForgER++ generates sub-goals for MineCraft using a heuristic, while Align-RUDDER uses redistributed reward to automatically obtain sub-goals.
Analysis of MineCraft Agent Behaviour. For each agent and its sub-task, we estimate the success rate and its improvement during fine-tuning by averaging over return of multiple runs (see Fig. 6). For earlier sub-tasks, the agent has a relatively higher sub-task success rate. This also corresponds to the agent having access to much more data for earlier sub-tasks. During learning from demonstrations, much less data is available for training for later sub-tasks, as not all expert demonstrations achieve the later tasks. During online training using reinforcement learning, an agent has to successfully complete all earlier sub-tasks to generate trajectories for later sub-tasks. This is exponentially difficult. Lack of demonstrations and difficulty of the learned agent to generate data for later sub-tasks leads to degradation of the success rate in MineCraft.
5 RELATED WORK
Learning from demonstrations has been widely studied over the last 50 years (Billard et al., 2008). An example is imitation learning, which uses supervised techniques when the number of demonstrations is large enough (Michie et al., 1990; Pomerleau, 1991; Michie & Camacho, 1994; Schaal, 1996; Kakade & Langford, 2002). However, policies trained with imitation learning tend to drift away from demonstration trajectories due to a distribution shift (Ross & Bagnell, 2010). This effect can be mitigated (III et al., 2009; Ross & Bagnell, 2010; Ross et al., 2011; Judah et al., 2014; Sun et al., 2017;
2018). Many approaches use demonstrations for initialization, e.g. of policy networks (Taylor et al., 2011; Silver et al., 2016), value function networks (Hester et al., 2017; 2018), both networks (Zhang & Ma, 2018; Nair et al., 2018), or an experience replay buffer (Hosu & Rebedea, 2016). Beyond initialization, demonstrations are used to define constraints (Kim et al., 2013), generate sub-goals (Eysenbach et al., 2019), enforce regularization (Reddy et al., 2020), guide exploration (Subramanian et al., 2016; Jing et al., 2019), or shape rewards (Judah et al., 2014; Brys et al., 2015; Suay et al., 2016). Demonstrations may serve for inverse reinforcement learning (Ng & Russell, 2000; Abbeel & Ng, 2004; Ho & Ermon, 2016), which aims at learning a (non-sparse) reward function that best explains the demonstrations. Learning reward functions requires a large number of demonstrations (Syed & Schapire, 2007; Ziebart et al., 2008; Silva et al., 2019). Some approaches rely on few-shot or/and meta learning (Duan et al., 2017; Finn et al., 2017; Zhou et al., 2020). However, few-shot and meta learning demand a large set of auxiliary tasks or prerecorded data. Concluding, most methods that learn from demonstrations rely on the availability of many demonstrations (Khardon, 1999; Lopes et al., 2009), in particular, if using deep learning methods (Bengio & Lecun, 2007; Lakshminarayanan et al., 2016). Some methods can learn on few demonstrations like Soft Q Imitation Learning (SQIL) (Reddy et al., 2020), Generative Adversarial Imitation Learning (GAIL) (Ho & Ermon, 2016), and Deep Q-learning from Demonstrations (DQfD) (Hester et al., 2018).
6 DISCUSSION AND CONCLUSION
Discussion. Firstly, reward redistributions do not change the optimal policies (see Theorem 1 in Appendix). Thus, suboptimal reward redistributions due to alignment errors or choosing events that are non-essential for reaching the goal might not speed up learning, but they also do not change the optimal policies. Secondly, while Align-RUDDER can speed up learning even in complex environments, the resulting performance depends on the quality of the alignment model. A low quality alignment model can arise from multiple factors, one of which is having a large number (≫ 20) of distinct events. Clustering can be used to reduce the number of events, which could also lead to a low quality alignment model if too many relevant events are clustered together. While the optimal policy is not changed by poor demonstration alignment, the benefit of employing reward redistribution based on it diminishes. Thirdly, the alignment could fail if the demonstrations have different underlying strategies, i.e., no events are common to the demonstrations. We assume that the demonstrations follow the same underlying strategy; therefore they are similar to each other and can be aligned. However, if no common underlying strategy exists, then identifying those relevant events via alignment, which should receive high redistributed rewards, may fail. In this case, reward is given at the sequence end, when the redistributed reward is corrected, which leads to an episodic reward without reducing the delay of the rewards and without speeding up learning.
Conclusions. We have introduced Align-RUDDER to solve highly complex tasks with delayed and sparse reward from few demonstrations. We have shown experimentally that Align-RUDDER outperforms state of the art methods designed for learning from demonstrations in the regime of few demonstrations. On the MineCraft ObtainDiamond task, Align-RUDDER is, to the best of our knowledge, the first pure learning method to mine a diamond.
ETHICS STATEMENT
Impact on ML and related scientific fields. Our research has the potential to positively impact a wide variety of fields of life due to its general applicability. Most importantly, it has the potential to reduce the cost for training and deploying agents in real world applications and therefore enable systems that have not been possible until now.
However, any new development in machine learning can be applied for good or for bad. Our system can be used for medical applications, where it can save lives, but it could also be used for malevolent systems. It is society that decides how new technology is employed. However, we as scientists have to inform society and decision makers about our technologies. We have to show the limits of our technology, give ideas of possible applications, and point out possible misuse or erroneous operation of our new technology.
Impact on society. A big danger is that users rely too much on our new approach and use it without reflecting on the outcomes. For example, in medical treatment decisions doctors may rely on the technical system and push the responsibility toward the machine: “The machine suggested this treatment, therefore it is not my fault”. Another example is self-driving cars, where we see that drivers become more careless even if they are supposed to pay attention and keep their hands on the steering wheel. They trust too much in the technology, even if the technology does not justify this trust or is not mature.
Finally, our method can be deployed in companies for job automation. Therefore, there is the danger that some people lose their jobs, particularly those whose work is to perform predictable and repetitive tasks. An often-used example is the taxi driver who would lose their job because of self-driving cars. The same holds for many jobs in the production industry where automation can replace jobs. However, all industrialization has led to the loss of some jobs, while new jobs have been created.
Consequences of failures of the method. Depending on the application area, a failure of this method might be of lesser concern, such as a failed execution of a computer program. If our method is employed within a larger automation system, a failure can result in damages such as a car accident. However, this holds for almost all reinforcement learning methods, and usage and testing falls within the responsibility of the application area. We note that in this work, the method was only used in computer game environments.
Leveraging of biases in the data and potential discrimination. Our proposed method relies on human demonstrations and thereby human decisions, which are usually strongly biased. As almost all machine learning methods trained on human-influenced data, our method could learn to use and exploit those biases and make similar decisions (Solaiman et al., 2019). Therefore, the responsible use of our method depends on a careful selection of the training data and awareness of the potential biases within those.
REPRODUCIBILITY STATEMENT
Code for experiments on the FourRooms and EightRooms environment is included as supplementary material. The README contains step-by-step instructions to set up an environment and run the experiments. We have specified all the training details, e.g., hyperparameters and how they were chosen, in the Appendix (see Section A.6). We trained 100 replicates for each data point of the first set of experiments; the results are shown in Fig. 5. Using the code in the supplementary material, it is easy to reproduce our results for these experiments.
We also include code for the experiments done for MineCraft in the supplementary materials. All the preprocessing steps, hyperparameters and other implementation details are given in the Appendix (See Section A.7).
We also provide a deeper overview of the RUDDER (Arjona-Medina et al., 2019) theory in the Appendix (See Section A.2) as it is important for many design choices in Align-RUDDER.
Finally, a video showcasing the MineCraft agent is also provided as supplementary material.
A APPENDIX
CONTENTS OF THE APPENDIX
A.1 Introduction to the Appendix
A.2 Review Reward Redistribution
A.3 The Five Steps of Align-RUDDER’s Reward Redistribution
A.4 Sequence Alignment
A.5 Extended Related Work
A.6 Artificial Task Experiments
A.6.1 Hyperparameter Selection
A.6.2 Figures
A.6.3 Artificial Task p-values
A.6.4 Stochastic Environments
A.6.5 Changing number of Clusters
A.6.6 Key-Event Detection
A.7 Minecraft Experiments
A.7.1 MineCraft
A.7.2 Related Work and Steps Towards a General Agent
A.7.3 The Five Steps of Align-RUDDER Demonstrated on Minecraft
A.7.4 Implementation of our Algorithm for Minecraft
A.7.5 Policy and Value Network Architecture
A.7.6 Imitation Learning of Sub-Task Agents
A.7.7 Reinforcement Learning on Sub-Task Agents
A.8 Reproducing the Artificial Task Results
A.9 Software Libraries
A.10 Compute
LIST OF FIGURES
A.2 Clusters formed in the FourRooms and EightRooms environment
A.3 Clusters formed in the FourRooms and EightRooms environment
A.4 Clusters formed in the FourRooms and EightRooms environment
A.5 FourRooms and EightRooms environments
A.6 Reward redistribution for the FourRooms and EightRooms environments
A.11 Step (I): Define events and map demonstrations into sequences of events. First, we extract the sequence of states from human demonstrations, transform images into feature vectors using a pre-trained network and transform them into a sequence of consecutive state deltas (concatenating image feature vectors and inventory states). We cluster the resulting state deltas and remove clusters with a large number of members and merge smaller clusters. In the case of demonstrations for the ObtainDiamond task in Minecraft the resulting clusters correspond to obtaining specific resources and items required to solve the task. Then we map the demonstrations to sequences of events.
A.12 Step (II): Construct a scoring matrix using event probabilities from demonstrations for diagonal elements and setting off-diagonal to a constant value. The scores in the diagonal position are proportional to the inverse of the event frequencies. Thus, aligning rare events has higher score. Darker colors signify higher score values.
A.13 Step (III): Perform multiple sequence alignment (MSA) of the demonstrations. The MSA algorithm maximizes the pairwise sum of scores of all alignments. The score of an alignment at each position is given by the scoring matrix. As the off-diagonal entries are negative, the algorithm will always try to align an event to itself, while giving preference to events which give higher scores.
A.14 Step (IV): Compute a position-specific scoring matrix (PSSM). This matrix can be computed using the MSA (Step (III)) and the scoring matrix (Step (II)). Every column entry is for a position from the MSA. The score at a position (column) and for an event (row) depends on the frequency of that event at that position in the MSA. For example, the event in the last position is present in all the sequences, and thus gets a high score at the last position. But it is absent in the remaining position, and thus gets a score of zero elsewhere.
A.15 Step (V): A new sequence is aligned step by step to the profile model using the PSSM, resulting in an alignment score for each sub-sequence. The redistributed reward is then proportional to the difference of scores of subsequent alignments.
A.16 Conceptual overview of our MineRL agent
A.17 Conceptual architecture of Align-RUDDER MineRL policy and value networks
A.18 Discretization and interpolation of camera angles
A.19 Mapping of clusters to letters
A.20 Trajectory replay given by an exemplary consensus
A.1 INTRODUCTION TO THE APPENDIX
This is the appendix to the paper “Align-RUDDER: Learning from few Demonstrations by Reward Redistribution”. The appendix aims at supporting the main document and provides more detailed information about the implementation of our method for different tasks. The content of this document is summarized as follows:
• Section A.3 describes the five steps of Align-RUDDER’s reward redistribution in more detail. In particular, the scoring systems are described in more detail.
• Section A.4 provides a brief overview of sequence alignment methods and the hyperparameters used in our experiments.
• Section A.6 provides figures and tables to support the results of the experiments in Artificial Tasks (I) and (II).
• Section A.7 explains in detail the experiments conducted in the Minecraft ObtainDiamond task.
A.2 REVIEW REWARD REDISTRIBUTION
Reward redistribution and return decomposition are concepts introduced in RUDDER but also apply to Align-RUDDER as it is a variant of RUDDER. Reward redistribution based on return decomposition eliminates – or at least mitigates – delays of rewards while preserving the same optimal policies. Align-RUDDER is justified by the theory of return decomposition and reward redistribution when using multiple sequence alignment for constructing a reward redistribution model. In this section, we review the concepts of return decomposition and reward redistribution.
Preliminaries. We consider a finite MDP defined by the 5-tuple P = (S, A, R, p, γ) where the state space S and the action space A are sets of finite states s and actions a and R the set of bounded rewards r. For a given time step t, the corresponding random variables are St, At and Rt+1. Furthermore, P has transition-reward distributions p(St+1 = s′, Rt+1 = r | St = s, At = a), and a discount factor γ ∈ (0, 1], which we keep at γ = 1. A Markov policy π(a | s) is a probability of an action a given a state s. We consider MDPs with finite time horizon or with an absorbing state. The discounted return of a sequence of length T at time t is Gt = ∑_{k=0}^{T−t} γ^k Rt+k+1. As usual, the Q-function for a given policy π is qπ(s, a) = Eπ[Gt | St = s, At = a]. Eπ[x | s, a] is the expectation of x, where the random variable is a sequence of states, actions, and rewards that is generated with transition-reward distribution p, policy π, and starting at (s, a). The goal is to find an optimal policy π∗ = argmax_π Eπ[G0] maximizing the expected return at t = 0. We assume that the states s are time-aware (time t can be extracted from each state) in order to assure stationary optimal policies. According to Proposition 4.4.3 in (Puterman, 2005), a deterministic optimal policy π∗ exists.
Definitions. A sequence-Markov decision process (SDP) is defined as a decision process that has Markov transition probabilities but a reward probability that is not required to be Markov. Two SDPs P̃ and P with different reward probabilities are return-equivalent if they have the same expected return at t = 0 for each policy π, and strictly return-equivalent if they additionally have the same expected return for every episode. Since for every π the expected return at t = 0 is the same, return-equivalent SDPs have the same optimal policies. A reward redistribution is a procedure that —for a given sequence of a delayed reward SDP P̃— redistributes the realization or expectation of its return G̃0 along the sequence. This yields a new SDP P with R as random variable for the redistributed reward and the same optimal policies as P̃:

Theorem 1 (Arjona-Medina et al. (2019)). Both the SDP P̃ with delayed reward R̃t+1 and the SDP P with redistributed reward Rt+1 have the same optimal policies.
Proof. The proof can be found in (Arjona-Medina et al., 2019).
The delay of rewards is captured by the expected future rewards κ(m, t − 1) at time (t − 1). κ is defined as κ(m, t − 1) := Eπ[ ∑_{τ=0}^{m} Rt+1+τ | st−1, at−1 ], that is, at time (t − 1) the expected sum of future rewards from Rt+1 to Rt+1+m but not the immediate reward Rt. A reward redistribution is defined to be optimal, if κ(T − t − 1, t) = 0 for 0 ≤ t ≤ T − 1, which is equivalent to Eπ[Rt+1 | st−1, at−1, st, at] = q̃π(st, at) − q̃π(st−1, at−1):

Theorem 2 (Arjona-Medina et al. (2019)). We assume a delayed reward MDP P̃, with episodic reward. A new SDP P is obtained by a second order Markov reward redistribution, which ensures that P is return-equivalent to P̃. For a specific π, the following two statements are equivalent:
(I) κ(T − t− 1, t) = 0, i.e. the reward redistribution is optimal,
(II) Eπ [Rt+1 | st−1, at−1, st, at] = q̃π(st, at) − q̃π(st−1, at−1) (2)
An optimal reward redistribution fulfills κ(m, t − 1) = 0 for 1 ≤ t ≤ T and 0 ≤ m ≤ T − t.
Proof. The proof can be found in (Arjona-Medina et al., 2019).
This theorem shows that an optimal reward redistribution relies on steps q̃π(st, at)− q̃π(st−1, at−1) of the Q-function. Identifying the largest steps in the Q-function detects the largest rewards that have to be redistributed, which makes the largest progress towards obtaining an optimal reward redistribution.
Corollary 1 (Higher order Markov reward redistribution optimality conditions). We assume a delayed reward MDP P̃ , with episodic reward. A new SDP P is obtained by a higher order Markov reward redistribution. The reward redistribution ensures that P is return-equivalent to P̃ . If for a specific π
Eπ [Rt+1 | st−1, at−1, st, at] = q̃π(st, at) − q̃π(st−1, at−1) (3)
holds, then the higher order reward redistribution Rt+1 is optimal, that is, κ(T − t− 1, t) = 0.
Proof. The proof is just PART (II) of the proof of Theorem 2 in (Arjona-Medina et al., 2019). We repeat it here for completeness.
We assume that
Eπ [Rt+1 | st−1, at−1, st, at] = ht = q̃π(st, at) − q̃π(st−1, at−1) , (4)
where we abbreviate the expected Rt+1 by ht:
Eπ [Rt+1 | st−1, at−1, st, at] = ht . (5)
The expectations Eπ [. | st−1, at−1] like Eπ [ R̃T+1 | st−1, at−1 ] are expectations over all episodes
that contain the state-action pair (st−1, at−1) at time t−1. The expectations Eπ [. | st−1, at−1, st, at] like Eπ [ R̃T+1 | st−1, at−1, st, at ] are expectations over all episodes that contain the state-action
pairs (st−1, at−1) at time t− 1 and (st, at) at time t. The Q-values are defined as
q̃π(st, at) = Eπ[ ∑_{k=0}^{T−t} R̃t+k+1 | st, at ] = Eπ[ R̃T+1 | st, at ] ,   (6)

qπ(st, at) = Eπ[ ∑_{k=0}^{T−t} Rt+k+1 | st, at ] ,   (7)
which are expectations over all trajectories that contain (st, at) at time t. Since P̃ is Markov, for q̃π only the suffix trajectories beginning at (st, at) enter the expectation.
The definition of κ(m, t − 1) for 1 ≤ t ≤ T and 0 ≤ m ≤ T − t was κ(m, t − 1) = Eπ[ ∑_{τ=0}^{m} Rt+1+τ | st−1, at−1 ]. We have to prove κ(T − t − 1, t) = 0.
First, we consider m = 0 and 1 ≤ t ≤ T, therefore κ(0, t − 1) = Eπ[Rt+1 | st−1, at−1]. Since the original MDP P̃ has episodic reward, we have r̃(st−1, at−1) = E[ R̃t | st−1, at−1 ] = 0 for 1 ≤ t ≤ T. Therefore, we obtain:

q̃π(st−1, at−1) = r̃(st−1, at−1) + ∑_{st,at} p(st, at | st−1, at−1) q̃π(st, at)   (8)
               = ∑_{st,at} p(st, at | st−1, at−1) q̃π(st, at) .

Using this equation we obtain for 1 ≤ t ≤ T:

κ(0, t − 1) = Eπ[Rt+1 | st−1, at−1]   (9)
            = Est,at[ q̃π(st, at) − q̃π(st−1, at−1) | st−1, at−1 ]
            = ∑_{st,at} p(st, at | st−1, at−1) ( q̃π(st, at) − q̃π(st−1, at−1) )
            = q̃π(st−1, at−1) − ∑_{st,at} p(st, at | st−1, at−1) q̃π(st−1, at−1)
            = q̃π(st−1, at−1) − q̃π(st−1, at−1) = 0 .
Next, we consider the expectation of ∑_{τ=0}^{m} Rt+1+τ for 1 ≤ t ≤ T and 1 ≤ m ≤ T − t (for m > 0):

κ(m, t − 1) = Eπ[ ∑_{τ=0}^{m} Rt+1+τ | st−1, at−1 ]   (10)
            = Eπ[ ∑_{τ=0}^{m} ( q̃π(sτ+t, aτ+t) − q̃π(sτ+t−1, aτ+t−1) ) | st−1, at−1 ]
            = Eπ[ q̃π(st+m, at+m) − q̃π(st−1, at−1) | st−1, at−1 ]
            = Eπ[ Eπ[ ∑_{τ=t+m}^{T} R̃τ+1 | st+m, at+m ] | st−1, at−1 ] − Eπ[ Eπ[ ∑_{τ=t−1}^{T} R̃τ+1 | st−1, at−1 ] | st−1, at−1 ]
            = Eπ[ R̃T+1 | st−1, at−1 ] − Eπ[ R̃T+1 | st−1, at−1 ] = 0 .
We used that R̃t+1 = 0 for t < T .
For the particular cases t = τ + 1 and m = T − t = T − τ − 1 we have

κ(T − τ − 1, τ) = 0 .   (11)

That is exactly what we wanted to prove.
Corollary 1 explicitly states that the optimality criterion ensures an optimal reward redistribution even if the reward redistribution is higher order Markov. For Align-RUDDER we may obtain a higher order Markov reward redistribution due to the profile alignment of the sub-sequences.
Corollary 2 (Higher order Markov reward redistribution optimality representation). We assume a delayed reward MDP P̃, with episodic reward, and that a new SDP P is obtained by a higher order Markov reward redistribution. The reward redistribution ensures that P is strictly return-equivalent to P̃. We assume that the reward redistribution is optimal, that is, κ(T − t − 1, t) = 0. If the condition
Eπ[ ∑_{τ=0}^{T−t−1} Rt+2+τ | st, at ] = Eπ[ ∑_{τ=0}^{T−t−1} Rt+2+τ | s0, a0, . . . , st, at ]   (12)
holds, then
Eπ [Rt+1 | st−1, at−1, st, at] = q̃π(st, at) − q̃π(st−1, at−1) . (13)
Proof. By and large, the proof is PART (I) of the proof of Theorem 2 in (Arjona-Medina et al., 2019). We repeat it here for completeness.
We assume that the reward redistribution is optimal, that is,
κ(T − t− 1, t) = 0 . (14)
We abbreviate the expected Rt+1 by ht:
Eπ [Rt+1 | st−1, at−1, st, at] = ht . (15)
In (Arjona-Medina et al., 2019) Lemma A4 is as follows.
Lemma 1. Two strictly return-equivalent SDPs P̃ and P have the same expected return for each start state-action sub-sequence (s0, a0, . . . , st, at), 0 ≤ t ≤ T :
Eπ [ G̃0 | s0, a0, . . . , st, at ] = Eπ [G0 | s0, a0, . . . , st, at] . (16)
The assumptions of Lemma 1 hold for the delayed reward MDP P̃ and the redistributed reward SDP P, since a reward redistribution ensures strictly return-equivalent SDPs. Therefore, for a given state-action sub-sequence (s0, a0, . . . , st, at), 0 ≤ t ≤ T :
Eπ [ G̃0 | s0, a0, . . . , st, at ] = Eπ [G0 | s0, a0, . . . , st, at] (17)
with G0 = ∑_{τ=0}^{T} Rτ+1 and G̃0 = R̃T+1. The Markov property of the MDP P̃ ensures that the future reward from t + 1 on is independent of the past sub-sequence s0, a0, . . . , st−1, at−1:

Eπ[ ∑_{τ=0}^{T−t} R̃t+1+τ | st, at ] = Eπ[ ∑_{τ=0}^{T−t} R̃t+1+τ | s0, a0, . . . , st, at ] .   (18)
According to Eq. (12), the future reward from t + 2 on is independent of the past sub-sequence s0, a0, . . . , st−1, at−1:
Eπ[ ∑_{τ=0}^{T−t−1} Rt+2+τ | st, at ] = Eπ[ ∑_{τ=0}^{T−t−1} Rt+2+τ | s0, a0, . . . , st, at ] .   (19)
Using these properties we obtain
q̃π(st, at) = Eπ[ ∑_{τ=0}^{T−t} R̃t+1+τ | st, at ]   (20)
            = Eπ[ ∑_{τ=0}^{T−t} R̃t+1+τ | s0, a0, . . . , st, at ]
            = Eπ[ R̃T+1 | s0, a0, . . . , st, at ]
            = Eπ[ ∑_{τ=0}^{T} R̃τ+1 | s0, a0, . . . , st, at ]
            = Eπ[ G̃0 | s0, a0, . . . , st, at ]
            = Eπ[ G0 | s0, a0, . . . , st, at ]
            = Eπ[ ∑_{τ=0}^{T} Rτ+1 | s0, a0, . . . , st, at ]
            = Eπ[ ∑_{τ=0}^{T−t−1} Rt+2+τ | s0, a0, . . . , st, at ] + ∑_{τ=0}^{t} hτ
            = Eπ[ ∑_{τ=0}^{T−t−1} Rt+2+τ | st, at ] + ∑_{τ=0}^{t} hτ
            = κ(T − t − 1, t) + ∑_{τ=0}^{t} hτ
            = ∑_{τ=0}^{t} hτ .
We used the optimality condition
κ(T − t − 1, t) = Eπ[ ∑_{τ=0}^{T−t−1} Rt+2+τ | st, at ] = 0 .   (21)
It follows that
Eπ [Rt+1 | st−1, at−1, st, at] = ht = q̃π(st, at) − q̃π(st−1, at−1) . (22)
This is exactly what we wanted to prove.
This corollary shows that optimal reward redistributions can be expressed as difference of Q-values if Eq. (12) holds. Eq. (12) states that the past can be averaged out. However, there may exist optimal reward redistributions for which Eq. (12) does not hold.
If the reward redistribution is optimal, the Q-values of P are given by qπ(st, at) = q̃π(st, at) − ψπ(st) and therefore P̃ and P have the same advantage function:

Theorem 3 (Arjona-Medina et al. (2019)). If the reward redistribution is optimal, then the Q-values of the SDP P are qπ(st, at) = r(st, at) and
qπ(st, at) = q̃ π(st, at) − Est−1,at−1 [q̃π(st−1, at−1) | st] = q̃π(st, at) − ψπ(st) . (23)
The SDP P and the original MDP P̃ have the same advantage function.
Proof. The proof can be found in (Arjona-Medina et al., 2019).
For an optimal reward redistribution only the expectation of the immediate reward r(st, at) = Eπ[Rt+1 | st, at] must be estimated. This considerably simplifies learning.

Learning methods according to Arjona-Medina et al. (2019). The redistributed reward serves as reward for a subsequent learning method, which can be Type A, B, or C as described in Arjona-Medina et al. (2019). Type A methods estimate the Q-values. They can be estimated directly according to Eq. (23) assuming an optimal redistribution (Type A variant i). Q-values can be corrected for a non-optimal reward redistribution by additionally estimating κ (Type A variant ii). Q-value estimation can use eligibility traces (Type A variant iii). Type B methods use the redistributed rewards for policy gradients like Proximal Policy Optimization (PPO) (Schulman et al., 2018). Type C methods use TD learning like Q-learning (Watkins, 1989), where immediate and future reward must be drawn together, as typically done. For all these learning methods, demonstrations can be used for initialization (e.g. experience replay buffer) or pre-training (e.g. policy network with behavioral cloning). Recently, the convergence of RUDDER learning methods has been proven under commonly used assumptions (Holzleitner et al., 2020).
Non-optimal reward redistribution and Align-RUDDER. According to Theorem 1, non-optimal reward redistributions do not change the optimal policies. The value κ(T − t − 1, t) measures the remaining delayed reward. The smaller κ is, the faster is the learning process. For Monte Carlo (MC) estimates, smaller κ reduces the variance of the future rewards, and, therefore the variance of the estimation. For temporal difference (TD) estimates, smaller κ reduces the amount of information that has to flow back. Align-RUDDER dramatically reduces the amount of delayed rewards by identifying key events via multiple sequence alignment, to which reward is redistributed. For an episodic MDP, a reward that is redistributed to time t reduces all κ(m, τ) with t ≤ τ < T by the expectation of the reward. Therefore, in most cases Align-RUDDER makes κ-values much smaller.
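For illustration, the remaining delay κ can be monitored with a simple Monte Carlo estimate over sampled episodes; this sketch is an assumption about how one could track it and is not part of the published implementation:

```python
import numpy as np

def estimate_kappa(episodes, t, m):
    """Monte Carlo estimate of kappa(m, t-1) = E[ sum_{tau=0}^{m} R_{t+1+tau} ],
    averaged over the given episodes (assumed to share the conditioning prefix).

    episodes: list of reward arrays with episodes[i][k] = R_{k+1} of episode i.
    """
    sums = [np.sum(ep[t:t + m + 1]) for ep in episodes if len(ep) > t]
    return float(np.mean(sums)) if sums else 0.0
```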
A.3 THE FIVE STEPS OF ALIGN-RUDDER’S REWARD REDISTRIBUTION
The new reward redistribution approach consists of five steps, see Fig. A.1: (I) Define events to turn episodes of state-action sequences into sequences of events. (II) Determine an alignment scoring scheme, so that relevant events are aligned to each other. (III) Perform a multiple sequence alignment (MSA) of the demonstrations. (IV) Compute the profile model and the PSSM. (V) Redistribute the reward: Each sub-sequence τt of a new episode τ is aligned to the profile. The redistributed reward Rt+1 is proportional to the difference of scores S based on the PSSM given in step (IV), i.e. Rt+1 ∝ S(τt)− S(τt−1).
(I) Defining Events. Alignment techniques assume that sequences consist of few symbols, e.g. about 20 symbols, the events. It is crucial to keep the number of events small in order to increase the difference between a random alignment and an alignment of demonstrations. If there are many events, then two demonstrations might have few events that can be matched, which cannot be well distinguished from random alignments. This effect is known in bioinformatics as “Inconsistency of Maximum Parsimony” (Felsenstein, 1978). The events can be the original state-action pairs, clusters thereof, or other representations of state-action pairs, e.g. indicating changes of inventory, health, energy, skills etc. In general, we define events as clusters of states or state-action pairs. A sequence of events is obtained from a state-action sequence by substituting states or state-actions by their cluster identifier. In order to cluster states, a similarity measure between them is required. We suggest to use the “successor representation” (Dayan, 1993) of the states, which gives a similarity matrix based on how connected two states are given a policy. Successor representations have been used before (Machado et al., 2017; Ramesh et al., 2019) to obtain important events for option learning. For computing the successor representation, we use the demonstrations combined with state-action sequences generated by a random policy. For high dimensional state spaces, “successor features” (Barreto et al., 2017) can be used. We use similarity-based clustering methods like affinity propagation (AP) (Frey & Dueck, 2007). For AP the similarity matrix does not have to be symmetric and the number of clusters need not be known. State-action pairs (s, a) are mapped to events e.
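A minimal sketch of this event-definition step using scikit-learn's affinity propagation. The successor-representation estimate below is a simplified discounted-visitation count, and the details (discount, similarity choice) are assumptions rather than the exact published procedure:

```python
import numpy as np
from sklearn.cluster import AffinityPropagation

def successor_representation(state_sequences, n_states, gamma=0.95):
    """SR[s, s'] roughly estimates E[ sum_t gamma^t 1{S_t = s'} | S_0 = s ] from
    demonstrations plus random-policy rollouts (lists of state indices)."""
    sr = np.zeros((n_states, n_states))
    for seq in state_sequences:
        for i, s in enumerate(seq):
            for j in range(i, len(seq)):
                sr[s, seq[j]] += gamma ** (j - i)
    return sr / np.maximum(sr.sum(axis=1, keepdims=True), 1e-8)

def states_to_events(state_sequences, n_states):
    """Cluster states by SR similarity; the cluster index of a state is its event."""
    sr = successor_representation(state_sequences, n_states)
    ap = AffinityPropagation(affinity="precomputed", random_state=0)
    labels = ap.fit_predict(sr @ sr.T)                 # SR overlap used as similarity matrix
    return [[labels[s] for s in seq] for seq in state_sequences], labels
```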
(II) Determining the Alignment Scoring System. Alignment algorithms distinguish similar sequences from dissimilar sequences using a scoring system. A scoring matrix S has entries si,j that give the score for aligning event i with j. The MSA score SMSA of a multiple sequence alignment is the sum of all pairwise scores: SMSA = ∑_{i,j, i<j} ∑_{t=0}^{L} s_{x_{i,t}, x_{j,t}}, where x_{i,t} means that event x_{i,t} is at position t for sequence τi = ei,0:T in the alignment, analogously for x_{j,t} and the sequence τj = ej,0:T, and L is the alignment length. Note that L ≥ T and x_{i,t} ≠ e_{i,t}, since gaps are present in the alignment. In the alignment, events should have the same probability of being aligned as they would have if we knew the strategy and aligned demonstrations accordingly. The theory of high scoring segments gives a scoring scheme with these alignment probabilities (Karlin & Altschul, 1990; Karlin et al., 1990; Altschul et al., 1990). Event i is observed with probability pi in the demonstrations, therefore a random alignment aligns event i with j with probability pipj. An alignment algorithm maximizes the MSA score SMSA and, thereby, aligns events i and j with probability qij for demonstrations. High values of qij mean that the MSA often aligns events i and j in the demonstrations using the scoring matrix S with entries si,j. According to Theorem 2 and Equation [3] in Karlin & Altschul (1990), asymptotically with the sequence length, we have si,j = ln(qij/(pipj))/λ∗, where λ∗ is the unique positive root of ∑_{i=1}^{n} ∑_{j=1}^{n} pipj exp(λ si,j) = 1 (Equation [4] in Karlin & Altschul (1990)). We can now choose a desired probability qij and then compute the scoring matrix S with entries si,j. High values of qij should indicate relevant events for the strategy. A priori, we only know that a relevant event should be aligned to itself, while we do not know which events are relevant. Therefore we set qij to large values for every i = j and to low values for i ≠ j. Concretely, we set qij = pi − ε for i = j and qij = ε/(n − 1) for i ≠ j, where n is the number of different possible events. Events with smaller pi receive a higher score si,i when aligned to themselves since this self-match is less often observed when randomly matching events (pipi is the probability of a random self-match). Any prior knowledge about events should be incorporated into qij.
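A minimal sketch of this scoring scheme. Because the chosen qij sum to one, λ∗ only fixes the overall scale of the scores, so it is set to 1 below; the small default ε and this simplification are assumptions for illustration (the gridworld experiments instead use ε = 0 and set all off-diagonal scores to −1):

```python
import numpy as np

def scoring_matrix(p, eps=1e-3, lam=1.0):
    """Karlin-Altschul style scores s_ij = ln(q_ij / (p_i p_j)) / lambda with
    q_ii = p_i - eps and q_ij = eps / (n - 1) for i != j."""
    p = np.asarray(p, dtype=float)
    n = len(p)
    q = np.full((n, n), eps / max(n - 1, 1))
    np.fill_diagonal(q, p - eps)
    with np.errstate(divide="ignore"):
        return np.log(q / np.outer(p, p)) / lam
```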
(III) Multiple sequence alignment (MSA). MSA first produces pairwise alignments between all demonstrations. Then, a guiding tree is produced via agglomerative hierarchical clustering of the sequences according to their pairwise alignment scores. Demonstrations which follow the same strategy appear in the same cluster in the guiding tree. Each cluster is aligned separately via MSA to address different strategies. However, if there is no cluster of demonstrations, then the alignment will fail. MSA methods like ClustalW (Thompson et al., 1994) or MUSCLE (Edgar, 2004) can be used.
(IV) Position-Specific Scoring Matrix (PSSM) and Profile. From the final alignment, we construct a) an MSA profile (column-wise event frequencies qi,t) and b) a PSSM (Stormo et al., 1982) which is used for aligning new sequences to the profile of the MSA. To compute the PSSM (column-wise scores si,t), we apply Theorem 2 and Equation [3] in Karlin & Altschul (1990). Event i is observed with probability pi in the data. For each position t in the alignment, we compute qi,t, which indicates the frequency of event i at position t. The PSSM is si,t = ln(qi,t/pi)/λ∗t, where λ∗t is the unique positive root of ∑_{i=1}^{n} pi exp(λ si,t) = 1 (Equation [1] in Karlin & Altschul (1990)). If we align a new sequence that follows the underlying strategy (a new demonstration) to the profile model, we would see that event i is aligned to position t in the profile with probability qi,t.
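A minimal sketch of the PSSM computation under the same simplification (normalized column frequencies with pseudocounts, so the per-column scale λ∗t is fixed to 1); gap handling is omitted and the details are assumptions:

```python
import numpy as np

def pssm_from_msa(msa, p, pseudocount=1e-3, lam=1.0):
    """Position-specific scores s_{i,t} = ln(q_{i,t} / p_i) / lambda.

    msa: array of shape (n_sequences, L) with event indices per alignment column.
    p:   background event probabilities p_i.
    """
    p = np.asarray(p, dtype=float)
    n_events, L = len(p), msa.shape[1]
    q = np.full((n_events, L), pseudocount)
    for col in range(L):
        events, counts = np.unique(msa[:, col], return_counts=True)
        q[events, col] += counts
    q /= q.sum(axis=0, keepdims=True)                  # column-wise frequencies q_{i,t}
    return np.log(q / p[:, None]) / lam
```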
(V) Reward Redistribution. The reward redistribution is based on the profile model. A sequence τ = e0:T (et is the event at position t) is aligned to the profile, which gives the score S(τ) = ∑_{t=0}^{L} s_{x_t,t}. Here, si,t is the alignment score for event i at position t, and xt is the event of τ at position t in the alignment. L is the profile length, where L ≥ T and xt ≠ et, because of gaps in the alignment. If τt = e0:t is the prefix sequence of τ of length t + 1, then the reward redistribution Rt+1 for 0 ≤ t ≤ T is

Rt+1 = (S(τt) − S(τt−1)) C = g((s, a)0:t) − g((s, a)0:t−1), RT+2 = G̃0 − ∑_{t=0}^{T} Rt+1,   (24)

where C = Edemo[G̃0] / Edemo[∑_{t=0}^{T} S(τt) − S(τt−1)] and G̃0 = ∑_{t=0}^{T} R̃t+1 is the original return of the sequence τ, with S(τ−1) = 0. Edemo is the expectation over demonstrations, and C scales Rt+1 to the range of G̃0. RT+2 is the correction of the redistributed reward (Arjona-Medina et al., 2019), with zero expectation for demonstrations: Edemo[RT+2] = 0. Since τt = e0:t and et = f(st, at), we can set g((s, a)0:t) = S(τt) C. We ensure strict return equivalence, since G0 = ∑_{t=0}^{T+1} Rt+1 = G̃0. The redistributed reward depends only on the past, that is, Rt+1 = h((s, a)0:t). For computational efficiency, the alignment of τt−1 can be extended to one for τt, like exact matches are extended to high-scoring sequence pairs with the BLAST algorithm (Altschul et al., 1990; 1997).
Sub-tasks. The reward redistribution identifies sub-tasks, which are alignment positions with high redistributed reward. It also determines the terminal states and automatically assigns reward for solving the sub-tasks. However, reward redistribution and Align-RUDDER cannot guarantee that the reward is Markov. For redistributed reward that is Markov, the option framework (Sutton et al., 1999), the MAXQ framework (Dietterich, 2000), or recursive composition of option models (Silver & Ciosek, 2012) can be used as subsequent approaches to hierarchical reinforcement learning.
A.4 SEQUENCE ALIGNMENT
In bioinformatics, sequence alignment identifies regions of significant similarity among different biological sequences to establish evolutionary relationships between those sequences. In 1970, Needleman and Wunsch proposed a global alignment method based on dynamic programming (Needleman & Wunsch, 1970). This approach ensures the best possible alignment given a substitution matrix, such as PAM (Dayhoff, 1978) or BLOSUM (Henikoff & Henikoff, 1992), and other parameters to penalize gaps in the alignment. The method of Needleman and Wunsch is of O(mn) complexity both in memory and time, which could be prohibitive for long sequences like genomes. An optimization of this method by Hirschberg (1975) reduces memory to O(m + n), but still requires O(mn) time.

Later, Smith and Waterman developed a local alignment method for sequences (Smith & Waterman, 1981). It is a variation of Needleman and Wunsch's method, keeping the substitution matrix and the gap-scoring scheme but setting cells in the similarity matrix with negative scores to zero. The complexity of this algorithm is O(n²M). Osamu Gotoh published an optimization of this method, running in O(mn) time (Gotoh, 1982).
The main difference between both methods is the following:
• The global alignment method by Needleman and Wunsch aligns the sequences fixing the first and the last position of both sequences. It attempts to align every symbol in the sequence, allowing some gaps, but the main purpose is to get a global alignment. This is especially useful when the two sequences are highly similar. For instance:
ATCGGATCGACTGGCTAGATCATCGCTGG
CGAGCATC-ACTGTCT-GATCGACCTTAG
* *** **** ** **** * * *
• As an alternative to global methods, the local method of Smith and Waterman aligns the sequences with a higher degree of freedom, allowing the alignment to start or end with gaps.
This is extremely useful when the two sequences are substantially dissimilar in general but suspected of having a highly related sub region.
ATCAAGGAGATCATCGCTGGACTGAGTGGCT----ACGTGGTATGT
ATC----CGATCATCGCTGG-CTGATCGACCTTCTACGT------
*** ************ **** * * ****
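For concreteness, a compact textbook sketch of the Needleman-Wunsch dynamic program described above, with a substitution-score function and a linear gap penalty; this is an illustrative variant, not the implementation used in the paper. With score(a, b) = 1 for a match, −1 otherwise, and gap = −1, the score for aligning ATCG with ACG is 2.

```python
import numpy as np

def needleman_wunsch(x, y, score, gap=-1.0):
    """Global alignment score of sequences x and y (O(len(x) * len(y)) time and memory)."""
    m, n = len(x), len(y)
    F = np.zeros((m + 1, n + 1))
    F[:, 0] = gap * np.arange(m + 1)                   # leading gaps in y
    F[0, :] = gap * np.arange(n + 1)                   # leading gaps in x
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            F[i, j] = max(F[i - 1, j - 1] + score(x[i - 1], y[j - 1]),   # (mis)match
                          F[i - 1, j] + gap,                             # gap in y
                          F[i, j - 1] + gap)                             # gap in x
    return F[m, n]
```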
A.4.0.1 Multiple Sequence Alignment algorithms. The sequence alignment algorithms by Needleman and Wunsch and Smith and Waterman are limited to aligning two sequences. The approaches for generalizing these algorithms to multiple sequences can be classified into four categories:
• Exact methods (Wang & Jiang, 1994).
• Progressive methods: ClustalW (Thompson et al., 1994), Clustal Omega (Sievers et al., 2014), T-Coffee (Notredame et al., 2000).
• Iterative and search algorithms: DIALIGN (Morgenstern, 2004), MultiAlign (Corpet, 1988).
• Local methods: eMOTIF (Mccammon & Wolynes, 1998), PROSITE (Bairoch & Bucher, 1994).
For more details, visit Sequence Comparison: Theory and methods (Chao & Zhang, 2009).
In our experiments, we use ClustalW from Biopython (Cock et al., 2009) with the following parameters:
clustalw2 -ALIGN -CLUSTERING=UPGMA -NEGATIVE
    -INFILE={infile} -OUTFILE={outfile}
    -PWMATRIX={scores} -PWGAPOPEN=0 -PWGAPEXT=0
    -MATRIX={scores} -GAPOPEN=0 -GAPEXT=0 -CASE=UPPER
    -NOPGAP -NOHGAP -MAXDIV=0 -ENDGAPS -NOVGAP
    -NEWTREE={outputtree} -TYPE=PROTEIN -OUTPUT=GDE
where the PWMATRIX and MATRIX are computed according to step (II) in Sec. 3 of the main paper.
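Since this command string is assembled in Python, one minimal way to issue the same call is through `subprocess`; the helper below simply wraps the flags listed above, and the file paths are placeholders:

```python
import subprocess

def run_clustalw(infile, outfile, scores, outputtree):
    """Run clustalw2 with the parameters listed above; requires the clustalw2 binary on PATH."""
    cmd = [
        "clustalw2", "-ALIGN", "-CLUSTERING=UPGMA", "-NEGATIVE",
        f"-INFILE={infile}", f"-OUTFILE={outfile}",
        f"-PWMATRIX={scores}", "-PWGAPOPEN=0", "-PWGAPEXT=0",
        f"-MATRIX={scores}", "-GAPOPEN=0", "-GAPEXT=0", "-CASE=UPPER",
        "-NOPGAP", "-NOHGAP", "-MAXDIV=0", "-ENDGAPS", "-NOVGAP",
        f"-NEWTREE={outputtree}", "-TYPE=PROTEIN", "-OUTPUT=GDE",
    ]
    return subprocess.run(cmd, check=True, capture_output=True, text=True)
```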
A.5 EXTENDED RELATED WORK
Align-RUDDER makes it possible to identify sub-goals and sub-tasks; therefore it is related to hierarchical reinforcement learning (HRL) approaches like the option framework (Sutton et al., 1999), | 1. What is the main contribution of the paper, and how does it improve upon previous methods?
2. What are the strengths and limitations of the proposed method, particularly regarding its ability to identify subgoals and redistribute rewards?
3. How does the reviewer assess the clarity and quality of the paper's explanation of the RUDDER approach and the Align-RUDDER technique?
4. What are the reviewer's concerns regarding Figure 1 and the characterization of RUDDER's EFR representation?
5. How does the reviewer evaluate the effectiveness of Align-RUDDER compared to other imitation learning methods, especially in tasks with sparse and delayed reward signals? | Summary Of The Paper
Review | Summary Of The Paper
The authors propose Align-RUDDER, which aligns given task demonstrations (sharing a common strategy) using a profile model, to identify common events as subgoals and redistribute reward to realize more efficient and effective RL. Align-RUDDER replaces RUDDER's replay buffer with task demonstrations and RUDDER's LSTM with a simpler profile model to maximally leverage high reward but scarce demonstrations in tasks characterized by sparse and delayed reward signals, which are notoriously difficult to explore effectively.
Review
Strengths:
Simple, effective and general technique for identifying subgoals with few expert demonstrations sharing a common strategy
Large gains over RUDDER and sota imitation learning methods on 2 gridworld tasks
The only technique that was able to mine a diamond in the minecraft challenge with automatically learned subgoals (infrequently)
Current Limitations:
As shown in figure 6, the success rate of Align-RUDDER is high, but then degrades rapidly in the last quarter of the minecraft task. An analysis of what elements of Align-RUDDER contributed to that degradation would be highly instructive in understanding the current limitations of the approach, and future research directions.
Align-RUDDER's "five steps" are described completely enough, but I feel that the presentation needs to be further improved for those not already familiar with profile models (many). Expanding figure 3 and better connecting the description with it would improve the paper substantially.
The assumption of a single underlying strategy in Align-RUDDER seems like a significant limitation. A clustering step to identify multi-strategy profiles will be required in many situations. How common such situations are and how they would be detected and mitigated, e.g., with multi-strategy profiles, is not adequately discussed.
Figure 1, which describes the basic intuitions behind the RUDDER approach, distributes all of the reward to earlier subtasks, leaving none for the end-goal, which seems like an issue...
Post-rebuttal comments:
Thank you to the authors for their response. I have looked at the other reviews and re-read parts of the updated paper, and I tend to agree with the more critical reviews wrt the following:
The explanation of RUDDER is still not complete or detailed enough to appreciate Align-RUDDER's relative strengths (and weaknesses), and thus the contributions of the paper.
The explanation of Align-RUDDER needs significant work wrt both the layering and quality of explanation, it is just not clear enough. Read the first sentence in (V) of section 3, does it make sense? There are many more examples of poorly crafted sentences. Perhaps more significantly, it just doesn't come together to explain the technique clearly. In my opinion it still needs to be completely revised.
In addition, following up on my concern with figure 1, I also take issue with the "Q-functions are step-functions statement", and the characterization around figure 1 that RUDDER makes all EFRs zero... These are not the correct characterizations. RUDDER is representing the EFR with a (sparse) set of reward advantage events through reward redistribution, and as such, the details around RUDDER become very important--perhaps with better "event detection" regularization, the LSTM approach is far more effective than the Align approach, which is constructed in a somewhat ad-hoc manner with a diverse set of tools.
Based on these considerations, the paper is still in need of substantial revision, and I have lowered my score. |
ICLR | Title
Align-RUDDER: Learning From Few Demonstrations by Reward Redistribution
Abstract
Reinforcement Learning algorithms require a large number of samples to solve complex tasks with sparse and delayed rewards. Complex tasks are often hierarchically composed of sub-tasks. Solving a sub-task increases the return expectation and leads to a step in the Q-function. RUDDER identifies these steps and then redistributes reward to them, thus immediately giving reward if sub-tasks are solved. Since the delay of rewards is reduced, learning is considerably sped up. However, for complex tasks, current exploration strategies struggle with discovering episodes with high rewards. Therefore, we assume that episodes with high rewards are given as demonstrations and do not have to be discovered by exploration. Unfortunately, the number of demonstrations is typically small and RUDDER’s LSTM as a deep learning model does not learn well on these few training samples. Hence, we introduce Align-RUDDER, which is RUDDER with two major modifications. First, Align-RUDDER assumes that episodes with high rewards are given as demonstrations, replacing RUDDER’s safe exploration and lessons replay buffer. Second, we substitute RUDDER’s LSTM model by a profile model that is obtained from multiple sequence alignment of demonstrations. Profile models can be constructed from as few as two demonstrations. Align-RUDDER uses reward redistribution to speed up learning by reducing the delay of rewards. Align-RUDDER outperforms competitors on complex artificial tasks with delayed rewards and few demonstrations. On the MineCraft ObtainDiamond task, Align-RUDDER is able to mine a diamond, though not frequently.
1 INTRODUCTION
Reinforcement learning algorithms struggle with learning complex tasks that have sparse and delayed rewards (Sutton & Barto, 2018; Rahmandad et al., 2009; Luoma et al., 2017). For delayed rewards, temporal difference (TD) suffers from vanishing information (Arjona-Medina et al., 2019). On the other hand, Monte Carlo (MC) has high variance since it must average over all possible futures (Arjona-Medina et al., 2019). Monte-Carlo Tree Search (MCTS), used for Go and chess, can handle delayed and rare rewards since it has a perfect environment model (Silver et al., 2016; 2017). RUDDER (Arjona-Medina et al., 2019; 2018) has been shown to excel in model-free learning of policies when only sparse and delayed rewards are given. RUDDER requires episodes with high rewards to store them in its lessons replay buffer for learning a reward redistribution model like an LSTM network. However, for complex tasks, current exploration strategies find episodes with high rewards only after an incommensurately long time. Humans and animals obtain high reward episodes by teachers, role models, or prototypes. Along this line, we assume that episodes with high rewards are given as demonstrations. Since generating demonstrations is often tedious for humans and time-consuming for exploration strategies, typically, only a few demonstrations are available. However, RUDDER’s LSTM (Hochreiter, 1991; Hochreiter & Schmidhuber, 1997a) as a deep learning method requires many examples for learning. Therefore, we introduce Align-RUDDER, which replaces RUDDER’s LSTM with a profile model obtained from multiple sequence alignment (MSA) of the demonstrations. Profile models are well known in bioinformatics. They are used to score new sequences according to their sequence similarity to the aligned sequences. Like RUDDER, Align-RUDDER performs reward redistribution using an alignment model, which considerably speeds up learning even if only a few demonstrations are available.
Our main contributions are:
• We suggest a reinforcement algorithm that works well for sparse and delayed rewards, where standard exploration fails but few demonstrations with high rewards are available.
• We adopt multiple sequence alignment from bioinformatics to construct a reward redistribution technique that works with few demonstrations.
• We propose a method that uses alignment techniques and reward redistribution for identifying sub-goals and sub-tasks which in turn allow for hierarchical reinforcement learning.
2 REVIEW OF RUDDER
Basic insight: Q-functions for complex tasks are step functions. Complex tasks are typically composed of sub-tasks. Therefore, the Q-function of an optimal policy resembles a step function. The Q-function is the expected future return and it increases (i.e., makes a step) when a sub-task is completed. Identifying large steps in the Q-function speeds up learning since it allows (i) increasing the return by performing actions that cause the step and (ii) sampling episodes with a larger return for learning.
An approximation to the Q-function must predict the expected future return for every state-action pair. However, a Q-function that resembles a step-function is mostly constant. Therefore predictions are only necessary at the steps. We have to identify the relevant state-actions that cause the steps and then predict the size of the steps. An LSTM network (Hochreiter, 1991; Hochreiter & Schmidhuber, 1995; 1997a;b) can identify relevant state-actions that open the input gate to store the size of the steps in the memory cells. Consequently, LSTM only updates its states and changes its return prediction when a new relevant state-action pair is observed. Therefore, both the change of the prediction and opening input gates indicate Q-function steps through an LSTM network that predicts the return of an episode.
Reward Redistribution. We consider episodic Markov decision processes (MDPs), i.e., the reward is only given once at the end of the sequence. The Q-function is assumed to be a step function, that is, the task can be decomposed into sub-tasks (see previous paragraph). Reward redistribution aims at giving the differences in the Q-function of an optimal policy as a new immediate reward. Since the Q-function of an optimal policy is not known, we approximate it by predicting the expected return by an LSTM network or by an alignment model in this work. The differences in predictions determine the reward redistribution. The prediction model will first identify the largest steps in the Q-function as they decrease the prediction error most. Fortunately, just identifying the largest steps even with poor predictions speeds up learning considerably. See Figure 1 for a description of the reward redistribution.
Learning methods based on reward redistribution. The redistributed reward serves as reward for a subsequent learning method: (A) The Q-values can be directly estimated (Arjona-Medina et al., 2019), which is used in the experiments for the artificial tasks and BC pre-training for MineCraft. (B) Redistributed rewards can serve for learning with policy gradients like Proximal Policy Optimization (PPO) (Schulman et al., 2018), which is used in the MineCraft experiments. (C) Redistributed rewards can serve for temporal difference learning like Q-learning (Watkins, 1989).
LSTM models for reward redistribution. RUDDER uses an LSTM model for predicting the future return. The reward redistribution is the difference between two subsequent predictions. If a state-action pair increases the prediction of the return, then it is immediately rewarded. Using state-action sub-sequences (s, a)0:t = (s0, a0, . . . , st, at), the redistributed reward is Rt+1 = g((s, a)0:t) − g((s, a)0:t−1), where g is an LSTM model that predicts the return of the episode. The LSTM model first learns to approximate the largest steps of the Q-function, since they reduce the prediction error the most.
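The following minimal Python sketch illustrates this prediction-difference view of reward redistribution; the return predictor g and its calling convention are assumptions made for illustration, not RUDDER's actual implementation.

```python
import numpy as np

def redistribute_reward(g, episode):
    """Redistribute the episodic return along one episode.

    g:       callable mapping a state-action prefix (s, a)_{0:t} to a scalar
             return prediction (e.g. a trained LSTM); hypothetical interface.
    episode: list of (state, action) pairs.
    Returns R with R[t] = g((s, a)_{0:t}) - g((s, a)_{0:t-1}).
    """
    preds = np.array([g(episode[:t + 1]) for t in range(len(episode))])
    # Differences of consecutive return predictions; the prediction of the
    # empty prefix is taken to be zero.
    return np.diff(preds, prepend=0.0)
```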
3 ALIGN-RUDDER: RUDDER WITH FEW DEMONSTRATIONS
In bioinformatics, sequence alignment identifies similarities between biological sequences to determine their evolutionary relationship (Needleman & Wunsch, 1970; Smith & Waterman, 1981). The result of the alignment of multiple sequences is a profile model. The profile model is a consensus sequence, a frequency matrix, or a Position-Specific Scoring Matrix (PSSM) (Stormo et al., 1982). New sequences can be aligned to a profile model and receive an alignment score that indicates how well the new sequences agree to the profile model.
Align-RUDDER uses such alignment techniques to align two or more high-return demonstrations. For the alignment, we assume that the demonstrations follow the same underlying strategy, therefore they are similar to each other, analogous to being evolutionarily related. If the agent generates a state-action sequence (s, a)0:t−1, then this sequence is aligned to the profile model g giving a score g((s, a)0:t−1). The next action of the agent extends the state-action sequence by one state-action pair (st, at). The extended sequence (s, a)0:t is also aligned to the profile model g giving another score g((s, a)0:t). The redistributed reward Rt+1 is the difference of these scores: Rt+1 = g((s, a)0:t)− g((s, a)0:t−1) (see Eq. (1)). This difference indicates how much of the return is gained or lost by adding another sequence element.
Align-RUDDER scores how close an agent follows an underlying strategy, which has been extracted by the profile model. Similar to the LSTM model, we identify the largest steps in the Q-function via relevant events determined by the profile model. Therefore, redistributing the reward by sequence alignment fits into the RUDDER framework with all its theoretical guarantees. RUDDER’s theory for reward redistribution is valid for LSTM, other recurrent networks, attention mechanisms, or sequence and profile models.
Advantages of alignment compared to LSTM. Learning an LSTM model is severely limited when very few demonstrations are available. First, LSTM is known to require a large number of samples to generalize to new sequences. In contrast, sequence alignment requires only two examples to generalize well as known from bioinformatics. Second, expert demonstrations have high rewards.
Therefore random demonstrations with very low rewards have to be generated. LSTM does not generalize well when only these extreme reward cases can be observed in the training set. In contrast, sequence alignment only uses examples that are closely related; that is, they belong to the same category (expert demonstrations).
Reward Redistribution by Sequence Alignment. The new reward redistribution approach consists of five steps, see Fig. 3: (I) Define events to turn episodes of state-action sequences into sequences of events. (II) Determine an alignment scoring scheme, so that relevant events are aligned to each other. (III) Perform a multiple sequence alignment (MSA) of the demonstrations. (IV) Compute the profile model like a PSSM. (V) Redistribute the reward: Each sub-sequence τt of a new episode τ is aligned to the profile. The redistributed reward Rt+1 is proportional to the difference of scores S based on the PSSM given in step (IV), i.e. Rt+1 ∝ S(τt)− S(τt−1). In the following, the five steps of Align-RUDDER’s reward redistribution are outlined. For the interested reader, each step is detailed in Sec. A.3 in the appendix. Finally, in Sec. A.7.3 in the appendix, we illustrate these five steps on the example of Minecraft.
(I) Defining Events. Instead of states, we consider differences of consecutive states to detect a change caused by an important event like achieving a sub-goal. An event is defined as a cluster of state differences. We use similarity-based clustering like affinity propagation (AP) (Frey & Dueck, 2007). If states are only enumerated, we suggest to use the “successor representation” (Dayan, 1993) or “successor features” (Barreto et al., 2017). We use the demonstrations combined with state-action sequences generated by a random policy to construct the successor representation.
A sequence of events is obtained from a state-action sequence by mapping states s to its cluster identifier e (the event) and ignoring the actions. Alignment techniques from bioinformatics assume sequences composed of a few events, e.g. 20 events. If there are too many events, good fitting alignments cannot be distinguished from random alignments. This effect is known in bioinformatics as “Inconsistency of Maximum Parsimony” (Felsenstein, 1978).
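A minimal sketch of step (I) is given below; it assumes vector-valued states and uses scikit-learn's affinity propagation on consecutive-state differences, which is one possible instantiation of the clustering described above (function and variable names are our own).

```python
import numpy as np
from sklearn.cluster import AffinityPropagation

def states_to_events(state_sequences):
    """Cluster consecutive-state differences into events (step I).

    state_sequences: list of arrays, each of shape (T_i, d).
    Returns one event-id sequence per input sequence.
    """
    # Differences of consecutive states capture changes such as completing a sub-task.
    diffs = [s[1:] - s[:-1] for s in state_sequences]
    ap = AffinityPropagation(random_state=0).fit(np.vstack(diffs))
    events, i = [], 0
    for d in diffs:
        events.append(ap.labels_[i:i + len(d)])  # cluster id serves as event id
        i += len(d)
    return events
```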
(II) Determining the Alignment Scoring System. A scoring matrix S with entries si,j determines the score for aligning event i with j. A priori, we only know that a relevant event should be aligned to itself but not to other events. Therefore, we set si,j = 1/pi for i = j and si,j = α for i ≠ j. Here, pi is the relative frequency of event i in the demonstrations. α is a hyper-parameter, which is typically a small negative number. This scoring scheme encourages alignment of rare events, for which pi is small. For more details see Appendix Sec. A.3.
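A possible implementation of this scoring scheme, assuming events are encoded as integer ids, could look as follows (the helper name and the guard for unseen events are our own choices):

```python
import numpy as np

def scoring_matrix(event_seqs, n_events, alpha=-1.0):
    """Diagonal scores 1/p_i (favouring rare events), constant off-diagonal alpha."""
    counts = np.bincount(np.concatenate(event_seqs), minlength=n_events).astype(float)
    p = counts / counts.sum()                        # relative event frequencies
    S = np.full((n_events, n_events), alpha)
    np.fill_diagonal(S, 1.0 / np.maximum(p, 1e-12))  # guard against unseen events
    return S
```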
(III) Multiple sequence alignment (MSA). An MSA algorithm maximizes the sum of all pairwise scores S_MSA = ∑_{i,j, i<j} ∑_{t=0}^{L} s_{i,j,t_i,t_j,t} in an alignment, where s_{i,j,t_i,t_j,t} is the score at alignment column t for aligning the event at position t_i in sequence i to the event at position t_j in sequence j. L ≥ T is the alignment length, since gaps make the alignment longer than the length of each sequence. We use ClustalW (Thompson et al., 1994) for MSA. MSA constructs a guiding tree by agglomerative hierarchical clustering of pairwise alignments between all demonstrations. This guiding tree allows identifying multiple strategies. For more details see Appendix Sec. A.3.
(IV) Position-Specific Scoring Matrix (PSSM) and MSA profile model. From the alignment, we construct a profile model as a) column-wise event probabilities and b) a PSSM (Stormo et al., 1982). The PSSM is a column-wise scoring matrix to align new sequences to the profile model. More details are given in Appendix Sec. A.3.
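The sketch below shows one way to derive a profile (column-wise event frequencies) and a simplified log-odds PSSM from an MSA; it uses pseudo-counts instead of the exact Karlin–Altschul normalization detailed in Appendix Sec. A.3 and is only meant to convey the structure of the computation.

```python
import numpy as np

def profile_and_pssm(msa, n_events, p, pseudo=1e-3):
    """Profile (column-wise event frequencies) and a simplified log-odds PSSM.

    msa: array of shape (n_sequences, L) holding event ids, gaps encoded as -1.
    p:   background event probabilities of shape (n_events,).
    """
    L = msa.shape[1]
    q = np.zeros((n_events, L))
    for t in range(L):
        col = msa[:, t]
        col = col[col >= 0]                          # ignore gap positions
        q[:, t] = np.bincount(col, minlength=n_events) + pseudo
        q[:, t] /= q[:, t].sum()
    pssm = np.log(q / p[:, None])                    # log-odds against background
    return q, pssm
```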
(V) Reward Redistribution. The reward redistribution is based on the profile model. A sequence τ = e_{0:T} (e_t is the event at position t) is aligned to the profile, which gives the score S(τ) = ∑_{l=0}^{L} s_{l,t_l}. Here, s_{l,t_l} is the alignment score for event e_{t_l} at position l in the alignment. Alignment gaps are columns to which no event was aligned; they have t_l = T + 1 with gap penalty s_{l,T+1}. If τ_t = e_{0:t} is the prefix sequence of τ of length t + 1, then the reward redistribution R_{t+1} for 0 ≤ t ≤ T is
R_{t+1} = (S(τ_t) − S(τ_{t−1})) C = g((s, a)_{0:t}) − g((s, a)_{0:t−1}),   R_{T+2} = G̃_0 − ∑_{t=0}^{T} R_{t+1},   (1)

where C = E_demo[G̃_0] / E_demo[∑_{t=0}^{T} S(τ_t) − S(τ_{t−1})] with S(τ_{−1}) = 0. The original return of the sequence τ is G̃_0 = ∑_{t=0}^{T} R̃_{t+1} and the expectation of the return over demonstrations is E_demo. The constant C scales R_{t+1} to the range of G̃_0. R_{T+2} is the correction of the redistributed reward (Arjona-Medina et al., 2019), with zero expectation for demonstrations: E_demo[R_{T+2}] = 0. Since τ_t = e_{0:t} and e_t = f(s_t, a_t), we can set g((s, a)_{0:t}) = S(τ_t) C. We ensure strict return equivalence (Arjona-Medina et al., 2019) by G_0 = ∑_{t=0}^{T+1} R_{t+1} = G̃_0. The redistributed reward depends only on the past: R_{t+1} = h((s, a)_{0:t}).
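As an illustration of Eq. (1), the following sketch computes the redistributed reward from alignment scores of prefix sequences; profile_score stands for aligning a prefix to the profile model and is a hypothetical interface, and C is assumed to be estimated once from the demonstrations.

```python
import numpy as np

def align_redistribute(profile_score, events, G0, C):
    """Eq. (1): R_{t+1} = (S(tau_t) - S(tau_{t-1})) * C, plus the correction R_{T+2}.

    profile_score: callable scoring the alignment of an event prefix to the
                   profile model (hypothetical interface).
    events:        event sequence e_{0:T} of one episode.
    G0:            original episodic return of this episode.
    C:             scaling constant estimated from the demonstrations.
    """
    scores = np.array([profile_score(events[:t + 1]) for t in range(len(events))])
    R = np.diff(scores, prepend=0.0) * C
    correction = G0 - R.sum()          # ensures strict return equivalence
    return np.append(R, correction)
```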
Sub-tasks. The reward redistribution identifies sub-tasks as alignment positions with high redistributed rewards. These sub-tasks are indicated by high scores s in the PSSM. Reward redistribution also determines the terminal states of sub-tasks since it assigns rewards for solving the sub-tasks. However, reward redistribution and Align-RUDDER cannot guarantee that the redistributed reward is Markov. For redistributed Markov reward, options (Sutton et al., 1999), MAXQ (Dietterich, 2000), or recursive option composition (Silver & Ciosek, 2012) can be used.
Higher Order Markov Reward Redistributions. Align-RUDDER may lead to higher-order Markov redistribution. Corollary 1 in the appendix states that the optimality criterion from Theorem 2 in Arjona-Medina et al. (2019) also holds for higher-order Markov reward redistributions: if the expected redistributed higher-order Markov reward is the difference of Q-values, then the redistribution is optimal and there is no delayed reward. Furthermore, the optimal policies are the same as for the original problem. This corollary is the motivation for redistributing the reward to the steps in the Q-function. In the Appendix, Corollary 2 states that under a condition, an optimal higher-order reward redistribution can be expressed as the difference of Q-values.
4 EXPERIMENTS
Align-RUDDER is compared on three artificial tasks with sparse & delayed rewards and few demonstrations to Behavioral Cloning with Q-learning (BC+Q), Soft Q Imitation Learning (SQIL) (Reddy et al., 2020), RUDDER (LSTM), and Deep Q-learning from Demonstrations (DQfD) (Hester et al., 2018). GAIL (Ho & Ermon, 2016) failed to solve the two artificial tasks, as reported previously for similar tasks (Reddy et al., 2020). Then, we test Align-RUDDER on the complex MineCraft ObtainDiamond task with episodic rewards (Guss et al., 2019b). All experiments use finite time MDPs with γ = 1 and episodic reward. More details are in Appendix Sec. A.6.
Alignment vs LSTM in 1D key-chest environment. We use a 1D key-chest environment to show the effectiveness of sequence alignment in a low data regime compared to an LSTM model. The agent has to collect the key and then open the chest to get a positive reward at the last timestep. See Appendix Fig. A.9 for a schematic representation of the environment. As the key-events (important state-action pairs) in this environment are known, we can compute the key-event detection rate of
a reward redistribution model. A key event is detected if the redistributed reward of an important state-action pair is larger than the average redistributed reward in the sequence. We train the reward redistribution models with 2, 5, and 10 training episodes and test on 1000 test episodes, averaged over ten trials. Align-RUDDER significantly outperforms LSTM (RUDDER) for detecting these key events in all cases, with an average key-event detection rate of 0.96 for sequence alignment vs. 0.46 for the LSTM models over all dataset sizes. See Appendix Fig. A.10 for the detailed results.
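The key-event detection rate used here can be computed as in the following sketch (the interface is our own, assuming the positions of the true key events are known for the 1D key-chest environment):

```python
import numpy as np

def key_event_detection_rate(redistributed, key_steps):
    """Fraction of known key time steps whose redistributed reward exceeds the
    average redistributed reward of the episode."""
    if len(key_steps) == 0:
        return 0.0
    redistributed = np.asarray(redistributed)
    hits = redistributed[list(key_steps)] > redistributed.mean()
    return float(hits.mean())
```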
Artificial tasks (I) and (II). They are variations of the gridworld rooms example (Sutton et al., 1999), where cells are the MDP states. In our setting, the states do not have to be time-aware for ensuring stationary optimal policies, but the unobserved used-up time introduces a random effect. The grid is divided into rooms. The agent’s goal is to reach a target from an initial state with the fewest steps. It has to cross different rooms, which are connected by doors, except for the first room, which is only connected to the second room by a teleportation portal. The portal is introduced to avoid BC initialization alone, solving the task. It enforces that going to the portal entry cells is learned when they are at positions not observed in demonstrations. At every location, the agent can move up, down, left, right. The state transitions are stochastic. An episode ends after T = 200 time steps. Suppose the agent arrives at the target. In that case, it goes into an absorbing state where it stays until T = 200 without receiving further rewards. The reward is only given at the end of the episode. Demonstrations are generated by an optimal policy with a 0.2 exploration rate.
The five steps of Align-RUDDER’s reward redistribution are: (1) Events are clusters of states obtained by Affinity Propagation using as similarity the successor representation based on demonstrations. (2) The scoring matrix is obtained according to (II), using = 0 and setting all off-diagonal values of the scoring matrix to −1. (3) ClustalW is used for the MSA of the demonstrations with zero gap penalties and no biological options. (4) The MSA supplies a profile model and a PSSM as in (IV). (5) Sequences generated by the agent are mapped to sequences of events according to (I). The reward is redistributed via differences of profile alignment scores of consecutive sub-sequences according to Eq. (1) using the PSSM. The reward redistribution determines sub-tasks like doors or portal arrival. The sub-tasks partition the Q-table into sub-tables that represent a sub-agent. However, we optimize a single Q-table in these experiments. Defining sub-tasks has no effect on learning in the tabular case.
All compared methods learn a Q-table and use an -greedy policy with = 0.2. The Q-table is initialized by behavioral cloning (BC). The state-action pairs which are not initialized since they are not visited in the demonstrations get an initialization by drawing a sample from a normal distribution. Align-RUDDER learns the Q-table via RUDDER’s Q-value estimation (learning method (A) from
Sec. 2). For BC+Q, RUDDER (LSTM), SQIL, and DQfD, a Q-table is learned by Q-learning. Hyperparameters are selected via grid search using the same amount of time for each method. For different numbers of demonstrations, performance is measured by the number of episodes to achieve 80% of the average return of the demonstrations. A Wilcoxon rank-sum test determines the significance of performance differences between Align-RUDDER and the other methods.
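A sketch of this significance test is given below; the per-trial episode counts are hypothetical numbers and scipy's rank-sum test is used as one concrete realization of the Wilcoxon rank-sum test.

```python
from scipy.stats import ranksums

# Episodes needed to reach 80% of the demonstration return, one entry per trial
# (hypothetical numbers for illustration only).
align_rudder_trials = [120, 135, 110, 140, 128]
baseline_trials = [560, 610, 580, 595, 640]

stat, p_value = ranksums(align_rudder_trials, baseline_trials)
print(f"rank-sum statistic: {stat:.2f}, p-value: {p_value:.3g}")
```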
Task (I) environment is a 12×12 gridworld with four rooms. The target is in room #4, and the start is in room #1 with 20 portal entry locations. The state contains the portal entry for each episode. Fig. 5 shows the number of episodes required for achieving 80% of the average reward of the demonstrations for different numbers of demonstrations. Results are averaged over 100 trials. Align-RUDDER significantly outperforms all other methods for ≤ 10 demonstrations (p-values < 10^{-10}). Task (II) is a 12×24 gridworld with eight rooms: target in room #8, and start in room #1 with 20 portal entry locations. Fig. 5 shows the results with settings as in Task (I). Align-RUDDER significantly outperforms all other methods for ≤ 10 demonstrations (p-values < 10^{-19}). We also conduct an ablation study on the performance of Align-RUDDER while changing various parameters, such as environment stochasticity (see Sec. A.6.4) and the number of clusters (see Sec. A.6.5).
MineCraft. We further test Align-RUDDER on MineCraft ObtainDiamond task from the MineRL dataset (Guss et al., 2019b). We do not use intermediate rewards given by achieving subgoals from the challenge, since Align-RUDDER is supposed to discover such sub-goals automatically via reward redistribution. We only give a reward for mining the diamond. This requires resource gathering and tool building in a hierarchical way. To the best of our knowledge, no pure learning method (sub-goals are also learned) has mined a diamond yet (Scheller et al., 2020). The dataset contains demonstrations which are insufficient to directly learn a single policy (117 demonstrations, 67 mined a diamond).
Implementation: (1) A state consists of visual input and an inventory (incl. equip state). Both inputs are normalized to the same information, that is, the same number of components and the same variance. We cluster the differences of consecutive states (Arjona-Medina et al., 2019). Very large clusters are removed and small ones merged, giving 19 clusters corresponding to events, which are characterized by inventory changes. Finally, demonstrations are mapped to sequences of events. (2) The scoring matrix is computed according to (II). (3) The ten shortest demonstrations that obtained a diamond are aligned by ClustalW with zero gap penalties and no biological options. (4) The multiple alignment gives a profile model and a PSSM. (5) The reward is redistributed via differences of profile alignment scores of consecutive sub-sequences according to Eq. (1) using the PSSM. Based on the reward redistribution, we define sub-goals. Sub-goals are identified as profile model positions that obtain an average redistributed reward above a threshold for the demonstrations. Demonstration sub-sequences between sub-goals are considered as demonstrations for the sub-tasks. New sub-sequences generated by the agent are aligned to the profile model to determine whether a sub-goal is achieved. The redistributed reward between two sub-goals is given at the end of the sub-sequence; therefore, the sub-tasks also have an episodic reward. Fig. 4 shows how sub-goals are identified. Sub-agents are pre-trained on the demonstrations for the sub-tasks using BC, and further trained in the environment using Proximal Policy Optimization (PPO) (Schulman et al., 2018). BC pre-training corresponds to RUDDER’s Q-value estimation (learning method (A) from above), while PPO corresponds to RUDDER’s PPO training (learning method (B) from above).
Our main agent can perform all actions but additionally can execute sub-agents and learns via the redistributed reward. The main agent corresponds to and is treated like a Manager module (Vezhnevets et al., 2017). The main agent is initialized by executing sub-agents according to the alignment but can deviate from this strategy. When a sub-agent successfully completes its task, the main agent executes the next sub-agent according to the alignment. More details can be found in Appendix Sec. A.7.1. Using only ten demonstrations, Align-RUDDER is able to learn to mine a diamond. A diamond is obtained in 0.1% of the cases. With 0.5 success probability for each of the 31 extracted sub-tasks (skilled agents, not random agents), the resulting success rate for mining the diamond would be 0.5^31 ≈ 4.66 × 10^{-10}. Tab. 1 shows a comparison of methods on the MineCraft MineRL dataset by the maximum item score (Milani et al., 2020). Results are taken from (Milani et al., 2020), in particular from Figure 2, and completed by (Skrynnik et al., 2019; Kanervisto et al., 2020; Scheller et al., 2020). Align-RUDDER was not evaluated during the MineCraft MineRL challenge, but it follows the timesteps limit (8 million) imposed by the challenge. Align-RUDDER did not receive the intermediate rewards provided by the challenge that hint at sub-tasks, and thus tries to solve a more difficult task. Recently, ForgER++ (Skrynnik et al., 2020) was able to mine a diamond in 0.0667% of the cases. We do not include it in Table 1 as it did not have any limitations on the number of timesteps. Also, ForgER++ generates sub-goals for MineCraft using a heuristic, while Align-RUDDER uses redistributed reward to automatically obtain sub-goals.
Analysis of MineCraft Agent Behaviour. For each agent and its sub-task, we estimate the success rate and its improvement during fine-tuning by averaging over return of multiple runs (see Fig. 6). For earlier sub-tasks, the agent has a relatively higher sub-task success rate. This also corresponds to the agent having access to much more data for earlier sub-tasks. During learning from demonstrations, much less data is available for training for later sub-tasks, as not all expert demonstrations achieve the later tasks. During online training using reinforcement learning, an agent has to successfully complete all earlier sub-tasks to generate trajectories for later sub-tasks. This is exponentially difficult. Lack of demonstrations and difficulty of the learned agent to generate data for later sub-tasks leads to degradation of the success rate in MineCraft.
5 RELATED WORK
Learning from demonstrations has been widely studied over the last 50 years (Billard et al., 2008). An example is imitation learning, which uses supervised techniques when the number of demonstrations is large enough (Michie et al., 1990; Pomerleau, 1991; Michie & Camacho, 1994; Schaal, 1996; Kakade & Langford, 2002). However, policies trained with imitation learning tend to drift away from demonstration trajectories due to a distribution shift (Ross & Bagnell, 2010). This effect can be mitigated (III et al., 2009; Ross & Bagnell, 2010; Ross et al., 2011; Judah et al., 2014; Sun et al., 2017;
2018). Many approaches use demonstrations for initialization, e.g. of policy networks (Taylor et al., 2011; Silver et al., 2016), value function networks (Hester et al., 2017; 2018), both networks (Zhang & Ma, 2018; Nair et al., 2018), or an experience replay buffer (Hosu & Rebedea, 2016). Beyond initialization, demonstrations are used to define constraints (Kim et al., 2013), generate sub-goals (Eysenbach et al., 2019), enforce regularization (Reddy et al., 2020), guide exploration (Subramanian et al., 2016; Jing et al., 2019), or shape rewards (Judah et al., 2014; Brys et al., 2015; Suay et al., 2016). Demonstrations may serve for inverse reinforcement learning (Ng & Russell, 2000; Abbeel & Ng, 2004; Ho & Ermon, 2016), which aims at learning a (non-sparse) reward function that best explains the demonstrations. Learning reward functions requires a large number of demonstrations (Syed & Schapire, 2007; Ziebart et al., 2008; Silva et al., 2019). Some approaches rely on few-shot or/and meta learning (Duan et al., 2017; Finn et al., 2017; Zhou et al., 2020). However, few-shot and meta learning demand a large set of auxiliary tasks or prerecorded data. Concluding, most methods that learn from demonstrations rely on the availability of many demonstrations (Khardon, 1999; Lopes et al., 2009), in particular, if using deep learning methods (Bengio & Lecun, 2007; Lakshminarayanan et al., 2016). Some methods can learn on few demonstrations like Soft Q Imitation Learning (SQIL) (Reddy et al., 2020), Generative Adversarial Imitation Learning (GAIL) (Ho & Ermon, 2016), and Deep Q-learning from Demonstrations (DQfD) (Hester et al., 2018).
6 DISCUSSION AND CONCLUSION
Discussion. Firstly, reward redistributions do not change the optimal policies (see Theorem 1 in Appendix). Thus, suboptimal reward redistributions due to alignment errors or choosing events that are non-essential for reaching the goal might not speed up learning, but also do not change the optimal policies. Secondly, while Align-RUDDER can speed up learning even in complex environments, the resulting performance depends on the quality of the alignment model. A low-quality alignment model can arise from multiple factors, one of which is having a large number (≫ 20) of distinct events. Clustering can be used to reduce the number of events, which could also lead to a low-quality alignment model if too many relevant events are clustered together. While the optimal policy is not changed by poor demonstration alignment, the benefit of employing reward redistribution based on it diminishes. Thirdly, the alignment could fail if the demonstrations have different underlying strategies, i.e., no events are common in the demonstrations. We assume that the demonstrations follow the same underlying strategy, therefore they are similar to each other and can be aligned. However, if no underlying strategy exists, then identifying those relevant events via alignment, which should receive high redistributed rewards, may fail. In this case, reward is given at the sequence end, when the redistributed reward is corrected, which leads to an episodic reward without reducing the delay of the rewards or speeding up learning.
Conclusions. We have introduced Align-RUDDER to solve highly complex tasks with delayed and sparse reward from few demonstrations. We have shown experimentally that Align-RUDDER outperforms state of the art methods designed for learning from demonstrations in the regime of few demonstrations. On the MineCraft ObtainDiamond task, Align-RUDDER is, to the best of our knowledge, the first pure learning method to mine a diamond.
ETHICS STATEMENT
Impact on ML and related scientific fields. Our research has the potential to positively impact a wide variety of fields of life due to its general applicability. Most importantly, it has the potential to reduce the cost for training and deploying agents in real world applications and therefore enable systems that have not been possible until now.
However, any new development in machine learning can be applied for good or for bad. Our system can be used for medical applications where it can save lives, but it could also be used in malevolent systems. It is society that decides how new technology is employed. However, we as scientists have to inform society and decision makers about our technologies. We have to show the limits of our technology, give ideas of possible applications, and point out possible misuse or erroneous operation of our new technology.
Impact on society. A big danger is that users rely too much on our new approach and use it without reflecting on the outcomes. For example, in medical treatment decisions, doctors may rely on the technical system and push the responsibility toward the machine: “The machine suggested this treatment, therefore it is not my fault”. Another example is self-driving cars, where we see that drivers become more careless even if they are supposed to pay attention and keep their hands on the steering wheel. They trust too much in the technology, even if the technology does not justify this trust or is not mature.
Finally, our method can be deployed in companies for job automation. Therefore, there is the danger that some people lose their jobs, particularly those whose work is to perform predictable and repetitive tasks. An often-used example is the taxi driver who would lose their job because of self-driving cars. The same holds for many jobs in the production industry, where automation can replace workers. However, every wave of industrialization has led to the loss of some jobs, while new jobs have been created.
Consequences of failures of the method. Depending on the application area, a failure of this method might be of lesser concern, such as a failed execution of a computer program. If our method is employed within a larger automation system, a failure can result in damages such as a car accident. However, this holds for almost all reinforcement learning methods, and usage and testing falls within the responsibility of the application area. We note that in this work, the method was only used in computer game environments.
Leveraging of biases in the data and potential discrimination. Our proposed method relies on human demonstrations and thereby human decisions, which are usually strongly biased. As almost all machine learning methods trained on human-influenced data, our method could learn to use and exploit those biases and make similar decisions (Solaiman et al., 2019). Therefore, the responsible use of our method depends on a careful selection of the training data and awareness of the potential biases within those.
REPRODUCIBILITY STATEMENT
Code for experiments on the FourRooms and EightRooms environment is included as supplementary material. The README contains step-by-step instructions to set up an environment and run the experiments. We have specified all the training details, e.g., hyperparameters and how they were chosen, in the Appendix (see Section A.6). We trained 100 replicates for each data point of the first set of experiments; the results are shown in Fig. 5. Using the code in the supplementary material, it is quite easy to reproduce our results for these experiments.
We also include code for the experiments done for MineCraft in the supplementary materials. All the preprocessing steps, hyperparameters and other implementation details are given in the Appendix (See Section A.7).
We also provide a deeper overview of the RUDDER (Arjona-Medina et al., 2019) theory in the Appendix (See Section A.2) as it is important for many design choices in Align-RUDDER.
Finally, a video showcasing the MineCraft agent is also provided as supplementary material.
A APPENDIX
CONTENTS OF THE APPENDIX
A.1 Introduction to the Appendix
A.2 Review Reward Redistribution
A.3 The Five Steps of Align-RUDDER’s Reward Redistribution
A.4 Sequence Alignment
A.5 Extended Related Work
A.6 Artificial Task Experiments
A.6.1 Hyperparameter Selection
A.6.2 Figures
A.6.3 Artificial Task p-values
A.6.4 Stochastic Environments
A.6.5 Changing number of Clusters
A.6.6 Key-Event Detection
A.7 Minecraft Experiments
A.7.1 MineCraft
A.7.2 Related Work and Steps Towards a General Agent
A.7.3 The Five Steps of Align-RUDDER Demonstrated on Minecraft
A.7.4 Implementation of our Algorithm for Minecraft
A.7.5 Policy and Value Network Architecture
A.7.6 Imitation Learning of Sub-Task Agents
A.7.7 Reinforcement Learning on Sub-Task Agents
A.8 Reproducing the Artificial Task Results
A.9 Software Libraries
A.10 Compute
LIST OF FIGURES
A.2 Clusters formed in the FourRooms and EightRooms environment
A.3 Clusters formed in the FourRooms and EightRooms environment
A.4 Clusters formed in the FourRooms and EightRooms environment
A.5 FourRooms and EightRooms environments
A.6 Reward redistribution for the FourRooms and EightRooms environments
A.11 Step (I): Define events and map demonstrations into sequences of events. First, we extract the sequence of states from human demonstrations, transform images into feature vectors using a pre-trained network and transform them into a sequence of consecutive state deltas (concatenating image feature vectors and inventory states). We cluster the resulting state deltas and remove clusters with a large number of members and merge smaller clusters. In the case of demonstrations for the ObtainDiamond task in Minecraft the resulting clusters correspond to obtaining specific resources and items required to solve the task. Then we map the demonstrations to sequences of events.
A.12 Step (II): Construct a scoring matrix using event probabilities from demonstrations for diagonal elements and setting off-diagonal to a constant value. The scores in the diagonal position are proportional to the inverse of the event frequencies. Thus, aligning rare events has higher score. Darker colors signify higher score values.
A.13 Step (III): Perform multiple sequence alignment (MSA) of the demonstrations. The MSA algorithm maximizes the pairwise sum of scores of all alignments. The score of an alignment at each position is given by the scoring matrix. As the off-diagonal entries are negative, the algorithm will always try to align an event to itself, while giving preference to events which give higher scores.
A.14 Step (IV): Compute a position-specific scoring matrix (PSSM). This matrix can be computed using the MSA (Step (III)) and the scoring matrix (Step (II)). Every column entry is for a position from the MSA. The score at a position (column) and for an event (row) depends on the frequency of that event at that position in the MSA. For example, the event in the last position is present in all the sequences, and thus gets a high score at the last position. But it is absent in the remaining positions, and thus gets a score of zero elsewhere.
A.15 Step (V): A new sequence is aligned step by step to the profile model using the PSSM, resulting in an alignment score for each sub-sequence. The redistributed reward is then proportional to the difference of scores of subsequent alignments.
A.16 Conceptual overview of our MineRL agent
A.17 Conceptual architecture of Align-RUDDER MineRL policy and value networks
A.18 Discretization and interpolation of camera angles
A.19 Mapping of clusters to letters
A.20 Trajectory replay given by an exemplary consensus
A.1 INTRODUCTION TO THE APPENDIX
This is the appendix to the paper “Align-RUDDER: Learning from few Demonstrations by Reward Redistribution”. The appendix aims at supporting the main document and provides more detailed information about the implementation of our method for different tasks. The content of this document is summarized as follows:
• Section A.3 describes the five steps of Align-RUDDER’s reward redistribution in more detail. In particular, the scoring systems are described in more detail.
• Section A.4 provides a brief overview of sequence alignment methods and the hyperparameters used in our experiments.
• Section A.6 provides figures and tables to support the results of the experiments in Artificial Tasks (I) and (II).
• Section A.7 explains in detail the experiments conducted in the Minecraft ObtainDiamond task.
A.2 REVIEW REWARD REDISTRIBUTION
Reward redistribution and return decomposition are concepts introduced in RUDDER but also apply to Align-RUDDER as it is a variant of RUDDER. Reward redistribution based on return decomposition eliminates – or at least mitigates – delays of rewards while preserving the same optimal policies. Align-RUDDER is justified by the theory of return decomposition and reward redistribution when using multiple sequence alignment for constructing a reward redistribution model. In this section, we review the concepts of return decomposition and reward redistribution.
Preliminaries. We consider a finite MDP defined by the 5-tuple P = (S, A, R, p, γ), where the state space S and the action space A are sets of finite states s and actions a, and R is the set of bounded rewards r. For a given time step t, the corresponding random variables are S_t, A_t and R_{t+1}. Furthermore, P has transition-reward distributions p(S_{t+1} = s′, R_{t+1} = r | S_t = s, A_t = a), and a discount factor γ ∈ (0, 1], which we keep at γ = 1. A Markov policy π(a | s) is a probability of an action a given a state s. We consider MDPs with finite time horizon or with an absorbing state. The discounted return of a sequence of length T at time t is G_t = ∑_{k=0}^{T−t} γ^k R_{t+k+1}. As usual, the Q-function for a given policy π is q^π(s, a) = E_π[G_t | S_t = s, A_t = a]. E_π[x | s, a] is the expectation of x, where the random variable is a sequence of states, actions, and rewards that is generated with transition-reward distribution p, policy π, and starting at (s, a). The goal is to find an optimal policy π* = argmax_π E_π[G_0] maximizing the expected return at t = 0. We assume that the states s are time-aware (time t can be extracted from each state) in order to assure stationary optimal policies. According to Proposition 4.4.3 in (Puterman, 2005), a deterministic optimal policy π* exists.
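For concreteness, the finite-horizon discounted return G_t can be computed from a reward sequence with a simple backward pass (a generic sketch, not specific to Align-RUDDER):

```python
import numpy as np

def discounted_returns(rewards, gamma=1.0):
    """G_t = sum_{k=0}^{T-t} gamma^k * R_{t+k+1} for every t; rewards[t] is R_{t+1}."""
    G = np.zeros(len(rewards))
    running = 0.0
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        G[t] = running
    return G
```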
Definitions. A sequence-Markov decision process (SDP) is defined as a decision process that has Markov transition probabilities but a reward probability that is not required to be Markov. Two SDPs P̃ and P with different reward probabilities are return-equivalent if they have the same expected return at t = 0 for each policy π, and strictly return-equivalent if they additionally have the same expected return for every episode. Since for every π the expected return at t = 0 is the same, return-equivalent SDPs have the same optimal policies. A reward redistribution is a procedure that —for a given sequence of a delayed reward SDP P̃— redistributes the realization or expectation of its return G̃0 along the sequence. This yields a new SDP P with R as random variable for the redistributed reward and the same optimal policies as P̃: Theorem 1 (Arjona-Medina et al. (2019)). Both the SDP P̃ with delayed reward R̃t+1 and the SDP P with redistributed reward Rt+1 have the same optimal policies.
Proof. The proof can be found in (Arjona-Medina et al., 2019).
The delay of rewards is captured by the expected future rewards κ(m, t−1) at time (t−1). κ is defined as κ(m, t−1) := E_π[∑_{τ=0}^{m} R_{t+1+τ} | s_{t−1}, a_{t−1}], that is, at time (t−1) the expected sum of future rewards from R_{t+1} to R_{t+1+m} but not the immediate reward R_t. A reward redistribution is defined to be optimal, if κ(T − t − 1, t) = 0 for 0 ≤ t ≤ T − 1, which is equivalent to E_π[R_{t+1} | s_{t−1}, a_{t−1}, s_t, a_t] = q̃^π(s_t, a_t) − q̃^π(s_{t−1}, a_{t−1}): Theorem 2 (Arjona-Medina et al. (2019)). We assume a delayed reward MDP P̃, with episodic reward. A new SDP P is obtained by a second order Markov reward redistribution, which ensures that P is return-equivalent to P̃. For a specific π, the following two statements are equivalent:
(I) κ(T − t− 1, t) = 0, i.e. the reward redistribution is optimal,
(II) Eπ [Rt+1 | st−1, at−1, st, at] = q̃π(st, at) − q̃π(st−1, at−1) (2)
An optimal reward redistribution fulfills for 1 ≤ t ≤ T and 0 ≤ m ≤ T − t: κ(m, t−1) = 0.
Proof. The proof can be found in (Arjona-Medina et al., 2019).
This theorem shows that an optimal reward redistribution relies on steps q̃π(st, at)− q̃π(st−1, at−1) of the Q-function. Identifying the largest steps in the Q-function detects the largest rewards that have to be redistributed, which makes the largest progress towards obtaining an optimal reward redistribution.
Corollary 1 (Higher order Markov reward redistribution optimality conditions). We assume a delayed reward MDP P̃ , with episodic reward. A new SDP P is obtained by a higher order Markov reward redistribution. The reward redistribution ensures that P is return-equivalent to P̃ . If for a specific π
Eπ [Rt+1 | st−1, at−1, st, at] = q̃π(st, at) − q̃π(st−1, at−1) (3)
holds, then the higher order reward redistribution Rt+1 is optimal, that is, κ(T − t− 1, t) = 0.
Proof. The proof is just PART (II) of the proof of Theorem 2 in (Arjona-Medina et al., 2019). We repeat it here for completeness.
We assume that
Eπ [Rt+1 | st−1, at−1, st, at] = ht = q̃π(st, at) − q̃π(st−1, at−1) , (4)
where we abbreviate the expected Rt+1 by ht:
Eπ [Rt+1 | st−1, at−1, st, at] = ht . (5)
The expectations E_π[· | s_{t−1}, a_{t−1}] like E_π[R̃_{T+1} | s_{t−1}, a_{t−1}] are expectations over all episodes that contain the state-action pair (s_{t−1}, a_{t−1}) at time t−1. The expectations E_π[· | s_{t−1}, a_{t−1}, s_t, a_t] like E_π[R̃_{T+1} | s_{t−1}, a_{t−1}, s_t, a_t] are expectations over all episodes that contain the state-action pairs (s_{t−1}, a_{t−1}) at time t−1 and (s_t, a_t) at time t. The Q-values are defined as

q̃^π(s_t, a_t) = E_π[∑_{k=0}^{T−t} R̃_{t+k+1} | s_t, a_t] = E_π[R̃_{T+1} | s_t, a_t] ,   (6)
q^π(s_t, a_t) = E_π[∑_{k=0}^{T−t} R_{t+k+1} | s_t, a_t] ,   (7)

which are expectations over all trajectories that contain (s_t, a_t) at time t. Since P̃ is Markov, for q̃^π only the suffix trajectories beginning at (s_t, a_t) enter the expectation.

The definition of κ(m, t−1) for 1 ≤ t ≤ T and 0 ≤ m ≤ T − t was κ(m, t−1) = E_π[∑_{τ=0}^{m} R_{t+1+τ} | s_{t−1}, a_{t−1}]. We have to prove κ(T − t − 1, t) = 0.
First, we consider m = 0 and 1 ≤ t ≤ T, therefore κ(0, t−1) = E_π[R_{t+1} | s_{t−1}, a_{t−1}]. Since the original MDP P̃ has episodic reward, we have r̃(s_{t−1}, a_{t−1}) = E[R̃_t | s_{t−1}, a_{t−1}] = 0 for 1 ≤ t ≤ T. Therefore, we obtain:

q̃^π(s_{t−1}, a_{t−1}) = r̃(s_{t−1}, a_{t−1}) + ∑_{s_t, a_t} p(s_t, a_t | s_{t−1}, a_{t−1}) q̃^π(s_t, a_t)   (8)
                      = ∑_{s_t, a_t} p(s_t, a_t | s_{t−1}, a_{t−1}) q̃^π(s_t, a_t) .

Using this equation we obtain for 1 ≤ t ≤ T:

κ(0, t−1) = E_π[R_{t+1} | s_{t−1}, a_{t−1}]   (9)
          = E_{s_t, a_t}[q̃^π(s_t, a_t) − q̃^π(s_{t−1}, a_{t−1}) | s_{t−1}, a_{t−1}]
          = ∑_{s_t, a_t} p(s_t, a_t | s_{t−1}, a_{t−1}) (q̃^π(s_t, a_t) − q̃^π(s_{t−1}, a_{t−1}))
          = q̃^π(s_{t−1}, a_{t−1}) − ∑_{s_t, a_t} p(s_t, a_t | s_{t−1}, a_{t−1}) q̃^π(s_{t−1}, a_{t−1})
          = q̃^π(s_{t−1}, a_{t−1}) − q̃^π(s_{t−1}, a_{t−1}) = 0 .
Next, we consider the expectation of ∑_{τ=0}^{m} R_{t+1+τ} for 1 ≤ t ≤ T and 1 ≤ m ≤ T − t (for m > 0):

κ(m, t−1) = E_π[∑_{τ=0}^{m} R_{t+1+τ} | s_{t−1}, a_{t−1}]   (10)
          = E_π[∑_{τ=0}^{m} (q̃^π(s_{τ+t}, a_{τ+t}) − q̃^π(s_{τ+t−1}, a_{τ+t−1})) | s_{t−1}, a_{t−1}]
          = E_π[q̃^π(s_{t+m}, a_{t+m}) − q̃^π(s_{t−1}, a_{t−1}) | s_{t−1}, a_{t−1}]
          = E_π[ E_π[∑_{τ=t+m}^{T} R̃_{τ+1} | s_{t+m}, a_{t+m}] | s_{t−1}, a_{t−1} ]
            − E_π[ E_π[∑_{τ=t−1}^{T} R̃_{τ+1} | s_{t−1}, a_{t−1}] | s_{t−1}, a_{t−1} ]
          = E_π[R̃_{T+1} | s_{t−1}, a_{t−1}] − E_π[R̃_{T+1} | s_{t−1}, a_{t−1}] = 0 .

We used that R̃_{t+1} = 0 for t < T.

For the particular cases t = τ + 1 and m = T − t = T − τ − 1 we have

κ(T − τ − 1, τ) = 0 .   (11)

That is exactly what we wanted to prove.
Corollary 1 explicitly states that the optimality criterion ensures an optimal reward redistribution even if the reward redistribution is higher order Markov. For Align-RUDDER we may obtain a higher order Markov reward redistribution due to the profile alignment of the sub-sequences.
Corollary 2 (Higher order Markov reward redistribution optimality representation). We assume a delayed reward MDP P̃ , with episodic reward and that a new SDP P is obtained by a higher order Markov reward redistribution. The reward redistribution ensures that P is strictly return-equivalent to P̃ . We assume that the reward redistribuition is optimal, that is, κ(T − t − 1, t) = 0. If the condition
E_π[∑_{τ=0}^{T−t−1} R_{t+2+τ} | s_t, a_t] = E_π[∑_{τ=0}^{T−t−1} R_{t+2+τ} | s_0, a_0, . . . , s_t, a_t]   (12)
holds, then
Eπ [Rt+1 | st−1, at−1, st, at] = q̃π(st, at) − q̃π(st−1, at−1) . (13)
Proof. By and large, the proof is PART (I) of the proof of Theorem 2 in (Arjona-Medina et al., 2019). We repeat it here for completeness.
We assume that the reward redistribution is optimal, that is,
κ(T − t− 1, t) = 0 . (14)
We abbreviate the expected Rt+1 by ht:
Eπ [Rt+1 | st−1, at−1, st, at] = ht . (15)
In (Arjona-Medina et al., 2019) Lemma A4 is as follows.
Lemma 1. Two strictly return-equivalent SDPs P̃ and P have the same expected return for each start state-action sub-sequence (s_0, a_0, . . . , s_t, a_t), 0 ≤ t ≤ T:
Eπ [ G̃0 | s0, a0, . . . , st, at ] = Eπ [G0 | s0, a0, . . . , st, at] . (16)
The assumptions of Lemma 1 hold for the delayed reward MDP P̃ and the redistributed reward SDP P, since a reward redistribution ensures strictly return-equivalent SDPs. Therefore for a given state-action sub-sequence (s_0, a_0, . . . , s_t, a_t), 0 ≤ t ≤ T:
Eπ [ G̃0 | s0, a0, . . . , st, at ] = Eπ [G0 | s0, a0, . . . , st, at] (17)
with G_0 = ∑_{τ=0}^{T} R_{τ+1} and G̃_0 = R̃_{T+1}. The Markov property of the MDP P̃ ensures that the future reward from t+1 on is independent of the past sub-sequence s_0, a_0, . . . , s_{t−1}, a_{t−1}:

E_π[∑_{τ=0}^{T−t} R̃_{t+1+τ} | s_t, a_t] = E_π[∑_{τ=0}^{T−t} R̃_{t+1+τ} | s_0, a_0, . . . , s_t, a_t] .   (18)
According to Eq. (12), the future reward from t + 2 on is independent of the past sub-sequence s0, a0, . . . , st−1, at−1:
E_π[∑_{τ=0}^{T−t−1} R_{t+2+τ} | s_t, a_t] = E_π[∑_{τ=0}^{T−t−1} R_{t+2+τ} | s_0, a_0, . . . , s_t, a_t] .   (19)
Using these properties we obtain

q̃^π(s_t, a_t) = E_π[∑_{τ=0}^{T−t} R̃_{t+1+τ} | s_t, a_t]   (20)
             = E_π[∑_{τ=0}^{T−t} R̃_{t+1+τ} | s_0, a_0, . . . , s_t, a_t]
             = E_π[R̃_{T+1} | s_0, a_0, . . . , s_t, a_t]
             = E_π[∑_{τ=0}^{T} R̃_{τ+1} | s_0, a_0, . . . , s_t, a_t]
             = E_π[G̃_0 | s_0, a_0, . . . , s_t, a_t]
             = E_π[G_0 | s_0, a_0, . . . , s_t, a_t]
             = E_π[∑_{τ=0}^{T} R_{τ+1} | s_0, a_0, . . . , s_t, a_t]
             = E_π[∑_{τ=0}^{T−t−1} R_{t+2+τ} | s_0, a_0, . . . , s_t, a_t] + ∑_{τ=0}^{t} h_τ
             = E_π[∑_{τ=0}^{T−t−1} R_{t+2+τ} | s_t, a_t] + ∑_{τ=0}^{t} h_τ
             = κ(T − t − 1, t) + ∑_{τ=0}^{t} h_τ
             = ∑_{τ=0}^{t} h_τ .
We used the optimality condition
κ(T − t − 1, t) = E_π[∑_{τ=0}^{T−t−1} R_{t+2+τ} | s_t, a_t] = 0 .   (21)
It follows that
Eπ [Rt+1 | st−1, at−1, st, at] = ht = q̃π(st, at) − q̃π(st−1, at−1) . (22)
This is exactly what we wanted to prove.
This corollary shows that optimal reward redistributions can be expressed as difference of Q-values if Eq. (12) holds. Eq. (12) states that the past can be averaged out. However, there may exist optimal reward redistributions for which Eq. (12) does not hold.
If the reward redistribution is optimal, the Q-values of P are given by qπ(st, at) = q̃π(st, at) − ψπ(st) and therefore P̃ and P have the same advantage function: Theorem 3 (Arjona-Medina et al. (2019)). If the reward redistribution is optimal, then the Q-values of the SDP P are qπ(st, at) = r(st, at) and
q^π(s_t, a_t) = q̃^π(s_t, a_t) − E_{s_{t−1}, a_{t−1}}[q̃^π(s_{t−1}, a_{t−1}) | s_t] = q̃^π(s_t, a_t) − ψ^π(s_t) .   (23)
The SDP P and the original MDP P̃ have the same advantage function.
Proof. The proof can be found in (Arjona-Medina et al., 2019).
For an optimal reward redistribution only the expectation of the immediate reward r(s_t, a_t) = E_π[R_{t+1} | s_t, a_t] must be estimated. This considerably simplifies learning.

Learning methods according to Arjona-Medina et al. (2019). The redistributed reward serves as reward for a subsequent learning method, which can be of Type A, B, or C as described in Arjona-Medina et al. (2019). Type A methods estimate the Q-values. They can be estimated directly according to Eq. (23) assuming an optimal redistribution (Type A variant i). Q-values can be corrected for a non-optimal reward redistribution by additionally estimating κ (Type A variant ii). Q-value estimation can use eligibility traces (Type A variant iii). Type B methods use the redistributed rewards for policy gradients like Proximal Policy Optimization (PPO) (Schulman et al., 2018). Type C methods use TD learning like Q-learning (Watkins, 1989), where immediate and future reward must be drawn together as typically done. For all these learning methods, demonstrations can be used for initialization (e.g. experience replay buffer) or pre-training (e.g. policy network with behavioral cloning). Recently, the convergence of RUDDER learning methods has been proven under commonly used assumptions (Holzleitner et al., 2020).
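As an illustration of Type A (variant i), the sketch below estimates Q-values directly as the average redistributed immediate reward per state-action pair; the episode format is an assumption made for this example.

```python
from collections import defaultdict

def estimate_q_from_redistribution(episodes):
    """Type A (variant i): with an optimal redistribution q(s, a) = r(s, a), so
    Q-values are estimated as the mean redistributed immediate reward per
    state-action pair. episodes: iterable of lists of (state, action, R) triples."""
    sums, counts = defaultdict(float), defaultdict(int)
    for episode in episodes:
        for s, a, r in episode:
            sums[(s, a)] += r
            counts[(s, a)] += 1
    return {sa: sums[sa] / counts[sa] for sa in sums}
```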
Non-optimal reward redistribution and Align-RUDDER. According to Theorem 1, non-optimal reward redistributions do not change the optimal policies. The value κ(T − t− 1, t) measures the remaining delayed reward. The smaller κ is, the faster is the learning process. For Monte Carlo (MC) estimates, smaller κ reduces the variance of the future rewards, and, therefore the variance of the estimation. For temporal difference (TD) estimates, smaller κ reduces the amount of information that has to flow back. Align-RUDDER dramatically reduces the amount of delayed rewards by identifying key events via multiple sequence alignment, to which reward is redistributed. For an episodic MDP, a reward that is redistributed to time t reduces all κ(m, τ) with t 6 τ < T by the expectation of the reward. Therefore, in most cases Align-RUDDER makes κ-values much smaller.
A.3 THE FIVE STEPS OF ALIGN-RUDDER’S REWARD REDISTRIBUTION
The new reward redistribution approach consists of five steps, see Fig. A.1: (I) Define events to turn episodes of state-action sequences into sequences of events. (II) Determine an alignment scoring scheme, so that relevant events are aligned to each other. (III) Perform a multiple sequence alignment (MSA) of the demonstrations. (IV) Compute the profile model and the PSSM. (V) Redistribute the reward: Each sub-sequence τt of a new episode τ is aligned to the profile. The redistributed reward Rt+1 is proportional to the difference of scores S based on the PSSM given in step (IV), i.e. Rt+1 ∝ S(τt)− S(τt−1).
(I) Defining Events. Alignment techniques assume that sequences consist of few symbols, e.g. about 20 symbols, the events. It is crucial to keep the number of events small in order to increase the difference between a random alignment and an alignment of demonstrations. If there are many events, then two demonstrations might have few events that can be matched, which cannot be well distinguished from random alignments. This effect is known in bioinformatics as “Inconsistency of Maximum Parsimony” (Felsenstein, 1978). The events can be the original state-action pairs, clusters thereof, or other representations of state-action pairs, e.g. indicating changes of inventory, health, energy, skills etc. In general, we define events as a cluster of states or state-actions. A sequence of events is obtained from a state-action sequence by substituting states or state-actions by their cluster identifier. In order to cluster states, a similarity measure between them is required. We suggest using the “successor representation” (Dayan, 1993) of the states, which gives a similarity matrix based on how connected two states are given a policy. Successor representations have been used before (Machado et al., 2017; Ramesh et al., 2019) to obtain important events for option learning. For computing the successor representation, we use the demonstrations combined with state-action sequences generated by a random policy. For high-dimensional state spaces “successor features” (Barreto et al., 2017) can be used. We use similarity-based clustering methods like affinity propagation (AP) (Frey & Dueck, 2007). For AP the similarity matrix does not have to be symmetric and the number of clusters need not be known. State-action pairs (s, a) are mapped to events e.
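For a small, fully enumerated state space, the successor representation can be obtained in closed form from an empirical policy-induced transition matrix, as in the following sketch (γ < 1 is used here for invertibility; estimating P_π from demonstrations plus random-policy rollouts is assumed):

```python
import numpy as np

def successor_representation(P_pi, gamma=0.95):
    """Successor representation SR = (I - gamma * P_pi)^{-1} for a policy-induced
    state-transition matrix P_pi (n_states x n_states). Rows of SR can serve as a
    similarity measure between states for affinity propagation."""
    n = P_pi.shape[0]
    return np.linalg.inv(np.eye(n) - gamma * P_pi)
```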
(II) Determining the Alignment Scoring System. Alignment algorithms distinguish similar sequences from dissimilar sequences using a scoring system. A scoring matrix S has entries s_{i,j} that give the score for aligning event i with j. The MSA score S_MSA of a multiple sequence alignment is the sum of all pairwise scores: S_MSA = ∑_{i,j, i<j} ∑_{t=0}^{L} s_{x_{i,t}, x_{j,t}}, where x_{i,t} means that event x_{i,t} is at position t for sequence τ_i = e_{i,0:T} in the alignment, analogously for x_{j,t} and the sequence τ_j = e_{j,0:T}, and L is the alignment length. Note that L ≥ T and x_{i,t} ≠ e_{i,t}, since gaps are present in the alignment. In the alignment, events should have the same probability of being aligned as they would have if we knew the strategy and aligned demonstrations accordingly. The theory of high scoring segments gives a scoring scheme with these alignment probabilities (Karlin & Altschul, 1990; Karlin et al., 1990; Altschul et al., 1990). Event i is observed with probability p_i in the demonstrations, therefore a random alignment aligns event i with j with probability p_i p_j. An alignment algorithm maximizes the MSA score S_MSA and, thereby, aligns events i and j with probability q_{ij} for demonstrations. High values of q_{ij} mean that the MSA often aligns events i and j in the demonstrations using the scoring matrix S with entries s_{i,j}. According to Theorem 2 and Equation [3] in Karlin & Altschul (1990), asymptotically with the sequence length, we have s_{i,j} = ln(q_{ij}/(p_i p_j))/λ*, where λ* is the unique positive root of ∑_{i=1}^{n} ∑_{j=1}^{n} p_i p_j exp(λ s_{i,j}) = 1 (Equation [4] in Karlin & Altschul (1990)). We can now choose a desired probability q_{ij} and then compute the scoring matrix S with entries s_{i,j}. High values of q_{ij} should indicate relevant events for the strategy. A priori, we only know that a relevant event should be aligned to itself, while we do not know which events are relevant. Therefore we set q_{ij} to large values for every i = j and to low values for i ≠ j. Concretely, we set q_{ij} = p_i − ε for i = j and q_{ij} = ε/(n−1) for i ≠ j, where n is the number of different possible events and ε is a small constant. Events with smaller p_i receive a higher score s_{i,i} when aligned to themselves since this self-match is less often observed when randomly matching events (p_i p_i is the probability of a random self-match). Any prior knowledge about events should be incorporated into q_{ij}.
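The sketch below illustrates the Karlin–Altschul construction: scores are log-odds of the target alignment probabilities q_{ij} against the background p_i p_j, and λ* is recovered numerically as the positive root of Equation [4]; the event distribution in the usage example is made up for illustration.

```python
import numpy as np
from scipy.optimize import brentq

def karlin_altschul_scores(p, q):
    """Scores s_ij = ln(q_ij / (p_i p_j)); by construction lambda* = 1 for them."""
    return np.log(q / np.outer(p, p))

def lambda_star(p, S):
    """Unique positive root of sum_ij p_i p_j exp(lambda * S_ij) = 1.
    Requires a negative expected score (standard Karlin-Altschul assumption)."""
    f = lambda lam: float(np.sum(np.outer(p, p) * np.exp(lam * S)) - 1.0)
    hi = 1e-3
    while f(hi) < 0:          # expand the bracket until the root is enclosed
        hi *= 2.0
    return brentq(f, 1e-12, hi)

# Made-up example with three events: target probabilities favour self-matches.
p = np.array([0.6, 0.3, 0.1])
eps = 0.02
q = np.diag(p - eps) + (eps / 2) * (1.0 - np.eye(3))
S = karlin_altschul_scores(p, q)
print(lambda_star(p, S))      # ~1.0, since the scores are already log-odds
```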
(III) Multiple Sequence Alignment (MSA). MSA first produces pairwise alignments between all demonstrations. Then, a guiding tree is produced via agglomerative hierarchical clustering of the sequences according to their pairwise alignment scores. Demonstrations which follow the same strategy appear in the same cluster of the guiding tree. Each cluster is aligned separately via MSA to address different strategies. However, if the demonstrations do not form a cluster, then the alignment will fail. MSA methods like ClustalW (Thompson et al., 1994) or MUSCLE (Edgar, 2004) can be used.
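As an illustration of the guiding-tree step, the sketch below builds an agglomerative (UPGMA-style average-linkage) clustering from a matrix of pairwise alignment scores. Converting scores to distances by subtraction from the maximum, symmetrizing, and cutting the tree at a fixed number of clusters are illustrative assumptions.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def guide_tree(pairwise_scores, n_clusters=2):
    """Guiding tree from pairwise alignment scores (higher = more similar)."""
    scores = np.asarray(pairwise_scores, dtype=float)
    dist = scores.max() - scores          # similarities -> non-negative distances
    np.fill_diagonal(dist, 0.0)
    dist = (dist + dist.T) / 2.0          # symmetrize for scipy
    tree = linkage(squareform(dist, checks=False), method="average")  # UPGMA-style
    clusters = fcluster(tree, t=n_clusters, criterion="maxclust")
    return tree, clusters                 # each cluster is aligned separately by MSA

# Example: 4 demonstrations; demos 0/1 and 2/3 follow two different strategies.
scores = np.array([[9.0, 8.0, 1.0, 1.5],
                   [8.0, 9.0, 1.2, 1.0],
                   [1.0, 1.2, 9.0, 7.5],
                   [1.5, 1.0, 7.5, 9.0]])
_, clusters = guide_tree(scores)
print(clusters)  # e.g. [1 1 2 2]
```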
(IV) Position-Specific Scoring Matrix (PSSM) and Profile. From the final alignment, we construct a) an MSA profile (the column-wise event frequencies q_{i,t}) and b) a PSSM (Stormo et al., 1982), which is used for aligning new sequences to the profile of the MSA. To compute the PSSM (the column-wise scores s_{i,t}), we apply Theorem 2 and Equation [3] in Karlin & Altschul (1990). Event i is observed with probability p_i in the data. For each position t in the alignment, we compute q_{i,t}, the frequency of event i at position t. The PSSM entries are s_{i,t} = ln(q_{i,t}/p_i)/λ*_t, where λ*_t is the unique positive root of ∑_{i=1}^{n} p_i exp(λ s_{i,t}) = 1 (Equation [1] in Karlin & Altschul (1990)). If we align a new sequence that follows the underlying strategy (a new demonstration) to the profile model, event i is aligned to position t in the profile with probability q_{i,t}.
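A minimal sketch of the profile/PSSM construction, assuming the MSA is given as an integer matrix of event indices with -1 marking gaps. The gap handling, the pseudocount, and fixing λ_t = 1 (as above, the Karlin-Altschul condition only fixes the scale when the column frequencies sum to one) are illustrative assumptions.

```python
import numpy as np

GAP = -1

def profile_and_pssm(alignment, p, pseudocount=1e-3, lam=1.0):
    """alignment: (num_sequences, L) int array of event indices, GAP for gaps.
    p: background event probabilities estimated from the demonstrations.
    Returns column-wise frequencies q[i, t] and PSSM scores s[i, t]."""
    p = np.asarray(p, dtype=float)
    n, L = len(p), alignment.shape[1]
    q = np.zeros((n, L))
    for t in range(L):
        column = alignment[:, t]
        column = column[column != GAP]
        counts = np.bincount(column, minlength=n) + pseudocount
        q[:, t] = counts / counts.sum()
    s = np.log(q / p[:, None]) / lam   # s_{i,t} = ln(q_{i,t} / p_i) / lambda_t
    return q, s

def profile_score(aligned_events, s):
    """Score of a sequence that is already aligned to the profile columns."""
    return sum(s[e, t] for t, e in enumerate(aligned_events) if e != GAP)
```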
(V) Reward Redistribution. The reward redistribution is based on the profile model. A sequence τ = e_{0:T} (e_t is the event at position t) is aligned to the profile, which gives the score S(τ) = ∑_{t=0}^{L} s_{x_t,t}. Here, s_{i,t} is the alignment score for event i at position t, and x_t is the event of τ at position t of the alignment. L is the profile length, where L ≥ T and x_t may differ from e_t because of gaps in the alignment. If τ_t = e_{0:t} is the prefix sequence of τ of length t + 1, then the reward redistribution R_{t+1} for 0 ≤ t ≤ T is

R_{t+1} = (S(τ_t) − S(τ_{t−1})) C = g((s, a)_{0:t}) − g((s, a)_{0:t−1}),    R_{T+2} = G̃_0 − ∑_{t=0}^{T} R_{t+1},    (24)

where C = E_demo[G̃_0] / E_demo[∑_{t=0}^{T} S(τ_t) − S(τ_{t−1})], G̃_0 = ∑_{t=0}^{T} R̃_{t+1} is the original return of the sequence τ, and S(τ_{−1}) = 0. E_demo is the expectation over demonstrations, and C scales R_{t+1} to the range of G̃_0. R_{T+2} is the correction of the redistributed reward (Arjona-Medina et al., 2019), with zero expectation for demonstrations: E_demo[R_{T+2}] = 0. Since τ_t = e_{0:t} and e_t = f(s_t, a_t), we can set g((s, a)_{0:t}) = S(τ_t) C. We ensure strict return equivalence, since G_0 = ∑_{t=0}^{T+1} R_{t+1} = G̃_0. The redistributed reward depends only on the past, that is, R_{t+1} = h((s, a)_{0:t}). For computational efficiency, the alignment of τ_{t−1} can be extended to one for τ_t, in the same way that exact matches are extended to high-scoring sequence pairs by the BLAST algorithm (Altschul et al., 1990; 1997).
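A sketch of this redistribution step under the simplifying assumption that the prefix score S(τ_t) is obtained by scoring the first events of an already computed alignment of τ to the profile; in the full method the prefixes are re-aligned or the alignment is extended BLAST-style, so the code below is an illustration of Equation (24), not the exact implementation.

```python
import numpy as np

def prefix_scores(aligned_events, s):
    """S(tau_0), ..., S(tau_L) for a sequence already placed in profile columns;
    aligned_events[t] is the event index at profile position t, None for a gap."""
    scores, total = [], 0.0
    for t, e in enumerate(aligned_events):
        if e is not None:
            total += s[e, t]
        scores.append(total)
    return np.array(scores)

def redistribute(prefix_S, original_return, C):
    """R_{t+1} = (S(tau_t) - S(tau_{t-1})) * C with S(tau_{-1}) = 0, plus the
    correction R_{T+2} = G_0 - sum_t R_{t+1} (strict return equivalence)."""
    diffs = np.diff(np.concatenate(([0.0], prefix_S)))
    R = diffs * C
    correction = original_return - R.sum()
    return np.concatenate((R, [correction]))

def estimate_C(returns, final_scores):
    """C = E_demo[G_0] / E_demo[sum_t S(tau_t) - S(tau_{t-1})]; the sum
    telescopes to S(tau_T), so final_scores holds S(tau_T) per demonstration."""
    return np.mean(returns) / np.mean(final_scores)
```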
Sub-tasks. The reward redistribution identifies sub-tasks, which are alignment positions with high redistributed reward. It also determines the terminal states and automatically assigns reward for solving the sub-tasks. However, reward redistribution and Align-RUDDER cannot guarantee that the reward is Markov. For redistributed reward that is Markov, the option framework (Sutton et al., 1999), the MAXQ framework (Dietterich, 2000), or recursive composition of option models (Silver & Ciosek, 2012) can be used as subsequent approaches to hierarchical reinforcement learning.
A.4 SEQUENCE ALIGNMENT
In bioinformatics, sequence alignment identifies regions of significant similarity among different biological sequences to establish evolutionary relationships between those sequences. In 1970, Needleman and Wunsch proposed a global alignment method based on dynamic programming (Needleman & Wunsch, 1970). This approach ensures the best possible alignment given a substitution matrix, such as PAM (Dayhoff, 1978) or BLOSUM (Henikoff & Henikoff, 1992), and other parameters to penalize gaps in the alignment. The method of Needleman and Wunsch has O(mn) complexity both in memory and time, which can be prohibitive for long sequences like genomes. An optimization of this method by Hirschberg (1975) reduces memory to O(m + n), but still requires O(mn) time.
Later, Smith and Waterman developed a local alignment method for sequences (Smith & Waterman, 1981). It is a variation of Needleman and Wunsch's method, keeping the substitution matrix and the gap-scoring scheme but setting cells with negative scores in the similarity matrix to zero. The complexity of this algorithm is O(n²m). Gotoh published an optimization of this method that runs in O(mn) time (Gotoh, 1982). A compact sketch of both dynamic-programming recursions is given after the comparison below.
The main difference between the two methods is the following:
• The global alignment method by Needleman and Wunsch aligns the sequences while fixing the first and the last positions of both sequences. It attempts to align every symbol in the sequences, allowing some gaps, but the main purpose is to obtain a global alignment. This is especially useful when the two sequences are highly similar. For instance:
ATCGGATCGACTGGCTAGATCATCGCTGG CGAGCATC-ACTGTCT-GATCGACCTTAG * *** **** ** **** * * *
• As an alternative to global methods, the local method of Smith and Waterman aligns the sequences with a higher degree of freedom, allowing the alignment to start or end with gaps. This is extremely useful when the two sequences are substantially dissimilar overall but are suspected of having a highly related subregion.
ATCAAGGAGATCATCGCTGGACTGAGTGGCT----ACGTGGTATGT ATC----CGATCATCGCTGG-CTGATCGACCTTCTACGT------*** ************ **** * * ****
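To make the two recursions concrete, here is a compact sketch of Needleman-Wunsch scoring with linear gap penalties, together with the Smith-Waterman modification (clamping negative cells to zero and returning the best cell instead of the corner). The toy substitution scores and gap penalty are illustrative assumptions.

```python
import numpy as np

def alignment_score(a, b, sub, gap=-1.0, local=False):
    """Needleman-Wunsch (global) or Smith-Waterman (local) DP score.
    a, b: sequences of symbols; sub[x][y]: substitution score."""
    m, n = len(a), len(b)
    F = np.zeros((m + 1, n + 1))
    if not local:
        F[:, 0] = gap * np.arange(m + 1)   # global: leading gaps are penalized
        F[0, :] = gap * np.arange(n + 1)
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            best = max(F[i - 1, j - 1] + sub[a[i - 1]][b[j - 1]],  # (mis)match
                       F[i - 1, j] + gap,                          # gap in b
                       F[i, j - 1] + gap)                          # gap in a
            F[i, j] = max(best, 0.0) if local else best            # SW clamps at 0
    return F.max() if local else F[m, n]

# Toy example: +2 for a match, -1 for a mismatch.
sub = {x: {y: (2.0 if x == y else -1.0) for y in "ACGT"} for x in "ACGT"}
print(alignment_score("ATCGG", "ATCG", sub))               # global score
print(alignment_score("ATCGG", "ATCG", sub, local=True))   # local score
```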
A.4.0.1 Multiple Sequence Alignment algorithms. The sequence alignment algorithms by Needleman and Wunsch and Smith and Waterman are limited to aligning two sequences. The approaches for generalizing these algorithms to multiple sequences can be classified into four categories:
• Exact methods (Wang & Jiang, 1994).
• Progressive methods: ClustalW (Thompson et al., 1994), Clustal Omega (Sievers et al., 2014), T-Coffee (Notredame et al., 2000).
• Iterative and search algorithms: DIALIGN (Morgenstern, 2004), MultiAlign (Corpet, 1988).
• Local methods: eMOTIF (Mccammon & Wolynes, 1998), PROSITE (Bairoch & Bucher, 1994).
For more details, see Sequence Comparison: Theory and Methods (Chao & Zhang, 2009).
In our experiments, we use ClustalW from Biopython (Cock et al., 2009) with the following parameters:
clustalw2 -ALIGN -CLUSTERING=UPGMA -NEGATIVE \
    -INFILE={infile} -OUTFILE={outfile} \
    -PWMATRIX={scores} -PWGAPOPEN=0 -PWGAPEXT=0 \
    -MATRIX={scores} -GAPOPEN=0 -GAPEXT=0 -CASE=UPPER \
    -NOPGAP -NOHGAP -MAXDIV=0 -ENDGAPS -NOVGAP \
    -NEWTREE={outputtree} -TYPE=PROTEIN -OUTPUT=GDE
where the PWMATRIX and MATRIX are computed according to step (II) in Sec. 3 of the main paper.
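For completeness, a small sketch of how this invocation can be issued from Python via subprocess; the file-name placeholders are the same as above and are assumptions about the surrounding pipeline.

```python
import subprocess

def run_clustalw(infile, outfile, scores, outputtree):
    """Run the ClustalW2 command above with the custom scoring matrix."""
    cmd = (
        f"clustalw2 -ALIGN -CLUSTERING=UPGMA -NEGATIVE "
        f"-INFILE={infile} -OUTFILE={outfile} "
        f"-PWMATRIX={scores} -PWGAPOPEN=0 -PWGAPEXT=0 "
        f"-MATRIX={scores} -GAPOPEN=0 -GAPEXT=0 -CASE=UPPER "
        f"-NOPGAP -NOHGAP -MAXDIV=0 -ENDGAPS -NOVGAP "
        f"-NEWTREE={outputtree} -TYPE=PROTEIN -OUTPUT=GDE"
    )
    subprocess.run(cmd, shell=True, check=True)
```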
A.5 EXTENDED RELATED WORK
Align-RUDDER allows identifying sub-goals and sub-tasks; therefore, it is related to hierarchical reinforcement learning (HRL) approaches like the option framework (Sutton et al., 1999),

1. What is the focus of the paper regarding Sequence Alignment technique and its contribution to redistributing rewards?
2. What are the strengths of the proposed approach, particularly in its application in Rooms and MineRL environments?
3. What are the weaknesses of the paper, especially regarding its comparisons with other works and explanations of key concepts?
4. How can the effectiveness of the new reward redistribution technique be better evaluated and validated?
5. What are some suggestions for improving the presentation of the paper, including figures and explanations?
Summary Of The Paper
The paper uses a sequence alignment technique to redistribute rewards, to a similar effect as the LSTM in RUDDER. The hierarchical agent is trained with behavioral cloning and fine-tuned with RL (tabular in the Rooms environment / PPO in MineRL). Tasks are automatically divided into sub-tasks, and specialized agents are used for the sub-tasks. The main contributions seem to be: a) introduction of the sequence alignment technique, which works well with few expert demonstrations; b) experimental demonstration in Rooms / MineRL.
Review
While the results in Minecraft are presented as impressive, the paper contains several shortcomings from a strict scientific point of view. The key question, whether the new method works better than the original RUDDER and why, is poorly addressed. The comparison is made only on two different GridWorld problems, and it is unclear whether the benefit comes from the new reward redistribution technique or from the division into sub-tasks. An ablation study and experiments on different domains, perhaps those used in the original RUDDER paper, are required to find the answer.
Moreover, some key concepts of the paper are poorly explained. The section "Defining Events" does not explain how the clustering is exactly done. A function "f(s, a)" remains undefined. I do not agree with the authors' claim that the "Q-function of an optimal policy resembles a step function" and "is mostly constant". The presentation of the paper can be improved: it is not clear why the biological sequences in Fig. 2 (left) are included, and Fig. 6 includes letters on the x-axis without any explanation of their meaning. Grammar and typography can be improved.